Deepak Menghani edited section Formatting_yo.tex about 8 years ago

Commit id: 9a3dbd8c9e337eca8c3d307f4077800fa8343018

We have been able to construct simple gradients of a neuron's activation with respect to the input image. Gradients of neurons at different layers of the VGG net are shown in Figs.~\ref{fig:g1}, \ref{fig:g5}, \ref{fig:g10} and \ref{fig:g16}.

\begin{figure}
\caption{Gradients of activation of a neuron at layer 1 of VGG net}
\label{fig:g1}
\end{figure}

\begin{figure}
\caption{Gradients of activation of a neuron at layer 5 of VGG net}
\label{fig:g5}
\end{figure}

\begin{figure}
\caption{Gradients of activation of a neuron at layer 10 of VGG net}
\label{fig:g10}
\end{figure}

\begin{figure}
\caption{Gradients of activation of a neuron at layer 16 of VGG net}
\label{fig:g16}
\end{figure}

From these results it appears that neurons in the early layers do not contain information rich enough for us to sample and work backwards from. Neurons in deeper layers represent more abstract, macro-level features of the image, e.g.\ a neuron that fires when the input image contains an ear. We therefore think it would be ideal to reconstruct the image from sampled activations of some intermediate layer, as we hypothesize that the information useful for reconstruction is maximal somewhere between the first and last layers. We plan to run experiments to determine which layer works best with our reconstruction algorithm.
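As a minimal sketch of the kind of computation involved (not our actual VGG pipeline; the tiny two-layer network, its sizes, and the chosen neuron index are illustrative assumptions), the gradient of a single neuron's activation with respect to the input can be obtained by backpropagating from that neuron alone, which we verify here against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" and weights; sizes are illustrative, not VGG's.
x = rng.standard_normal(8)           # flattened input image
W1 = rng.standard_normal((16, 8))    # layer-1 weights
W2 = rng.standard_normal((4, 16))    # layer-2 weights

def forward(x):
    z1 = W1 @ x
    a1 = np.maximum(z1, 0.0)         # ReLU
    z2 = W2 @ a1                     # pre-activations of layer 2
    return z1, a1, z2

def neuron_gradient(x, neuron):
    """Gradient d z2[neuron] / d x, by manual backpropagation."""
    z1, a1, z2 = forward(x)
    g_a1 = W2[neuron]                # d z2[neuron] / d a1
    g_z1 = g_a1 * (z1 > 0)           # ReLU gates the gradient
    return W1.T @ g_z1               # d z2[neuron] / d x

grad = neuron_gradient(x, neuron=2)

# Sanity check against central finite differences.
eps = 1e-6
num = np.zeros_like(x)
for i in range(x.size):
    xp = x.copy(); xp[i] += eps
    xm = x.copy(); xm[i] -= eps
    num[i] = (forward(xp)[2][2] - forward(xm)[2][2]) / (2 * eps)

print(np.allclose(grad, num, atol=1e-4))
```

In the real setting an autodiff framework computes the same quantity by setting the upstream gradient to one at the chosen neuron and zero elsewhere, then backpropagating to the input image.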