Deepak Menghani edited sectionFormatting_yo.tex  about 8 years ago

Commit id: da6ab44e252f0992c0caa764f9e2b677d4bc9bb0


Our approach to the Stochastic Deconvolution problem (described in Section 1), which aims to recreate realistic images backwards from a class label, can be broken down into three main steps:

\begin{enumerate}
\item \textbf{Calculate gradients and deconvolution for a neuron's activations.} So far, we have been able to calculate the gradients of a downstream neuron's activations with respect to the input image, visualizing the sensitivity of the neuron's output to the input pixels. We are exploring the Keras and Theano packages to modify the standard gradient, using approaches similar to guided backpropagation, in order to better represent the effect of different image regions on the activations of neurons in a given layer.

\item \textbf{Statistical sampling.} Given an input class and a set of images belonging to it, we want to summarize the statistics of the activations at each layer using techniques such as PCA. We plan to save the statistics of the neural activations at a chosen layer, for a given image class, to disk. We will then sample a set of activations for that layer and work backwards from it to reconstruct a corresponding input image. Since we draw a random set of activations each time we sample from the distribution, we expect to obtain a different image on each draw, each representative of the class.

\item \textbf{Image reconstruction.} Given a set of activations of neurons at the chosen layer, we aim to reconstruct an image representative of the class. We will work backwards from the activations sampled in Step 2, using the gradients calculated in Step 1 to optimize an objective function over the image. We will also add other terms to the objective function to encourage realistic images, e.g.\ sharpness of edges, colour composition, etc.
\end{enumerate}

\subsection{Preliminary Results}

We have been able to compute simple gradients of a neuron with respect to the input image. Gradients of neurons at different layers of VGG net are shown below in Fig[].

We
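The gradient modification in Step 1 can be illustrated with a minimal numpy sketch of the guided-backpropagation rule for a ReLU unit; the function name and toy arrays here are our own illustration, not the Keras/Theano implementation:

```python
import numpy as np

def guided_backprop_grad(grad_out, pre_activation):
    """Guided-backprop-style gradient through a ReLU: propagate the
    gradient only where the forward pre-activation was positive AND
    the incoming gradient itself is positive."""
    forward_mask = (pre_activation > 0).astype(grad_out.dtype)
    backward_mask = (grad_out > 0).astype(grad_out.dtype)
    return grad_out * forward_mask * backward_mask

# toy example: only positions with positive input and positive
# upstream gradient let the signal through
x = np.array([-1.0, 0.5, 2.0, -0.3])      # pre-activations
grad = np.array([1.0, -0.5, 2.0, 1.0])    # upstream gradient
g = guided_backprop_grad(grad, x)          # -> [0., 0., 2., 0.]
```

In the full pipeline this masking would be applied at every ReLU during the backward pass, rather than to a single toy vector.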
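Step 2 (summarize layer statistics with PCA, save them, and sample new activations) can be sketched as follows; the layer size, number of images, component count, and file name are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# placeholder activations: 200 images x 64 units at the chosen layer
acts = rng.normal(size=(200, 64))

# summarize: per-unit mean plus principal components via SVD
mean = acts.mean(axis=0)
U, S, Vt = np.linalg.svd(acts - mean, full_matrices=False)
k = 8                                    # keep the top-k components
std = S[:k] / np.sqrt(len(acts) - 1)     # std dev along each component

# save the class statistics to disk for later sampling
np.savez("layer_stats.npz", mean=mean, components=Vt[:k], std=std)

# draw a Gaussian sample in PC space and map back to activation space
z = rng.normal(size=k) * std
sampled_act = mean + z @ Vt[:k]
```

Each call to the sampling lines yields a different activation vector, which is what lets the reconstruction step produce a different class-representative image each time.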
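Step 3 can be sketched as gradient-based optimization of the image toward the sampled activations. Since the real network is not shown here, a random linear map stands in for the chosen layer, and a simple $\ell_2$ penalty stands in for the realism terms (edge sharpness, colour composition); both are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in for the chosen layer: a fixed random linear map
# (in the real pipeline this is the conv net up to that layer)
W = rng.normal(size=(16, 16)) / 4.0
target = rng.normal(size=16)     # activations sampled in Step 2

x = np.zeros(16)                 # flattened "image" being reconstructed
lam = 0.01                       # weight of the naturalness prior
for _ in range(500):
    acts = W @ x
    # gradient of 0.5*||W x - target||^2 + lam*||x||^2 w.r.t. the image
    grad = W.T @ (acts - target) + 2 * lam * x
    x -= 0.1 * grad              # plain gradient-descent step

loss = 0.5 * np.sum((W @ x - target) ** 2) + lam * np.sum(x ** 2)
```

With the real network, the gradient line is replaced by the (guided) backpropagation of Step 1, and the prior term by the realism components of the objective.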