\section{Introduction}

In the deep learning literature there are many methods for producing images that correspond to particular classes or to specific neurons in a CNN [Zeiler]. Two main families of methods exist. Deconvolution methods take an input image and highlight the pixels that activate a neuron of interest; they therefore require an image to be supplied. Other methods instead maximize a class score or a neuron's activation with respect to the pixel intensities themselves [Deep Dream]. These methods, however, only work well for lower-level features, i.e., for shallower neurons. At higher layers, neurons represent more abstract concepts such as ``dog'', so an ``optimal'' image may contain ten dogs scattered across it in different orientations, together with tails and ears that are not actually attached to any dog. We propose several potential methods that do not rely on an input image and that can generate realistic renditions of the abstract concepts encoded by neurons ``deep'' in the CNN.

The key reason abstract concepts such as ``dog'' cannot be generated by the above method is that many different features, in many different locations, can fire the ``dog'' neuron. Real dog images, however, do not have dogs all over the sky, or gigantic ears that exist by themselves without an attached body. Since, intuitively, shallower neurons correspond to smaller features and higher-level neurons to combinations of shallower features, a natural way to avoid generating unrealistic images is to gather statistics on the joint distribution of the shallower features. These statistics could be used in a variety of ways. For example, we could run the Deep Dream method and then inspect the activations that the resulting image generates; if the shallow-feature activations appear to be an outlier under the joint distribution, we can decide to reduce the activations of certain neurons.
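The activation-maximization approach described above can be sketched as gradient ascent on the pixels. This is a minimal illustration only: the tiny untrained network, the choice of layer and channel, and the step size are all our assumptions, not the setup referenced in the text.

```python
import torch
import torch.nn as nn

def maximize_activation(steps=50, lr=0.1, unit=5, seed=0):
    """Gradient ascent on input pixels to maximize one channel's mean
    activation. The two-layer untrained CNN is a stand-in; a real
    experiment would use a trained network and a chosen deep neuron."""
    torch.manual_seed(seed)
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    )
    model.eval()
    # Start from small random noise instead of a real input image.
    x = (0.1 * torch.randn(1, 3, 32, 32)).requires_grad_(True)
    start = model(x)[0, unit].mean().item()
    for _ in range(steps):
        if x.grad is not None:
            x.grad.zero_()
        act = model(x)[0, unit].mean()  # objective: mean channel activation
        act.backward()
        with torch.no_grad():
            # Normalized ascent step on the pixels themselves.
            x += lr * x.grad / (x.grad.norm() + 1e-8)
    end = model(x)[0, unit].mean().item()
    return start, end
```

Run on a trained network with a deep neuron as the objective, exactly this procedure produces the ``many dogs, floating ears'' failure mode discussed above, since any pixel pattern that fires the neuron is rewarded regardless of global realism.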
Once those neurons have been identified, we can backpropagate from any one of these unneeded neurons and take gradient steps that increase the loss rather than minimize it, thereby suppressing the unwanted activation. This can be seen as a method combining both deconvolution and Deep Dream.
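One concrete (and entirely illustrative) instantiation of the statistics-gathering and neuron-selection steps: fit per-neuron means and a covariance to shallow activations collected from natural images, score a generated image's activations by Mahalanobis distance, and flag the neurons that deviate most as candidates whose activations should be pushed back down. The Gaussian model, the 3-sigma threshold, and the function names below are our assumptions, not a specification from the text.

```python
import numpy as np

def fit_shallow_stats(acts):
    """Fit per-neuron mean/std and an inverse covariance to shallow-layer
    activations gathered over natural images; acts: (n_images, n_neurons)."""
    mu = acts.mean(axis=0)
    std = acts.std(axis=0)
    cov = np.cov(acts, rowvar=False) + 1e-6 * np.eye(acts.shape[1])
    return mu, std, np.linalg.inv(cov)

def mahalanobis(a, mu, cov_inv):
    """Outlier score of one activation vector under the fitted Gaussian:
    large values mean the generated image's shallow features are atypical."""
    d = a - mu
    return float(np.sqrt(d @ cov_inv @ d))

def neurons_to_suppress(a, mu, std, k=3.0):
    """Indices of neurons firing more than k std above the natural-image
    mean; these are the ones whose loss we would then *increase* via
    backpropagation to reduce their activations."""
    z = (a - mu) / (std + 1e-8)
    return np.where(z > k)[0]
```

For example, if natural-image statistics are roughly standard normal and a Deep Dream image drives one shallow neuron to ten standard deviations above its mean, that neuron is flagged and its activation becomes the target of the loss-increasing gradient steps described above.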