Bernal Jimenez edited section_Descending_the_Alternate_Sparse__.tex about 8 years ago

Commit id: 27b76a1b20f4107461cf26789214ff391c0f6dcf

\end{equation}

The first term is the same linear filtering term as in SAILnet. The second is the leak term, with an additional scaling by the squared length of the dictionary element. The dictionary is commonly normalized to unit length to prevent it from growing without bound, although Oja's rule does not require this; empirically, the mean norm is of order one, but it can vary by a small integer factor and has non-zero variance. It is an interesting prediction that the leakiness of a neuron's membrane should scale with the overall strength of its synapses. The $-\theta$ term would become a spike threshold in a LIF version of this analog equation. Finally, the last term is twice the SAILnet value because $W$ is symmetric.

Without a LIF circuit, the analog optimization problem is thus solved by
\begin{equation}
\frac{da_i}{dt} = -\frac{\partial E(a|X; \Phi, W, \theta)}{\partial a_i}.
\end{equation}

Given this analog optimization problem, there are a number of ways to instantiate a LIF circuit that approximates inference. The negative gradient can be interpreted as the driving force in the circuit, where the chain rule relates the membrane potential $u_i$ to the rate $a_i$:
\begin{equation}
\frac{du_i}{dt} = -\frac{\partial E(a|X; \Phi, W, \theta)}{\partial u_i} = -\frac{\partial E(a|X; \Phi, W, \theta)}{\partial a_i}\,\frac{\partial a_i}{\partial u_i}.
\end{equation}
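To make the analog dynamics concrete, the following is a minimal numpy sketch of Euler integration of $da_i/dt = -\partial E/\partial a_i$, with the gradient assembled from the terms read off above (linear filtering, norm-scaled leak, threshold, and the lateral term $2Wa$). All names (\texttt{Phi}, \texttt{W}, \texttt{theta}, \texttt{dt}), the dimensions, and the choice of $W$ as half the off-diagonal Gram matrix of the dictionary are illustrative assumptions, not specifics from the text:

\begin{verbatim}
import numpy as np

# Illustrative sizes and parameters (assumptions, not from the text).
rng = np.random.default_rng(0)
n_pixels, n_neurons = 64, 128
Phi = rng.standard_normal((n_pixels, n_neurons))
Phi /= np.linalg.norm(Phi, axis=0)      # unit-norm dictionary, as discussed
theta = 0.1                             # sparsity threshold
X = rng.standard_normal(n_pixels)       # input, e.g. an image patch

# Symmetric lateral weights; half the off-diagonal Gram matrix is one
# illustrative choice that makes 2*W*a the usual overlap inhibition.
G = Phi.T @ Phi
W = 0.5 * (G - np.diag(np.diag(G)))
sq_norms = np.sum(Phi**2, axis=0)       # ||Phi_i||^2 scales the leak

def neg_grad(a):
    """-dE/da_i: filtering - scaled leak - threshold - lateral term."""
    return Phi.T @ X - sq_norms * a - theta - 2.0 * (W @ a)

# Euler integration of da/dt = -dE/da, rates kept non-negative
# (assuming non-negative firing rates, as in SAILnet).
a = np.zeros(n_neurons)
dt = 0.01
for _ in range(2000):
    a = np.maximum(a + dt * neg_grad(a), 0.0)
\end{verbatim}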
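One way to realize the chain rule in the last equation is to tie the rate to the membrane potential through a pointwise nonlinearity $a_i = f(u_i)$, so that $\partial a_i/\partial u_i = f'(u_i)$. Continuing the sketch above, with a softplus $f$ chosen purely for illustration (the text does not fix $f$):

\begin{verbatim}
def f(u):
    # Smooth rectifier (softplus) mapping potential to rate.
    return np.logaddexp(0.0, u)

def f_prime(u):
    # Derivative of softplus: the logistic sigmoid.
    return 1.0 / (1.0 + np.exp(-u))

# Euler integration of du/dt = -(dE/da) * (da/du), with a = f(u).
u = np.zeros(n_neurons)
for _ in range(2000):
    a = f(u)
    u += dt * neg_grad(a) * f_prime(u)
\end{verbatim}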