Bernal Jimenez edited section_Descending_the_Alternate_Sparse__.tex  about 8 years ago

Commit id: dabd86cf345dfb6fc3828cd469410ae0b45c6e98

\begin{equation}\label{eq:1}
E(X, a; \Phi, W, \theta) = \frac{1}{2}\sum_i\left(X_i-\sum_j\Phi_{ij}a_j\right)^2 + \sum_i\theta_i(a_i-p) + \sum_{ij}W_{ij}(a_ia_j-p^2)
\end{equation}
The original sparse coding reconstruction term in Eq.~\ref{eq:1} leads to both non-local learning rules for $\Phi$ and a non-local inference circuit. It is therefore approximated by an objective that leads to Oja's learning rule:
\begin{equation}\label{local}
E(X, a; \Phi, W, \theta) = \frac{1}{2}\sum_{ij}(X_i-\Phi_{ij}a_j)^2 + \sum_i\theta_i(a_i-p) + \sum_{ij}W_{ij}(a_ia_j-p^2).
\end{equation}
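As a brief sketch of why this approximation makes the update local (writing $\eta$ for a generic learning-rate symbol introduced only for this illustration), one gradient-descent step on the reconstruction term of Eq.~\ref{local} with respect to a single weight $\Phi_{ij}$ gives
\[
\Delta\Phi_{ij} = -\eta\,\frac{\partial E}{\partial \Phi_{ij}} = \eta\,(X_i-\Phi_{ij}a_j)\,a_j = \eta\,(a_j X_i - a_j^2\,\Phi_{ij}),
\]
which depends only on the input $X_i$, the activity $a_j$, and the weight itself, i.e.\ an Oja-style rule. The same step on the original term in Eq.~\ref{eq:1} instead gives $\Delta\Phi_{ij} = \eta\,(X_i-\sum_k\Phi_{ik}a_k)\,a_j$, which requires the full reconstruction $\sum_k\Phi_{ik}a_k$ at every synapse and is therefore non-local.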