\section{Descending the Alternate Sparse Coding Objective}\label{descend}
The original SAILnet paper \cite{Zylberberg_2011} retains the reconstruction term of the sparse coding objective but, instead of L1 regularization, encodes the sparse prior as a pair of constraints: homeostasis of each neuron's firing rate and decorrelation of the pairwise statistics:
\begin{equation}\label{eq:1}
E(X, a; \Phi, W, \theta) = \frac{1}{2}\sum_i(X_i-\sum_j\Phi_{ij}a_j)^2 + \sum_i\theta_i(a_i-p) + \sum_{ij}W_{ij}(a_ia_j-p^2)
\end{equation}
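As a concrete check, the energy above can be evaluated directly. The sketch below uses NumPy with hypothetical sizes, random data, and illustrative variable names (none of these are taken from the paper); it computes the reconstruction, homeostasis, and decorrelation terms exactly as written in the equation.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 16, 8            # input dimension, number of neurons (hypothetical)
p = 0.05                # target firing rate

X = rng.normal(size=D)                       # input patch
a = (rng.random(N) < 0.3).astype(float)      # neural activations
Phi = rng.normal(size=(D, N)) / np.sqrt(D)   # feed-forward weights
W = 0.01 * rng.normal(size=(N, N))           # lateral weights
theta = np.full(N, 0.5)                      # firing thresholds

def energy(X, a, Phi, W, theta, p):
    """Constrained objective: reconstruction error plus homeostasis
    and decorrelation penalty terms."""
    recon = 0.5 * np.sum((X - Phi @ a) ** 2)
    homeostasis = np.sum(theta * (a - p))
    decorrelation = np.sum(W * (np.outer(a, a) - p ** 2))
    return recon + homeostasis + decorrelation

print(energy(X, a, Phi, W, theta, p))
```

Note that the reconstruction term here shares one residual $X_i - \sum_j \Phi_{ij} a_j$ across all neurons, which is what makes the resulting learning rules non-local.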
The original sparse coding reconstruction term in Eq.~\ref{eq:1} leads to both non-local learning rules for $\Phi$ and a non-local inference circuit. It is therefore approximated by an objective that yields Oja's learning rule:
\begin{equation}\label{local}
E(X, a; \Phi, W, \theta) = \frac{1}{2}\sum_{ij}(X_i-\Phi_{ij}a_j)^2 + \sum_i\theta_i(a_i-p) + \sum_{ij}W_{ij}(a_ia_j-p^2).
\end{equation}
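To see why this approximation yields Oja's rule, note that each $(i,j)$ pair now has its own residual $X_i - \Phi_{ij} a_j$, so the gradient of the reconstruction term with respect to $\Phi_{ij}$ is $-(X_i - \Phi_{ij} a_j)\,a_j$: a Hebbian term $X_i a_j$ minus an activity-gated decay $\Phi_{ij} a_j^2$. A minimal NumPy sketch (hypothetical sizes and random data) of the local term and its gradient:

```python
import numpy as np

rng = np.random.default_rng(1)
D, N = 16, 8            # input dimension, number of neurons (hypothetical)
X = rng.normal(size=D)
a = (rng.random(N) < 0.3).astype(float)
Phi = rng.normal(size=(D, N)) / np.sqrt(D)

def local_recon(X, a, Phi):
    # Local approximation: each (i, j) pair has its own residual
    # X_i - Phi_ij * a_j, summed over all pairs.
    return 0.5 * np.sum((X[:, None] - Phi * a[None, :]) ** 2)

def grad_Phi(X, a, Phi):
    # dE/dPhi_ij = -(X_i - Phi_ij a_j) a_j
    #            = -X_i a_j + Phi_ij a_j^2  (Hebbian term plus decay),
    # the Oja-like structure; each entry depends only on local quantities.
    return -(X[:, None] - Phi * a[None, :]) * a[None, :]
```

Gradient descent with this update requires only the pre- and post-synaptic activity and the weight itself, which is the locality property the approximation is designed to recover.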