\section{Numerical foundation: compressed sensing for matrices}
In this section we develop a method to calculate the elements of sparse matrices without knowing \textit{a priori} where the non-zero entries are located
in the matrix. This method results from applying the
compressed sensing approach to matrices. Related work has been
presented in the field of compressive principal component
pursuit~\cite{Baraniuk2011,Candes2011,Zhou2010,Wright2013}, which
focuses on reconstructing matrices that are the sum of a low-rank
component and a sparse component. Our work instead outlines a general
procedure for reconstructing any sparse matrix by measuring it in a
different basis.
Suppose we wish to recover an \(N \times N\) matrix \(A\) known to be sparse in a particular orthonormal basis $\{\psi_i\}$ (for simplicity we restrict ourselves to square matrices and orthonormal bases). With no prior knowledge of where the \(S\) non-zero
entries of \(A\) are located, recovering the matrix requires calculating all of its elements.
In a second basis \(\{\phi_i\}\), the matrix \(A\) has a second representation \(B\) given by
\begin{equation}
\label{eq:cob_matrix}
B = PAP^{T}\ ,
\end{equation}
where \(P\) is the orthogonal matrix that transforms any vector from
the basis $\{\psi_i\}$ to $\{\phi_i\}$. Note that in general, \(B\) is not a sparse matrix.
If we regard \(A\) and \(B\) as \(N^2\)-element \emph{vectors}
formed by stacking the columns of the matrices, we note that the change-of-basis
transformation between \(A\) and \(B\) given by eq.~(\ref{eq:cob_matrix}) is \emph{linear}. This fact enables us to use the machinery of compressed sensing for the problem of determining \(A\).
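This linearity can be made explicit with the standard vectorization identity \(\mathrm{vec}(PAP^{T}) = (P \otimes P)\,\mathrm{vec}(A)\). The following sketch (not part of the original derivation; the variable names are illustrative) verifies it numerically for a small random instance:

```python
import numpy as np

# Column-stacking turns the change of basis B = P A P^T into a single
# linear map on the N^2-element vector: vec(B) = (P kron P) vec(A).
rng = np.random.default_rng(0)
N = 4
A = rng.standard_normal((N, N))
P, _ = np.linalg.qr(rng.standard_normal((N, N)))  # a random orthogonal matrix

B = P @ A @ P.T
vecA = A.flatten(order="F")           # stack columns
vecB_direct = B.flatten(order="F")
vecB_linear = np.kron(P, P) @ vecA    # same vector via the Kronecker product

assert np.allclose(vecB_direct, vecB_linear)
```

The \(N^2 \times N^2\) matrix \(P \otimes P\) plays the role of the sensing matrix in the compressed sensing formulation.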
Given the matrix \(P\), or equivalently the basis \(\{\phi_i\}\), compressed sensing allows us to reconstruct the full matrix \(A\) by sampling only \textit{some} of the entries of \(B\). The reconstruction is done by solving the basis pursuit (BP) problem,
\begin{equation}
\label{eq:bpdn}
\min_{A} ||A||_1 \quad \textrm{subject to} \quad (PAP^T)_{ij} = B_{ij}\quad \forall
\ i,j \in W \ ,
\end{equation}
where the 1-norm is considered as a \emph{vector} norm
(\(||A||_1 = \sum_{i,j} \left|A_{ij}\right|\)), and \(W\) is a set of randomly chosen matrix entries.
The size of \(W\), which we call \(M\), is the number of matrix elements of \(B\) that we need to sample, and it determines the quality of the reconstruction. Compressed sensing theory provides a lower bound for \(M\) as a function of \(P\), \(S\), and \(N\).
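As a concrete (and hedged) sketch of the BP step, the \(\ell_1\) minimization can be cast as a linear program by splitting the unknown vector into its positive and negative parts, \(a = u - v\) with \(u, v \ge 0\). The instance below, with an assumed sparse test matrix and sample size, is illustrative only:

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit:  min ||a||_1  s.t.  ((P kron P) a)[W] = vec(B)[W],
# solved as an LP with a = u - v, u, v >= 0, objective sum(u) + sum(v).
rng = np.random.default_rng(1)
N = 4
A = np.zeros((N, N))
A[1, 2], A[3, 0] = 1.5, -2.0                      # S = 2 non-zero entries
P, _ = np.linalg.qr(rng.standard_normal((N, N)))  # random orthogonal basis change
B = P @ A @ P.T

Phi = np.kron(P, P)                               # sensing operator on vec(A)
W = rng.choice(N * N, size=12, replace=False)     # sample M = 12 entries of B
rows, b = Phi[W], B.flatten(order="F")[W]

n = N * N
c = np.ones(2 * n)                                # sum(u) + sum(v) = ||a||_1
A_eq = np.hstack([rows, -rows])                   # rows @ (u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
a_hat = res.x[:n] - res.x[n:]
A_hat = a_hat.reshape((N, N), order="F")
```

Since the true \(A\) is always feasible, the minimizer satisfies \(\|A_{\mathrm{hat}}\|_1 \le \|A\|_1\); when \(M\) is large enough in the sense of the scaling discussed below, the minimizer typically coincides with \(A\) itself.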
One important requirement for compressed sensing is that the basis
$\{\psi_i\}$ for \(A\) and the basis $\{\phi_i\}$ for \(B\) should be
\emph{incoherent}: the mutual coherence
%
\begin{equation}
\label{eq:coherence}
\mu = \sqrt{N} \max_{i,j} \left| \langle \psi_i | \phi_j \rangle \right|
\end{equation}
%
should be as \emph{small} as possible (in general \(\mu\) ranges from
1 to \(\sqrt{N}\)). Intuitively speaking, this incoherence condition
simply means that the change-of-basis matrix \(P\) should thoroughly
scramble the entries of \(A\) to generate \(B\).
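For intuition, a classic maximally incoherent pair is the standard (identity) basis and the normalized Hadamard basis; the small sketch below (an illustration, not taken from the text) evaluates the coherence for this pair:

```python
import numpy as np
from scipy.linalg import hadamard

# Mutual coherence mu = sqrt(N) * max |<psi_i|phi_j>| for the standard
# basis versus the orthonormal Hadamard basis.
N = 4
Psi = np.eye(N)                   # columns psi_i: standard basis
Phi = hadamard(N) / np.sqrt(N)    # columns phi_j: orthonormal Hadamard basis

# Every overlap has magnitude 1/sqrt(N), so mu attains its minimum value 1.
mu = np.sqrt(N) * np.max(np.abs(Psi.T @ Phi))
```

A coherence of 1 corresponds to a change of basis that spreads each entry of \(A\) as evenly as possible over the entries of \(B\).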
It can be proven~\cite{Candes2006,Donoho2006,Candes2008} that the
number of entries of \(B\) which must be measured in order to fully
recover \(A\) by solving the BP problem in eq.~(\ref{eq:bpdn}) scales
as
\begin{align}
\label{eq:csscaling}
M^* \propto \mu^2 S \log N^2\ .
\end{align}
This scaling relation encapsulates the important aspects of compressed
sensing applied to sparse matrices: if a suitably incoherent basis is
chosen, the number of entries that must be measured scales linearly
with the sparsity \(S\) and only logarithmically with the size of
the matrix.