Jacob Sanders deleted FigreffigSamplingVsS.tex
almost 10 years ago
Commit id: 497103890f292c656fd1913c91afc90d376bb4fd
diff --git a/FigreffigSamplingVsS.tex b/FigreffigSamplingVsS.tex
deleted file mode 100644
index 07fcc27..0000000
--- a/FigreffigSamplingVsS.tex
+++ /dev/null
...
Fig.~\ref{fig:SamplingVsSparsity} depicts this relationship more explicitly, showing how the minimum number of sampled entries (\(M^*\)) required for exact recovery (relative error \( < 10^{-7}\)) scales with the sparsity of the matrix to be recovered. For comparison, the graph also shows two limiting cases: the worst-case scenario of ``no prior knowledge,'' where \emph{all} entries of the matrix must be measured, and the best-case scenario of a ``perfect oracle'' that reveals where the non-zero entries of the matrix are located, so that only those entries need to be measured. Not surprisingly, compressed sensing falls between these two limits: knowing the matrix is sparse provides a clear advantage, but there is a price to pay for not knowing \emph{a priori} where the non-zeros are located. As the graph shows, when the matrix to be recovered is very sparse (less than 10\% non-zero entries), the required sampling scales linearly with the sparsity, in accordance with the relationship \(M^* \propto \mu^2 S \log N^2\). This is the case of interest in the physical applications covered in the next section: we can make a number of measurements that scales linearly with the number of non-zero elements to be recovered, rather than a number of measurements that scales with the full size of the matrix.
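The scaling law above can be made concrete with a short numerical sketch. The snippet below is illustrative only: the function name `min_samples_cs`, the proportionality constant `C`, and the coherence value `mu` are assumptions introduced here for the example (the actual constants depend on the recovery algorithm and matrix ensemble), but the functional form \(M^* = C\,\mu^2 S \log N^2\) matches the relationship stated in the text, and the comparison columns correspond to the two limiting cases in the figure (oracle: \(S\) measurements; no prior knowledge: \(N^2\) measurements).

```python
import math

def min_samples_cs(S, N, mu=1.0, C=1.0):
    """Illustrative compressed-sensing sampling requirement,
    M* = C * mu^2 * S * log(N^2), for an N-by-N matrix with S
    non-zero entries. C and mu are placeholder values."""
    return C * mu**2 * S * math.log(N**2)

N = 100  # matrix dimension (N x N)
print(f"{'sparsity':>8} {'oracle (S)':>10} {'CS (M*)':>10} {'full (N^2)':>10}")
for frac in (0.01, 0.05, 0.10):
    S = int(frac * N * N)          # number of non-zero entries
    M_star = min_samples_cs(S, N)  # compressed-sensing estimate
    print(f"{frac:>8.2f} {S:>10d} {int(M_star):>10d} {N*N:>10d}")
```

Because \(M^*\) is linear in \(S\) at fixed \(N\), doubling the number of non-zeros doubles the required sampling, while the "no prior knowledge" cost \(N^2\) stays fixed, which is the gap the figure illustrates.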
diff --git a/layout.md b/layout.md
index 271856d..952e388 100644
--- a/layout.md
+++ b/layout.md
...
In_this_section_we.tex
figures/RelErrorVsSampling/ErrorVsSampling.png
figures/SamplingVsSparsity/SamplingVsSparsity.png
FigreffigSamplingVsS.tex
sectionCompressibili.tex
sectionApplication_m.tex
figures/Hessian/Hessian.png