\section{Introduction}
Matrices are one of the most fundamental objects in the mathematical description of nature, and as such they are ubiquitous in every area of science. For example, they arise naturally in linear response theory as the first term in a multidimensional Taylor series, encoding the response of each component of the system to each component of the stimulus. Hence, in many scientific applications, matrices contain the essential information about the system being studied.
Despite their ubiquity, the calculation of matrices often requires considerable computational effort. Returning to the linear response theory example, it might be necessary to individually calculate the response of every component of the system to every component of the stimulus and, depending on the area of application, each individual computation may itself be quite expensive. The overall expense stems from the fact that evaluating a matrix of dimension \(N\times M\) requires, in principle, the individual evaluation of \(N\times M\) elements. But this does not always have to be the case.
For example, if we know \emph{a priori} the eigenvectors of an \(N\times N\) diagonalizable matrix, then we can obtain the full matrix by calculating only the \(N\) diagonal elements. Similarly, a sparse matrix, which contains many zero elements, can be evaluated by calculating only the non-zero elements, if we know in advance where those elements are located.
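The eigenvector case can be made concrete with a short numerical sketch. Assuming NumPy is available (the matrix and variable names here are illustrative, not part of the article's method), a symmetric matrix with known orthonormal eigenvectors is fully determined by the \(N\) diagonal elements evaluated in the eigenbasis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# Symmetric test matrix whose eigenvectors we pretend to know a priori
M = rng.standard_normal((n, n))
M = M + M.T
d, V = np.linalg.eigh(M)  # columns of V are orthonormal eigenvectors

# With V known, only the n diagonal elements in the eigenbasis are needed;
# in a real application each of these would be one expensive evaluation.
diag = np.array([V[:, i] @ M @ V[:, i] for i in range(n)])
M_rec = V @ np.diag(diag) @ V.T
assert np.allclose(M_rec, M)
```

Here \(n\) evaluations replace the \(n^2\) (or \(n(n+1)/2\) by symmetry) that a naive element-by-element construction would require.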
In this article, we present a general approach that can produce a
considerable reduction in the cost of constructing a matrix in many
scientific applications by substantially reducing the number of
elements that need to be calculated. The key numerical procedure of our approach is a method to cheaply recover sparse matrices with a cost proportional to the number of non-zero elements. The matrix reconstruction procedure is based on the increasingly popular compressed sensing
approach~\cite{Candes2006,Donoho2006,Candes2008}, a state-of-the-art signal processing technique developed for minimizing the amount of data that needs to be measured to reconstruct a sparse signal.
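The core compressed-sensing idea can be sketched with the standard basis-pursuit (\(\ell_1\)-minimization) formulation, cast as a linear program. This is a generic illustration assuming SciPy, not the specific reconstruction procedure developed in this article; the dimensions are arbitrary:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 30, 3  # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))  # random measurement matrix
y = A @ x_true                   # m << n measurements

# Basis pursuit: min ||x||_1 subject to A x = y.
# Split x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_rec = res.x[:n] - res.x[n:]

# In the compressed-sensing regime the sparse signal is recovered exactly
# (up to solver tolerance) from far fewer measurements than unknowns.
assert np.allclose(x_rec, x_true, atol=1e-4)
```

The recovery is exact with overwhelming probability for random Gaussian measurements when the number of measurements sufficiently exceeds the sparsity, which is the property the matrix-reconstruction procedure described below exploits.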
The use of compressed sensing and sparse sampling methods in science has so far been dominated by experimental applications~\cite{Kazimierczuk_2011,Holland_2011,Gross_2010,Zhu_2012,Sanders2012,Doneva_2010}. However, compressed sensing is also becoming a tool for theoretical applications~\cite{Schaeffer_2013,Almeida_2012,Nelson_2013}. In particular, in previous work we have shown that compressed sensing can also be used to reduce the amount of computation in numerical simulations~\cite{Andrade2012b}.
In this article, we extend compressed sensing to the problem of reconstructing sparse scientific matrices. This method has two key properties. First, the cost of the procedure is proportional to the number of non-zero elements in the matrix, without the need to know \emph{a priori} the location of the non-zero elements. Second, the reconstruction is exact. Furthermore, such a method is not only useful for matrices already known to be sparse: it makes it practical to devise strategies for finding bases in which the matrix is known or suspected to be sparse, based on the characteristics and previous knowledge of each problem.
To demonstrate the power of our approach, we apply these ideas to the determination of the vibrational modes of molecules from electronic structure methods. These methods require the calculation of the matrix of the second derivatives of the energy with respect to the nuclear displacements, known as the dynamical, force-constant, or Hessian matrix. This matrix is routinely obtained in numerical simulations by chemists and physicists, but it is relatively expensive to compute when accurate quantum mechanical methods are used. Our application shows how we can exploit the sparsity of the matrix to make important improvements in the efficiency of this calculation, and at the same time makes it practical to bootstrap these calculations using lower-accuracy models, something that was previously not worthwhile.
We begin by introducing the mathematical foundations of compressed sensing and applying it to the problem of sparse matrix reconstruction. This is the numerical tool that forms the foundation of our approach. Next, we discuss how this tool makes it practical to take a new approach to the calculation of matrices, based on the idea of finding strategies to make the matrix sparse. Finally, we illustrate these ideas by applying them to the problem of obtaining molecular vibrations from quantum mechanical simulations.