\section{Introduction}
Matrices are one of the most fundamental objects in the mathematical description of nature, and as such they are ubiquitous in every area of science. For example, they arise naturally in all types of linear response theories as the first term in a multidimensional Taylor series, encoding the response of each component of the system to each component of the stimulus. Hence, in many scientific applications, matrices contain the essential information about the system being studied.
Despite their ubiquity, the calculation of matrices often requires considerable computational effort. Returning to the linear response theory example, it might be necessary to individually calculate the response of every component of the system to every component of the stimulus and, depending on the area of application, each individual computation may itself be quite expensive. The overall expense stems from the fact that evaluating a matrix of dimension \(N\times M\) requires, in principle, the individual evaluation of \(N\times M\) elements. But this does not always have to be the case.
For example, if we know \emph{a priori} the eigenvectors of an \(N\times N\) diagonalizable matrix, then we can obtain the full matrix by calculating only the \(N\) diagonal elements. Similarly, a sparse matrix, which contains many zero elements, can be evaluated by calculating only the non-zero elements, if we know in advance where those elements are located.
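To make the diagonalizable case concrete, the following NumPy sketch (an illustrative toy, with a randomly generated matrix standing in for an expensive one) recovers a full \(4\times 4\) matrix from its known eigenvectors and only the four diagonal elements in that basis:

```python
import numpy as np

# Toy example: a symmetric matrix whose eigenvectors are known a priori.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # orthonormal eigenvectors
A = Q @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q.T       # the "expensive" matrix

# Evaluate only the N diagonal elements in the eigenbasis ...
d = np.array([Q[:, i] @ A @ Q[:, i] for i in range(4)])

# ... and reconstruct the full N x N matrix from them.
A_rec = Q @ np.diag(d) @ Q.T
assert np.allclose(A, A_rec)
```

Each diagonal element \(d_i = v_i^{T} A v_i\) costs one application of the matrix, so only \(N\) evaluations are needed instead of \(N^2\).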
In this article, we present a general approach that can produce a
considerable reduction in the cost of constructing a matrix in many
scientific applications by substantially reducing the number of
elements that need to be calculated.
The key numerical procedure of our approach is a method to cheaply
recover sparse matrices with a cost proportional to the number of
non-zero elements. The matrix reconstruction procedure is based on
the increasingly popular compressed sensing
approach~\cite{Candes2006,Donoho2006,Candes2008}. Compressed sensing
is a state-of-the-art signal processing technique developed for
minimizing the amount of data that needs to be measured to reconstruct
a sparse signal.
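As a toy illustration of the compressed sensing principle (a hypothetical sketch using SciPy's linear-programming routine, not the reconstruction procedure developed in this article), a sparse signal can be recovered from far fewer random linear measurements than its length by \(\ell_1\) minimization, known as basis pursuit:

```python
import numpy as np
from scipy.optimize import linprog

# Recover a 3-sparse signal x of length n = 50 from only m = 25
# random linear measurements y = Phi @ x by l1 minimization.
rng = np.random.default_rng(1)
n, m = 50, 25
x_true = np.zeros(n)
x_true[[3, 17, 40]] = [1.5, -2.0, 0.7]           # sparse signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x_true

# Basis pursuit as a linear program: write x = u - v with u, v >= 0,
# then minimize sum(u + v) subject to Phi @ (u - v) = y.
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_rec = res.x[:n] - res.x[n:]
```

With high probability over the random measurements, `x_rec` matches `x_true` to solver precision, even though the linear system is heavily underdetermined.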
The use of compressed sensing and sparse sampling methods for scientific development has been dominated by experimental applications~\cite{Kazimierczuk_2011,Holland_2011,Gross_2010,Zhu_2012,Sanders2012,Doneva_2010}. However, compressed sensing is also becoming a tool for theoretical applications~\cite{Schaeffer_2013,Almeida_2012,Nelson_2013}. In particular, in previous work we have shown that compressed sensing can also be used to reduce the amount of computation in numerical simulations~\cite{Andrade2012b}.
In this article, we extend compressed sensing to the problem of reconstructing sparse matrices. This method has two key properties. First, the cost of the procedure is proportional to the number of non-zero elements in the matrix, without the need to know \emph{a priori} the location of the non-zero elements. Second, the reconstruction is exact. The availability of such a method is not only useful for sparse matrices; it also makes it practical to devise methods that find bases in which the matrix is known, or suspected, to be sparse, based on the characteristics and prior knowledge of each problem.
To demonstrate the power of our approach, we apply these ideas to the problem of determining the vibrational modes of molecules from electronic structure methods. The method involves calculating the matrix of the second derivatives of the energy with respect to the nuclear displacements, known as the Hessian, one of the most commonly computed quantum mechanical matrices in day-to-day computational chemistry investigations. This matrix is relatively expensive to compute via standard methods, and our approach achieves a more favorable scaling with the number of atoms and, consequently, a substantial decrease in the computational cost, by incorporating lower-accuracy results as a way to bootstrap the calculation.
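To illustrate why the full matrix of second derivatives is expensive, the following sketch (using a hypothetical model energy surface, not an electronic structure calculation) computes each element by central finite differences; every element requires several additional energy evaluations, so the total cost grows quadratically with the number of coordinates:

```python
import numpy as np

def hessian_fd(f, x0, h=1e-4):
    """Central finite-difference Hessian of a scalar energy function f.
    Each of the n*n elements needs four energy evaluations, which is
    what makes the full matrix costly when f is an ab initio energy."""
    n = len(x0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n)
            e_i[i] = h
            e_j = np.zeros(n)
            e_j[j] = h
            H[i, j] = (f(x0 + e_i + e_j) - f(x0 + e_i - e_j)
                       - f(x0 - e_i + e_j) + f(x0 - e_i - e_j)) / (4 * h * h)
    return H

# Hypothetical quadratic model surface (stand-in for a molecular energy).
f = lambda x: x[0]**2 + 2 * x[1]**2 + 0.5 * x[0] * x[1]
H = hessian_fd(f, np.zeros(2))
```

For this quadratic surface the exact Hessian is \(\begin{pmatrix} 2 & 0.5 \\ 0.5 & 4 \end{pmatrix}\), which the finite-difference sketch reproduces to the precision of the stencil.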