% Compressibility.tex --- edited by Xavier Andrade
% Commit id: c869439fdca152814db832497444a7960dc3a84d
\section{Finding a sparse description of the problem}

The first step in our approach is to find a representation of the problem in which the matrix to be calculated is expected to be sparse. In general, finding this \textit{sparsifying basis} is specific to each problem and can range from trivial to quite complex; it depends on the knowledge we have about the problem, or on what we expect of its solution. Using additional information is an essential concept in compressed sensing, intimately related to the concepts of sparsity and compressibility, but it is also routinely used in numerical simulations. For example, in quantum chemistry it is customary to represent the orbitals of a molecule in a basis formed by the orbitals of the atoms in the molecule~\cite{Szabo}, which allows for an efficient and compact representation with a controlled discretization error. This comes from the notion that the electronic structure of the molecule is roughly described by ``patching together'' the electronic structure of the constituent atoms.

An ideal basis in which to reconstruct a matrix is the basis of its eigenvectors, or \textit{eigenbasis}, as it only requires the evaluation of the diagonal elements to obtain the whole matrix. Of course, determining the eigenvectors implies knowledge of the matrix itself, so the exact eigenbasis is not useful for practical purposes. However, in many cases it is possible to obtain a set of reasonable approximations to the eigenvectors, an idea that is the basis of perturbation theory in quantum mechanics.
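This near-diagonality can be illustrated with a small numerical sketch (a toy NumPy example of ours; the matrices, sizes, and tolerance are illustrative and not taken from the text): expressing a matrix in the eigenbasis of a cheap approximation to it leaves the diagonal, plus off-diagonal elements only as large as the perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20

# Cheap approximate matrix A0 (symmetric, dense) and the "true" matrix
# A = A0 plus a small symmetric perturbation (illustrative values).
A0 = rng.standard_normal((n, n))
A0 = (A0 + A0.T) / 2 + 2.0 * np.diag(np.arange(1.0, n + 1.0))
delta = 0.01 * rng.standard_normal((n, n))
A = A0 + (delta + delta.T) / 2

# Approximate eigenbasis: the eigenvectors of the cheap matrix A0.
_, U = np.linalg.eigh(A0)

def density(M, tol=0.1):
    """Fraction of entries whose magnitude exceeds tol."""
    return float(np.mean(np.abs(M) > tol))

# In the original basis A is dense; in the approximate eigenbasis it is
# nearly diagonal, and hence far sparser.
A_eig = U.T @ A @ U
print(f"density original: {density(A):.2f}, eigenbasis: {density(A_eig):.2f}")
```

The change of basis is lossless: `U @ A_eig @ U.T` returns the original matrix to round-off, so the sparser representation gives up no accuracy.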
Such an approximate eigenbasis probably constitutes a good sparsifying basis for many problems, as we expect the matrix in it to be diagonally dominant, with a large fraction of the off-diagonal elements zero or at least quite small. Since the determination of an approximate eigenbasis depends on the specific problem, it is difficult to give a general rule for finding one. For example, in simulations that proceed by steps or iterations, results from previous iterations can be used; more generally, the predictions of low-cost, low-accuracy methods could be used to bootstrap calculations with more expensive and more accurate methods.

What makes the search for a sparsifying basis attractive, even at some computational cost and code complexity, are the properties of the recovery method, which we discuss in detail in the next section. In the first place, the cost of recovering the matrix is roughly proportional to its sparsity. In the second place, the reconstruction is always exact up to the requested precision: even if the basis is not a good approximation, we obtain the correct result. The penalty for a bad guess is additional computational cost, which in the worst case makes the calculation as expensive as if compressed sensing were not used at all. This implies that the method will almost certainly offer some performance gain.
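The exactness property has a simple analogue at the level of the change of basis itself (a minimal NumPy sketch of ours with a deliberately bad, random basis; the actual recovery procedure of the text is not implemented here): even a poor basis returns the original matrix exactly once its representation is known — what degrades is the sparsity, and with it the cost.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                # symmetric test matrix

# A deliberately poor sparsifying guess: a random orthogonal basis.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

A_rep = Q.T @ A @ Q              # representation in the guessed basis: dense
A_back = Q @ A_rep @ Q.T         # transforming back is exact up to round-off
print(np.allclose(A_back, A))    # prints: True
```

The guessed basis only affects how many entries of `A_rep` are significant, never whether `A` is recovered correctly.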
This is where the compressed sensing approach we presented in the last section can help. In many scientific applications, notably computational chemistry, it turns out to be possible to cheaply \emph{approximate} the eigenvectors of a matrix, and thus obtain a basis in which the matrix is sparse (and nearly diagonal). This is the approach we take to estimating the vibrational modes and frequencies of molecules, where it eliminates about 75\% of the expensive quantum mechanical computations.
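Read together with the quasi-linear scaling of the recovery cost in the matrix sparsity, the quoted figure admits a rough interpretation (ours, not a derivation from the text): if only $S$ of the $N^2$ matrix elements are significant in the sparsified representation, the fraction of expensive evaluations saved is roughly

```latex
1 - \frac{S}{N^2} \approx 0.75
\quad\Longrightarrow\quad
\frac{S}{N^2} \approx 0.25,
```

i.e.\ a basis in which only about a quarter of the entries remain significant.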