Xavier Andrade edited Conclusions.tex  over 9 years ago

We have presented a new approach for calculating matrices. The method is suited to applications where the cost of computing each matrix element is high compared to the cost of linear algebra operations. Our approach leverages compressed sensing to avoid computing every matrix element, thereby achieving substantial computational savings. When applied to the molecular vibrations of organic molecules, our method yields accurate frequencies and normal modes with about 30\% of the expensive quantum mechanical computations usually required, a significant 3x speed-up. We have also shown that, depending on the sparsity of the Hessian, our method can improve the overall scaling of the computation. These numbers could be further improved with more sophisticated compressed sensing techniques; one interesting development is recovery algorithms based on belief propagation~\cite{Krzakala2012a,Krzakala2012b}, which offer a recovery cost directly proportional to the sparsity of the signal and could be easily integrated into our approach.

It would also be interesting to apply the method to other matrices that are common in computational chemistry, for example the Fock matrix or the response matrix of linear-response time-dependent DFT. Nevertheless, our method is not restricted to quantum chemistry; it is applicable to many problems throughout the physical sciences and beyond. The main requirement is an \emph{a priori} guess of a basis in which the matrix to be computed is sparse. The optimal way to satisfy this requirement is problem-dependent, but as research into sparsifying transformations continues to develop, we believe our method will enable considerable computational savings in a diverse array of scientific fields.
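To make the core idea concrete — recovering a matrix that is sparse in a known basis from only a subset of its entries via $\ell_1$ minimization — the following self-contained Python sketch uses a random orthonormal sparsifying basis and SciPy's linear-programming solver as a stand-in for a production basis-pursuit code. The matrix sizes, the sampling scheme, and the choice of basis are illustrative assumptions, not the implementation used in this work:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Toy setup: an 8x8 matrix H = B S B^T that is sparse (3 non-zero
# entries) in a random orthonormal basis B.  All sizes here are
# illustrative choices.
n, k, m = 8, 3, 48            # matrix size, sparsity, number of samples
B, _ = np.linalg.qr(rng.normal(size=(n, n)))
S = np.zeros((n, n))
S.flat[rng.choice(n * n, size=k, replace=False)] = rng.normal(size=k)
H = B @ S @ B.T

# "Measure" only m of the n^2 = 64 entries of H; each sampled entry
# stands in for one expensive quantum-mechanical evaluation.
sampled = rng.choice(n * n, size=m, replace=False)
y = H.flat[sampled]

# Each sampled entry H_ij = sum_pq B_ip B_jq S_pq is linear in vec(S).
A = np.array([np.outer(B[f // n], B[f % n]).ravel() for f in sampled])

# Basis pursuit: min ||s||_1 subject to A s = y, written as a linear
# program over the non-negative split s = u - v.
N = n * n
res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None))
s = res.x[:N] - res.x[N:]
H_rec = B @ s.reshape(n, n) @ B.T

# Relative reconstruction error; near machine precision when the
# l1 recovery succeeds.
print(np.linalg.norm(H_rec - H) / np.linalg.norm(H))
```

Note that the entries of $H$ are measured in a basis different from the one in which the matrix is sparse; this incoherence between the measurement and sparsity bases is what allows exact recovery from far fewer samples than matrix elements.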
In fact, a recent area of interest in compressed sensing is the development of dictionary learning methods that do not require prior knowledge of a sparsifying basis, but instead generate one on the fly from the problem itself~\cite{Aharon_2006,Rubinstein_2010}. We believe that combining our matrix recovery protocol with state-of-the-art dictionary learning methods may eventually result in a new computational paradigm for the calculation of scientific matrices.

Beyond the specific problem of computing matrices that we have addressed in this work, we have shown that compressed sensing can be integrated into the core of computational simulation to reduce the numerical cost by optimizing the information obtained from each computation. We have also introduced an effective method of bootstrapping calculations with information from lower-accuracy calculations, something that is not routinely done in quantum chemistry. In this new picture, the role of expensive high-accuracy models is to correct low-accuracy results, at a cost proportional to the magnitude of the required correction, rather than to recalculate everything from scratch.
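The bootstrapping idea can be summarized schematically (the notation here is a sketch of the general principle rather than a prescription):
\begin{equation*}
H^{\text{high}} = H^{\text{low}} + \Delta H,
\end{equation*}
where $H^{\text{low}}$ is obtained from an inexpensive low-accuracy model and the correction $\Delta H$ is assumed to be sparse in a suitable basis, so that only its few significant components need to be computed at the high level of theory.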