% !TEX root = article.tex
\subsection{Current Approaches}
\label{ss:prev-eval-sol}

Lameed and Hendren~\cite{lameed2013feval} proposed two dynamic techniques for optimizing \feval\ instructions in McVM: {\em JIT-based} and {\em OSR-based} specialization. Both attempt to optimize a function $f$ that contains instructions of the form \feval$(g,...)$, leveraging information about $g$ and the types of its arguments observed at run time. The optimization produces a specialized version $f'$ in which \feval$(g,x,y,z,...)$ instructions are replaced with direct calls of the form $g(x,y,z,...)$.
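For concreteness, the following MATLAB-style sketch illustrates the transformation; the loop body, the binding \texttt{g = @sin}, and the name \texttt{f\_prime} are hypothetical and purely illustrative, not taken from~\cite{lameed2013feval}.

\begin{verbatim}
% Original function: the call target is dispatched
% generically at the feval site.
function s = f(g, v)
  s = 0;
  for i = 1:numel(v)
    s = s + feval(g, v(i));
  end
end

% Specialized version f', produced for the observed binding g = @sin:
% the feval site becomes a direct call that the JIT can further optimize.
function s = f_prime(g, v)
  s = 0;
  for i = 1:numel(v)
    s = s + sin(v(i));
  end
end
\end{verbatim}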

Our approach combines the flexibility of OSR-based specialization with the efficiency of the JIT-based method, answering an open question raised by Lameed and Hendren~\cite{lameed2013feval}. Like OSR-based specialization, it places no restrictions on the functions that can be optimized. On the other hand, it works at the IIR (rather than IR) level, as JIT-based specialization does, which makes it possible to perform type inference on the specialized code. Working at the IIR level eliminates the two main sources of inefficiency of OSR-based specialization: 1) generic instructions can be replaced with specialized instructions, and 2) the types of $g$'s arguments need not be cached or guarded, as they are statically inferred. These observations are confirmed in practice by experiments on real MATLAB programs, as we will show in \mysection\ref{ss:experim-results}.
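To make the second point concrete, the following sketch shows the kind of run-time guard that a specialization working at the IR level would have to emit before a direct call; the predicate is purely illustrative and does not reproduce McVM's actual generated code. At the IIR level, the inferred types of $g$'s arguments make such checks unnecessary.

\begin{verbatim}
% Illustrative sketch of a guarded direct call at an feval site,
% as an IR-level specialization would require.
function r = guarded_call(g, cached_g, x, y)
  if isequal(g, cached_g) && isa(x, 'double') && isa(y, 'double')
    r = cached_g(x, y);   % fast path: direct call to the cached target
  else
    r = feval(g, x, y);   % slow path: fall back to generic dispatch
  end
end
\end{verbatim}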