% Camil Demetrescu edited eval-new-approach.tex over 8 years ago
% Commit id: f5b46ff1e25bed7f884ca1c01b4cbcb276ca06bb


\subsection{A New Approach}
\label{ss:eval-opt-mcvm}

In this section, we present a new approach that combines the flexibility of OSR-based specialization with the efficiency of the JIT-based method, answering an open question raised by Lameed and Hendren~\cite{lameed2013feval}. The main idea for optimizing a function $f$ containing an \feval\ instruction is to dynamically generate a variant $f'$ where the \feval$(g,...)$ is replaced by a direct call of the form $g(...)$. The key to efficiency is the ability to perform type inference on the IIR level, [...] Our approach to \feval\ optimization leverages OSR to generate specialized code where [...]
%capture run-time information
After porting it from the LLVM legacy JIT to MCJIT, we have extended it with the following components to enable the optimization of \feval\ instructions:
\begin{enumerate}
\item An analysis pass to identify optimization opportunities for \feval\ instructions in the IIR of a function.
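The \feval-to-direct-call idea can be illustrated with a minimal C++ analogue (this is not McVM code; \texttt{feval\_dispatch}, \texttt{halve}, and all signatures are purely illustrative stand-ins for the dynamic dispatch performed by \feval):

```cpp
#include <cassert>

// Stand-in for the runtime's feval dispatch: the callee is resolved
// at run time through a pointer, which blocks inlining and
// type-driven specialization across the call.
static double feval_dispatch(double (*callee)(double), double x) {
    return callee(x);   // indirect call
}

// Generic f: every loop iteration pays the dynamic-dispatch cost.
double f(double (*g)(double), double x0, int n) {
    double acc = x0;
    for (int i = 0; i < n; ++i)
        acc = feval_dispatch(g, acc);
    return acc;
}

// Example target of the feval.
static double halve(double x) { return x / 2.0; }

// Specialized variant f' generated once the target is known: the
// feval is replaced by a direct call, which a compiler can inline
// and further optimize.
double f_halve(double x0, int n) {
    double acc = x0;
    for (int i = 0; i < n; ++i)
        acc = halve(acc);   // direct call, no dispatch
    return acc;
}
```

Both variants compute the same result; the specialized one simply exposes the callee to the compiler.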

\newcommand{\gOptIIR}{$g^{IIR}_{opt}$}
\newcommand{\gOptIR}{$g^{IR}_{opt}$}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\paragraph{Generating Optimized Code.}
%\label{sse:optimizing_feval}
The core of our optimization pipeline is the optimizer module, which is responsible for generating optimized code for the running function \gBase\ using contextual information passed by an open-OSR stub. As a first step, the optimizer inspects {\tt val} to resolve the target $f$ of the \feval\ and checks whether a previously compiled version of \gBase\ optimized for $f$ is available from the code cache.
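The cache lookup performed by the optimizer can be sketched as follows (a hypothetical C++ sketch, not McVM's actual API; \texttt{getOrCompile}, \texttt{CompiledFn}, and the string-keyed cache are illustrative assumptions):

```cpp
#include <map>
#include <string>
#include <utility>

// Specialized versions of a base function are cached per resolved
// feval target, so an OSR transition to an already-seen target can
// reuse compiled code instead of recompiling.
using CompiledFn = void (*)();
using CompileCallback = CompiledFn (*)(const std::string&, const std::string&);

static std::map<std::pair<std::string, std::string>, CompiledFn> codeCache;
static int compilations = 0;   // counts actual compilations (demo only)

CompiledFn getOrCompile(const std::string& base, const std::string& target,
                        CompileCallback compile) {
    auto key = std::make_pair(base, target);
    auto it = codeCache.find(key);
    if (it != codeCache.end())
        return it->second;                  // cache hit: reuse f'
    CompiledFn fn = compile(base, target);  // cache miss: generate f'
    codeCache.emplace(key, fn);
    return fn;
}

// Stand-in for the real JIT compilation step.
static void specializedStub() {}
static CompiledFn fakeCompile(const std::string&, const std::string&) {
    ++compilations;
    return specializedStub;
}
```

Repeated lookups for the same (base, target) pair trigger a single compilation; a new target compiles a new variant.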

\fi

Once a state mapping object has been constructed, the optimizer calls our OSR library to generate the continuation function for the OSR transition and eventually compiles it. A pointer to the compiled function is stored in the code cache and returned to the stub, which invokes it through an indirect call, passing the live state saved at the OSR point.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\paragraph{Discussion.}
The approach presented in this section combines the flexibility of OSR-based specialization with the efficiency of the JIT-based method. As in OSR-based specialization, we place no restrictions on the functions that can be optimized. On the other hand, our solution works at the IIR (rather than IR) level, as in JIT-based specialization, which makes it possible to perform type inference on the specialized code. Working at the IIR level eliminates the two main sources of inefficiency of OSR-based specialization: 1) we can replace generic instructions with specialized instructions, and 2) the types of $g$'s arguments do not need to be cached or guarded, as they are statically inferred. These observations are confirmed in practice by experiments on real MATLAB programs, as we show in \mysection\ref{ss:experim-results}.
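The stub-to-continuation hand-off described above can be sketched in C++ (a hypothetical shape only; \texttt{LiveState}, \texttt{optimize}, and \texttt{osr\_stub} are illustrative names, not the actual library interface):

```cpp
#include <cassert>

// Values live at the OSR point (illustrative: an accumulator and
// the loop bounds of the interrupted loop).
struct LiveState { double acc; int i; int n; };
using Continuation = double (*)(LiveState*);

// Stand-in continuation: resumes the loop from the saved iteration,
// with the feval already replaced by a direct call (here: halving).
static double continuation_halve(LiveState* s) {
    for (; s->i < s->n; ++s->i)
        s->acc /= 2.0;
    return s->acc;
}

// Stand-in optimizer: in a real system this would build and
// JIT-compile the continuation; here it returns a precompiled one.
static Continuation optimize(void* /*fevalTarget*/) {
    return continuation_halve;
}

// The open-OSR stub: when the OSR point fires, it obtains the
// continuation from the optimizer and invokes it through an
// indirect call, passing the saved live state.
double osr_stub(void* fevalTarget, LiveState* state) {
    Continuation cont = optimize(fevalTarget);
    return cont(state);
}
```

Execution thus resumes in the specialized code exactly where the generic code left off, with no state lost across the transition.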