% case-study.tex -- edited by Daniele Cono D'Elia
% (commit a908820a58650aaa30ad5b88a374cdcc3e267b7b)
\newcommand{\gbase}{$g$}
\newcommand{\gOpt}{$g_{opt}$}
\newcommand{\gIIR}{$g^{IIR}$}
\newcommand{\gIR}{$g^{IR}$}
\newcommand{\gOptIIR}{$g^{IIR}_{opt}$}
\newcommand{\gOptIR}{$g^{IR}_{opt}$}

\subsection{Generating Optimized Code}

The core of our optimization pipeline is the optimizer module, which is responsible for generating optimized code for the current function \gbase\ using the run-time value of the first argument of \feval\ and the contextual information passed by the open-OSR stub. As a first step, the optimizer inspects {\tt val} to resolve the target of the call (which we call $f$) and checks whether a previously compiled optimized function is available in the code cache. If not, a new function \gOpt\ is generated by cloning the IIR representation \gIIR\ of \gbase\ into \gOptIIR\ and replacing all the \feval\ calls in the same group as the instrumented one with direct calls to $f$.

As a next step, the optimizer asks the IIR compiler to analyze \gOptIIR\ and generate optimized LLVM IR \gOptIR, also storing the variable map between IIR and IR objects when compiling the direct call corresponding to the \feval\ instruction that triggered the OSR. This map is essential for the next step, which is constructing a state mapping between \gIR\ and \gOptIR: it is compared against the corresponding map stored during the lowering of \gbase\ to determine, for each value in \gOptIR\ live at the continuation block, whether:
\begin{itemize}
\item an {\tt llvm::Value*} from \gIR\ passed as an argument at the OSR point can be used directly;
\item or compensation code is required to reconstruct its value before jumping to the block.
\end{itemize}

\noindent In fact, since the type inference engine yields more accurate results for \gOptIIR\ than for \gIIR, the IIR compiler can in turn generate efficient specialized IR code for representing and manipulating IIR variables; compensation code is thus typically required to unbox or downcast some of the live values passed at the OSR point. Compensation code might also be required to materialize an IR object for an IIR variable that was previously accessed through get/set methods from the environment. Once the state mapping object has been constructed, the optimizer calls our OSR library to generate the continuation function for the OSR transition and eventually compiles it. A pointer to the compiled function is stored in the code cache and returned to the stub, which invokes it through an indirect call, passing the live state saved at the OSR point.