\subsection{Generating optimized code}
The core of the optimization pipeline is the callback optimizer component, which is responsible for generating optimized code for the current function $f$ using the profiling information (i.e., the object containing the first argument of \feval) and the contextual information passed by the open-OSR stub.

As a first step, the optimizer processes the profiling object to resolve the target $g$ of the call and checks whether a previously compiled optimized function is available in the code cache. If not, a new function $f_{opt}$ is generated by cloning the IIR representation $f^{IIR}$ of $f$ into $f^{IIR}_{opt}$ and replacing all the \feval\ calls in the same group as the instrumented one with direct calls to $g$.

As a next step, the optimizer asks the IIR compiler to analyze $f^{IIR}_{opt}$ and generate optimized LLVM IR $f^{IR}_{opt}$, also taking a copy of the variable map between IIR and IR objects when compiling the direct call corresponding to the \feval\ instruction that triggered the OSR. This variable map is essential for constructing a state mapping between $f^{IR}$ and $f^{IR}_{opt}$: it is compared against the corresponding map stored during the lowering of $f$ to determine, for each value in $f^{IR}_{opt}$ live at the continuation block, whether:
\begin{itemize}
\item an {\tt llvm::Value*} from $f^{IR}$ passed as argument at the OSR point can be used directly, or
\item compensation code is required to reconstruct its value.
\end{itemize}
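The following C++ fragment is a minimal sketch of this state-mapping step. All identifiers it introduces ({\tt IIRVariable}, {\tt VariableMap}, {\tt StateMapping}, {\tt buildStateMapping}) are hypothetical placeholders chosen for illustration, not the actual McVM or OSR library API; the sketch only conveys how the two variable maps are compared for the values live at the continuation block.

\begin{verbatim}
// Hedged sketch: comparing the variable maps of f and f_opt to decide,
// for each value live at the continuation block of f_opt^IR, whether an
// llvm::Value* from f^IR can be forwarded directly or whether
// compensation code is needed. Names below are illustrative placeholders.
#include <map>
#include <utility>
#include <vector>

namespace llvm { class Value; }   // forward declaration from LLVM headers
struct IIRVariable;               // an IIR-level variable (placeholder)

// Map from IIR variables to the llvm::Value* produced when lowering them.
using VariableMap = std::map<IIRVariable*, llvm::Value*>;

struct StateMapping {
    // (value expected by f_opt^IR, value from f^IR passed at the OSR point)
    std::vector<std::pair<llvm::Value*, llvm::Value*>> directPairs;
    // values of f_opt^IR that must be reconstructed by compensation code
    std::vector<llvm::Value*> needCompensation;
};

StateMapping buildStateMapping(const VariableMap& mapF,
                               const VariableMap& mapFopt,
                               const std::vector<IIRVariable*>& liveAtContinuation) {
    StateMapping sm;
    for (IIRVariable* var : liveAtContinuation) {
        llvm::Value* destVal = mapFopt.at(var);   // value expected by f_opt^IR
        auto it = mapF.find(var);
        if (it != mapF.end()) {
            // The corresponding llvm::Value* from f^IR is available as an
            // argument at the OSR point and can be used directly.
            sm.directPairs.emplace_back(destVal, it->second);
        } else {
            // No direct counterpart in f^IR: compensation code must
            // reconstruct this value before resuming in f_opt^IR.
            sm.needCompensation.push_back(destVal);
        }
    }
    return sm;
}
\end{verbatim}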