Camil Demetrescu  over 8 years ago

Commit id: 24883ac12e4b00b244a40a782884c6ec7d737f1a


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Description}

The main component of the artifact is an interactive VM called \tinyvm\ built on top of the LLVM MCJIT runtime environment and the \osrkit\ library. The VM provides an interactive environment for IR manipulation, JIT compilation, and execution of functions either generated at run-time or loaded from disk: for instance, it allows the user to insert OSR points in loaded functions, run optimization passes on them, display their CFGs, and invoke a function repeatedly for a specified number of times. \tinyvm\ supports dynamic library loading and linking, and includes a helper component for MCJIT that simplifies tasks such as handling multiple IR modules, resolving symbols in the presence of multiple versions of a function, and tracking native code and other machine-level generated objects such as stackmaps.

\subsubsection{Check-list (artifact meta information)}

%{\em Fill in whatever is applicable with some informal keywords and remove the rest}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Evaluation and Expected Result}

The main component of the artifact is the interactive VM \tinyvm\ built on top of the LLVM MCJIT runtime environment and the \osrkit\ library. The VM supports interactive invocation of LLVM IR functions either generated at run-time or loaded from disk. The main design goal behind \tinyvm\ is the creation of an interactive environment for IR manipulation and JIT compilation of functions: for instance, it allows the user to insert OSR points in loaded functions, run optimization passes on them, display their CFGs, and invoke a function repeatedly for a specified number of times. \tinyvm\ supports dynamic library loading and linking, and includes a helper component for MCJIT that simplifies tasks such as handling multiple IR modules, resolving symbols in the presence of multiple versions of a function, and tracking native code and other machine-level generated objects such as stackmaps.

\input{artifact/session1}

\input{artifact/session2}
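\noindent For readers unfamiliar with the MCJIT workflow that \tinyvm\ wraps, the following is a minimal C++ sketch (not part of the artifact) of how a standard LLVM MCJIT client loads an IR module from disk, JIT-compiles it, and invokes a function; the module path and entry-point symbol are placeholders.

\begin{small}
\begin{verbatim}
// Sketch of the load--compile--invoke cycle that tinyvm automates.
// "bench/fasta.ll" and "bench_entry" are placeholder names.
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Support/raw_ostream.h"
#include <cstdio>
#include <memory>
#include <string>

int main() {
  // Set up the native target so MCJIT can emit machine code.
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();
  llvm::InitializeNativeTargetAsmParser();

  // Parse an IR module from disk.
  llvm::LLVMContext Ctx;
  llvm::SMDiagnostic Err;
  std::unique_ptr<llvm::Module> M =
      llvm::parseIRFile("bench/fasta.ll", Err, Ctx);
  if (!M) { Err.print("mcjit-sketch", llvm::errs()); return 1; }

  // Build an MCJIT execution engine owning the module.
  std::string ErrStr;
  llvm::ExecutionEngine *EE = llvm::EngineBuilder(std::move(M))
                                  .setErrorStr(&ErrStr)
                                  .setEngineKind(llvm::EngineKind::JIT)
                                  .create();
  if (!EE) { std::fprintf(stderr, "%s\n", ErrStr.c_str()); return 1; }

  // Generate native code and look up the compiled entry point.
  EE->finalizeObject();
  auto Fn = (int (*)())EE->getFunctionAddress("bench_entry");
  int Ret = Fn ? Fn() : -1;

  delete EE;
  return Ret;
}
\end{verbatim}
\end{small}

\noindent \tinyvm\ layers its interactive commands, OSR point insertion, and bookkeeping of multiple function versions on top of this basic cycle.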

\end{verbatim}
\end{small}

\noindent Experiment duration: $\approx$ 1 min 29 sec. Time per trial: $\approx 8.98$ sec. The benchmark with its hottest loop instrumented with a never-firing OSR point can be run as follows:

\begin{small}
\begin{verbatim}

\end{verbatim}
\end{small}

\noindent Experiment duration: $\approx$ 1 min 31 sec. Time per trial: $\approx 9.15$ sec. The ratio $9.15/8.98=1.01$ is the value reported for {\tt fasta} in Figure~\ref{fig:code-quality-base}.

%[Q2] What is the run-time overhead of an OSR transition, for instance to a clone of the running function?

%[Q3] What is the overhead of \osrkit\ for inserting OSR points and creating a stub or a continuation function?