The main component of the artifact is an interactive VM called \tinyvm\ built on top of the LLVM MCJIT runtime environment and the \osrkit\ library. The VM provides an interactive environment for IR manipulation, JIT compilation, and execution of functions either generated at run-time or loaded from disk: for instance, it allows the user to insert OSR points in loaded functions, run optimization passes on them, display their CFGs, and repeatedly invoke a function for a specified number of times. \tinyvm\ supports dynamic library loading and linking, and includes a helper component for MCJIT that simplifies tasks such as handling multiple IR modules, symbol resolution in the presence of multiple versions of a function, and tracking native code and other machine-level generated objects such as Stackmaps. \tinyvm\ is located in {\small\tt /home/osrkit/Desktop/tinyvm/} and runs a case-insensitive command-line interpreter:

\begin{small}
\begin{verbatim}
osrkit@osrkit-AE:~/Desktop/tinyvm$ tinyvm
Welcome! Enter 'HELP' to show the list of commands.
TinyVM>
\end{verbatim}
\end{small}

\noindent Use ``help'' to print basic documentation on how to use the shell. Usage scenarios are discussed in \ref{ss:art-eval-res}.
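\noindent As a quick illustration, a minimal interactive session might look as follows. This is only a sketch: the IR module, the OSR parameters, and the function invocation are borrowed from the scripts discussed in \ref{ss:art-eval-res}.

\begin{small}
\begin{verbatim}
TinyVM> LOAD_IR shootout/n-body/bench.ll
TinyVM> INSERT_OSR 5 NEVER OPEN UPDATE IN bench AT %8 CLONE
TinyVM> bench(50000000)
TinyVM> QUIT
\end{verbatim}
\end{small}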

\subsubsection{Check-list (artifact meta information)}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Evaluation and Expected Result}
\label{ss:art-eval-res}

\input{artifact/session1}
\input{artifact/session2}

\subsubsection{Session 2: Performance Figures}

The experiments can be repeated by executing scripts on a selection of the \shootout\ benchmarks~\cite{shootout}. Each benchmark was compiled with {\tt clang} at both {\tt -O0} and {\tt -O1}. For each benchmark {\tt X}, {\tt tinyvm/shootout/X/} contains code in two versions:
\begin{itemize}[parsep=0pt]
\item {\tt bench} and {\tt bench-O1}: IR code of the benchmark;
%\item {\tt codeQuality}: IR code of the benchmark with the hottest loop instrumented with a never-firing OSR;
\item {\tt finalAlwaysFire} and {\tt finalAlwaysFire-O1}: IR code of the benchmark preprocessed by turning the body of the hottest loop into a separate function (see \ref{ss:experim-results}).
\end{itemize}
\noindent Each experiment runs a warm-up phase followed by 10 identical trials. We manually collected the figures from the console output and analyzed them, computing confidence intervals. We show how to run the code using {\tt n-body} as an example. Times reported in this section were measured in VirtualBox on an Intel Core i7 platform, a different setup from the one discussed in \ref{ss:bench-setup}.
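For reference, a standard way to turn the $n=10$ trial times $t_1,\dots,t_n$ into a confidence interval (a generic sketch; the confidence level $1-\alpha$ used for the figures is not restated here) is
\[
\bar{t} = \frac{1}{n}\sum_{i=1}^{n} t_i,
\qquad
s^2 = \frac{1}{n-1}\sum_{i=1}^{n} \bigl(t_i - \bar{t}\bigr)^2,
\qquad
\bar{t} \;\pm\; t_{n-1,\,1-\alpha/2}\,\frac{s}{\sqrt{n}},
\]
where $t_{n-1,\,1-\alpha/2}$ is the Student's $t$ quantile with $n-1$ degrees of freedom.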

\paragraph{Question Q1.} The purpose of this experiment is to assess the impact of OSR points on code quality. The first step consists of generating figures for the baseline (uninstrumented) benchmark version, using the following script:
\begin{small}
\begin{verbatim}
LOAD_IR shootout/n-body/bench.ll
bench(50000000)
REPEAT 10 bench(50000000)
QUIT
\end{verbatim}
\end{small}
\noindent which loads the IR code, performs a warm-up execution of the benchmark, and then 10 repetitions. The experiment takes $\approx1$m, with a time per trial of $\approx5.725$s. The benchmark with the hottest loop instrumented with a never-firing OSR can be run with the following script:

\begin{small}
\begin{verbatim}
LOAD_IR shootout/n-body/bench.ll
INSERT_OSR 5 NEVER OPEN UPDATE IN bench AT %8 CLONE
bench(50000000)
REPEAT 10 bench(50000000)
QUIT
\end{verbatim}
\end{small}
\noindent The experiment takes $\approx1$m, with a time per trial of $\approx5.673$s. The ratio $5.673/5.725=0.990$ is the figure reported for {\tt n-body} in \ref{fig:code-quality-base}. The experiment for building \ref{fig:code-quality-O1} uses the scripts in {\tt bench-O1} and {\tt codeQuality-O1}.

\paragraph{Question Q2.}
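For the always-firing OSR experiment ({\tt finalAlwaysFire}), a possible script is sketched below; it assumes the preprocessed IR is simply loaded and run like the Q1 versions, and the file name and invocation are assumptions mirroring the scripts above.
\begin{small}
\begin{verbatim}
LOAD_IR shootout/n-body/finalAlwaysFire.ll
bench(50000000)
REPEAT 10 bench(50000000)
QUIT
\end{verbatim}
\end{small}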