% Revision by Camil Demetrescu, commit 54e82e47b0ba83d9b7e4abfb42b08301b13dced4

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Experiment Workflow}

We propose two usage sessions. In the first session, we show how to generate and instrument LLVM IR code based on the \texttt{isord} example presented in \mysection\ref{se:osr-llvm}. The second session focuses on how to run the scripts used to generate the performance tables of \mysection\ref{se:experiments} related to questions Q1, Q2, and Q3. Question Q4 is based on additional third-party software (the MATLAB McVM runtime)\footnote{The source code of the version used in the paper, which we ported to LLVM 3.6+ and extended with the {\tt feval} optimization technique discussed in \mysection\ref{ss:eval-opt-mcvm}, is available at \url{https://github.com/dcdelia/mcvm}.} and is not addressed in the artifact.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Evaluation and Expected Result}

\noindent Each experiment runs a warm-up phase followed by 10 identical trials. We manually collected the figures from the console output and analyzed them, computing confidence intervals. We show how to run the code using {\tt n-body} as an example. Times reported in this section have been measured in VirtualBox on an Intel Core i7 platform, a different setup than the one discussed in \mysection\ref{ss:bench-setup}.

\paragraph{Question Q1.} The purpose of this experiment is to assess the impact of OSR points on code quality. The first step consists in generating figures for the baseline (uninstrumented) benchmark version. Go to {\small\tt /home/osrkit/Desktop/tinyvm} and type:

\begin{small}
\begin{verbatim}
tinyvm$ tinyvm shootout/scripts/bench/n-body
\end{verbatim}
\end{small}
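As noted above, the figures were collected manually from the console output and summarized with confidence intervals. A minimal sketch of that post-processing step is shown below; the trial timings are illustrative placeholders (not measured values), and the use of a z-value of 1.96 for a 95\% interval is a simplifying assumption of this sketch, not necessarily the exact procedure used for the paper.

```python
# Hypothetical post-processing sketch: given the 10 trial timings read
# off the console, compute the sample mean and a 95% confidence
# interval half-width (z = 1.96, normal approximation).
import math

def mean_and_ci95(samples):
    """Return (mean, half-width of a 95% confidence interval)."""
    n = len(samples)
    mean = sum(samples) / n
    # Unbiased sample variance (divide by n - 1).
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    half = 1.96 * math.sqrt(var / n)
    return mean, half

# Placeholder timings in seconds for the 10 identical trials.
trials = [10.42, 10.39, 10.47, 10.40, 10.44,
          10.41, 10.43, 10.38, 10.45, 10.42]
m, h = mean_and_ci95(trials)
print(f"{m:.3f} +/- {h:.3f} s")
```

For a sample this small (n = 10), a Student's t multiplier (about 2.262 for 9 degrees of freedom) would give a slightly wider, more conservative interval than the z-value used here.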

The second step consists in generating figures for the version instrumented with OSR points:

\begin{small}
\begin{verbatim}
tinyvm$ tinyvm shootout/scripts/codeQuality/n-body
\end{verbatim}
\end{small}
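Q1 is answered by comparing the figures produced by the two runs. A minimal sketch of that comparison, with purely illustrative (not measured) timings, is:

```python
# Hypothetical comparison sketch for Q1: relative slowdown of the
# OSR-instrumented version over the uninstrumented baseline.
# Both timings below are made-up placeholders, not measured data.
baseline_time = 10.42      # seconds, baseline n-body run
instrumented_time = 10.51  # seconds, n-body with OSR points inserted

overhead = (instrumented_time - baseline_time) / baseline_time
print(f"OSR point overhead: {overhead * 100:.2f}%")
```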