Camil Demetrescu
over 8 years ago
Commit id: 7388fed07ff44c60a45075f66ef07ff4f3988457
diff --git a/artifact/artifact.tex b/artifact/artifact.tex
index 1278dfa..8402cd4 100644
--- a/artifact/artifact.tex
+++ b/artifact/artifact.tex
...
\subsubsection{Session 2: Performance Figures}
The experiments can be repeated by executing scripts on a selection of the \shootout\ benchmarks~\cite{shootout}. Each benchmark was compiled with {\tt clang} at both {\tt -O0} and {\tt -O1}. For each benchmark {\tt X},
the directory {\tt tinyvm/shootout/X/}
contains the unoptimized and optimized ({\tt -O1}) IR code:
\begin{itemize}[parsep=0pt]
\item {\tt bench}: IR code
of the benchmark;
\item {\tt codeQuality}: IR code
({\tt -O0}) with the hottest loop instrumented with a never-firing OSR;
\item {\tt finalAlwaysFire}: IR code
({\tt -O1}) with the hottest loop instrumented with an always-firing OSR.
\end{itemize}
\noindent Each experiment runs
a warm-up phase followed by 10 identical trials. We manually collected the figures from the console output and analyzed them, computing confidence intervals, etc.
We show how to run the code using {\tt fasta} as an example. For slow steps, we report the time required on our test platform (\mysection\ref{ss:bench-setup}).
\paragraph{Baseline.} The first step consists of generating figures for the baseline
(uninstrumented) version of each
benchmark:
\begin{small}
\begin{verbatim}
tinyvm$ tinyvm shootout/scripts/bench/fasta
\end{verbatim}
\end{small}
\noindent Experiment duration: $\approx$90 sec. Time per trial: $\approx$9 sec.
\paragraph{Question Q1.} The purpose of this experiment is to assess the impact of the presence of OSR points on code quality.
diff --git a/experim.tex b/experim.tex
index 6c50dc7..4ea6c0c 100644
--- a/experim.tex
+++ b/experim.tex
...
\ifauthorea{\newline}{}
\subsection{Benchmarks and Setup}
\label{ss:bench-setup}
We address questions Q1--Q3 by analyzing the performance of \osrkit\ on a selection of the \shootout\ benchmarks~\cite{shootout} running in a proof-of-concept virtual machine we developed in LLVM. In particular, we focus on single-threaded benchmarks that do not rely on external libraries to perform their core computations. Benchmarks and their description are reported in \mytable\ref{tab:shootout}; four of them ({\tt b-trees}, {\tt mbrot}, {\tt n-body} and {\tt sp-norm}) are evaluated against two workloads of different size.
%In this section we present a preliminary experimental study of our OSR implementation in TinyVM. Our experiments are based on the \shootout\ test suite, also known as the Computer Language Benchmark Game~\cite{shootout}. In particular, we focus on single-threaded benchmarks that do not rely on external libraries to perform their core computations.