Camil Demetrescu  over 8 years ago

Commit id: be9664fde7848aad72d5c8c7eb7fc2340cf62281


% !TEX root = article.tex
\section{Conclusions}
\label{se:conclusions}

In this paper, we have proposed an OSR framework that combines, in a unifying scheme, the advantages of several previous OSR techniques that no prior solution provides simultaneously, without resorting to native code manipulation or special intrinsics at the intermediate-representation level. Our approach encodes the OSR machinery using high-level language constructs: the continuation function is specialized for a given OSR landing pad, allowing extensive optimizations.

\ifx\noauthorea\undefined
\paragraph{Acknowledgements.}

For each benchmark, we measure CPU time over 10 trials preceded by an initial warm-up iteration; reported confidence intervals are stated at the 95\% confidence level.

\subsection{Results}
\label{ss:experim-results}
%We now present the main results of our experimental evaluation, aimed at addressing the following questions:
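As an illustration of the interval computation described above, the following sketch derives a two-sided 95\% confidence interval from 10 trials using Student's $t$ distribution. The timing samples are made up for illustration, and the critical value is hard-coded for $n-1 = 9$ degrees of freedom; this is not the paper's measurement harness.

```python
import math
import statistics

# Hypothetical CPU-time samples (seconds) for one benchmark:
# 10 trials, collected after a discarded warm-up iteration.
samples = [2.31, 2.28, 2.35, 2.30, 2.29, 2.33, 2.27, 2.32, 2.30, 2.34]

mean = statistics.mean(samples)
# Standard error of the mean: sample stdev / sqrt(n).
sem = statistics.stdev(samples) / math.sqrt(len(samples))

# Two-sided 95% critical value of Student's t with 9 degrees of
# freedom (a fixed constant, since the trial count is fixed at 10).
T_CRIT_95_DF9 = 2.262

half_width = T_CRIT_95_DF9 * sem
print(f"{mean:.3f} +/- {half_width:.3f} s")
```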

\end{table*}
\ifauthorea{\newline}{}

\paragraph{Q1: Impact on Code Quality.}
In order to measure how much a never-firing OSR point might impact code quality, we analyzed the source-code structure of each benchmark and profiled its run-time behavior to identify performance-critical sections for OSR point insertion. The distinction between open and resolved OSR points is nearly irrelevant in this context: we chose to focus on open OSR points, passing {\tt null} as the {\tt val} argument for the stub.

%the same as in the experiments reported in \mytable\ref{tab:sameFun}.
% figure
\ifdefined\noauthorea
\begin{figure}[bh]
\begin{center}
\includegraphics[width=0.95\columnwidth]{figures/code-quality-noBB/code-quality-noBB.eps}
\caption{\protect\input{figures/code-quality-noBB/caption}}
\end{center}
\end{figure}
\fi

% figure
\ifdefined\noauthorea
\begin{figure}[bh]
\begin{center}
%\vspace{-0.55cm}
\includegraphics[width=0.95\columnwidth]{figures/code-quality-O1-noBB/code-quality-O1-noBB.eps}
\caption{\protect\input{figures/code-quality-O1-noBB/caption}}
\end{center}
\end{figure}
\fi

\paragraph{Q2: Overhead of OSR Transitions.}
\mytable\ref{tab:sameFun} reports estimates of the average cost of performing an OSR transition to a clone of the running function. For each benchmark, we compute the difference in CPU time between the scenarios in which an always-firing and a never-firing resolved OSR point is inserted in the code, respectively; we then normalize this difference by the number of fired OSR transitions.
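The normalization step just described can be summarized by the following formula; the symbol names are ours, introduced here only for illustration:
\[
\bar{t}_{\mathrm{OSR}} \;\approx\; \frac{T_{\mathit{always}} - T_{\mathit{never}}}{n_{\mathit{fired}}},
\]
where $T_{\mathit{always}}$ and $T_{\mathit{never}}$ denote the CPU times measured with the always-firing and the never-firing resolved OSR point, respectively, and $n_{\mathit{fired}}$ is the number of fired OSR transitions.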