% !TEX root = article.tex
\section{Experimental Evaluation}
\label{se:experiments}

\begin{itemize}
\item {\bf Message 1}: how much does a never-firing OSR point impact code quality? We run a program with one or more OSR points and measure the slowdown caused by factors such as cache effects (due to code bloat) and increased register pressure introduced by the presence of the OSR points.
\item {\bf Message 2}: what is the overhead of an OSR transition to the same function? We run a program with a controlled OSR transition, e.g., with a counter that fires the OSR, and measure the impact of the actual OSR call (a source-level sketch of this instrumentation is shown after the list). We compute for each benchmark: 1) the average time per OSR transition; 2) the number of transferred live variables; 3) the total benchmark time with an always-firing OSR at each iteration of the hottest loop; 4) the total benchmark time with a never-firing OSR at each iteration of the hottest loop (baseline); 5) the number of iterations of the hottest loop (which equals the number of OSR transitions).
\item {\bf Message 3}: what is the overhead of the library for inserting OSR points? We compute for each benchmark: 1) the time required by \texttt{insertOpenOSR} (OSR point insertion + stub creation); 2) the time required by \texttt{insertFinalizedOSR} (OSR point insertion + generation of the continuation function); 3) the time required to generate the continuation function (see the timing sketch after the list).
\end{itemize}
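To make the measurement setup concrete, the following minimal sketch in plain C shows a source-level analogue of the instrumentation used for Messages 1 and 2: a guard at each iteration of the hottest loop that either never fires (baseline) or always fires and transfers the live variables to a continuation function. All names here (\texttt{hottest\_loop}, \texttt{continuation}, \texttt{osr\_fires}) are ours for illustration only; the actual instrumentation is performed by OSRKit at the LLVM IR level.

\begin{verbatim}
#include <stdint.h>

/* Continuation function: resumes the computation from the transferred
   live state (i, acc, n). In a real OSR transition this would be a
   freshly generated version of the function. */
static int64_t continuation(int64_t i, int64_t acc, int64_t n) {
    for (; i < n; i++)
        acc += i;
    return acc;
}

/* 1 emulates the always-firing configuration (Message 2, item 3);
   0 gives the never-firing baseline (Message 1; Message 2, item 4). */
static const int osr_fires = 0;

int64_t hottest_loop(int64_t n) {
    int64_t acc = 0;
    for (int64_t i = 0; i < n; i++) {
        if (osr_fires)                       /* OSR point (guard)        */
            return continuation(i, acc, n);  /* transfer live variables  */
        acc += i;                            /* repeated-addition body   */
    }
    return acc;
}
\end{verbatim}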
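For Message 3, the per-call timings can be collected with a simple wall-clock harness around each insertion call. The sketch below only illustrates the idea: \texttt{insert\_osr\_point} is a hypothetical placeholder, since the actual \texttt{insertOpenOSR} and \texttt{insertFinalizedOSR} calls operate on LLVM IR objects and their signatures are not reproduced here.

\begin{verbatim}
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for one library insertion call (OSR point
   insertion + stub creation, or + continuation-function generation,
   depending on the variant being measured). */
static void insert_osr_point(void) {
    /* ... insertion work ... */
}

/* Wall-clock time of a single insertion, in nanoseconds. */
static int64_t time_one_insertion(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    insert_osr_point();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (int64_t)(t1.tv_sec - t0.tv_sec) * 1000000000LL
         + (t1.tv_nsec - t0.tv_nsec);
}

int main(void) {
    printf("insertion: %lld ns\n", (long long)time_one_insertion());
    return 0;
}
\end{verbatim}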
\paragraph{Impact on code quality}