Camil Demetrescu edited experim.tex  over 8 years ago

Commit id: c9d25d75221c34a5cb4e7b5ff131e59ced0c279f

\begin{itemize}
  \item {\bf Message 1}: how much does a never-firing OSR point impact code quality? We run a program with one or more OSR points that never fire, and measure the slowdown caused by their mere presence, e.g., through cache effects (due to code bloat) and increased register pressure.
  \item {\bf Message 2}: what is the overhead of an OSR transition to the same function? We run a program with a controlled OSR transition, e.g., guarded by a counter that fires the OSR, and measure the cost of the actual OSR call [we already tried this with the repeated-addition microbenchmark simple\_loop\_SSA.ll]; a minimal sketch of this setup is given after the list.
  \item {\bf Message 3}: what is the overhead of the library for inserting OSR points? We compute: 1) the average time per OSR transition; 2) the number of transferred live variables; 3) the total benchmark time with an always-firing OSR at each iteration of the hottest loop; 4) the total benchmark time without OSR instrumentation (baseline); 5) the number of iterations of the hottest loop (which equals the number of OSR transitions). The per-transition estimate derived from these quantities is given after the list.
\end{itemize}
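As a minimal C sketch of the controlled-transition setup of Message~2 (the names {\tt hot\_loop} and {\tt continuation} and the guard condition are hypothetical, not the library's actual instrumentation, which operates at the LLVM IR level):

\begin{verbatim}
#include <stdio.h>

/* Stand-in for the continuation reached after the OSR transition:
 * in the real setting this would be a recompiled version of the
 * function, entered with the live variables (i, sum, n). */
static long continuation(long i, long sum, long n) {
    for (; i < n; i++)
        sum += i;
    return sum;
}

static long hot_loop(long n) {
    long sum = 0;
    for (long i = 0; i < n; i++) {
        if (i == n / 2)                      /* counter-based guard fires once;  */
            return continuation(i, sum, n);  /* replacing it with if (0) yields  */
        sum += i;                            /* the never-firing case of Msg. 1  */
    }
    return sum;
}

int main(void) {
    printf("%ld\n", hot_loop(1000000L));
    return 0;
}
\end{verbatim}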
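From quantities 3)--5) of Message~3, and under the assumption that the instrumented and baseline runs differ only in the OSR calls, the average time per OSR transition can be estimated as
\[
t_{\mathrm{OSR}} \;\approx\; \frac{T_{\mathrm{instr}} - T_{\mathrm{base}}}{N}
\]
where $T_{\mathrm{instr}}$ is the total benchmark time with an always-firing OSR point at each iteration of the hottest loop, $T_{\mathrm{base}}$ is the baseline time without OSR instrumentation, and $N$ is the number of iterations of the hottest loop, i.e., the number of OSR transitions.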

\paragraph{Impact on code quality}

For {\tt b-trees}, the only benchmark in our suite that exhibits a recursive pattern, we insert an OSR point in the body of the method accounting for the largest {\em self} execution time of the program. Such an OSR point could be used to trigger recompilation of the code at a higher optimization level, or to enable some form of dynamic optimization (for instance, in a recursive search algorithm we might want to inline the user-provided comparator method at its call site).
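As a hedged illustration of the comparator scenario (the node layout and {\tt cmp\_fn} type are hypothetical, not taken from the {\tt b-trees} benchmark):

\begin{verbatim}
#include <stdio.h>
#include <stddef.h>

/* Hypothetical node and comparator types, for illustration only. */
typedef struct node { long key; struct node *left, *right; } node;
typedef int (*cmp_fn)(long a, long b);

static int long_cmp(long a, long b) { return (a > b) - (a < b); }

/* Generic recursive search: each step pays an indirect call to cmp.
 * An OSR point placed in this body could trigger recompilation of a
 * version specialized to the comparator in use, turning the indirect
 * call into a direct one that the compiler can inline. */
static node *search(node *t, long key, cmp_fn cmp) {
    if (t == NULL) return NULL;
    int c = cmp(key, t->key);
    if (c == 0) return t;
    return search(c < 0 ? t->left : t->right, key, cmp);
}

int main(void) {
    node leaf = { 7, NULL, NULL };
    node root = { 10, &leaf, NULL };
    printf("%d\n", search(&root, 7, long_cmp) != NULL);
    return 0;
}
\end{verbatim}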