\end{itemize}

\paragraph{Impact on code quality}
In order to measure how much a never-firing OSR point might impact code quality, we analyzed the source-code structure of each benchmark and profiled its run-time behavior with {\tt gprof} to identify performance-critical sections for OSR point insertion.

For iterative benchmarks, we insert an OSR point in the body of their hottest loops. We classify a loop as hottest when its body is executed for a very high cumulative number of iterations (e.g., from a few thousand up to billions) and it either calls the method with the highest {\em self} execution time in the program, or it performs the most computationally intensive operations of the program in its own body. These loops are natural candidates for OSR point insertion, as they can be used (as in the Jikes RVM) to enable additional dynamic inlining opportunities, combined with the benefits of several control-flow (e.g., dead code elimination) and data-flow (e.g., constant propagation) optimizations based on the run-time values of live variables. In the shootout benchmarks, the number of such loops is typically one (two for {\tt spectral-norm}).

For recursive benchmarks, we insert an OSR point in the body of the method that accounts for the largest {\em self} execution time of the program. Such an OSR point might enable recompilation of the invoked method at a higher degree of optimization, [...] In the shootout benchmarks, {\tt binary-trees} and {\tt spectral-norm} show a recursive pattern.
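To give a concrete picture of the instrumentation, the sketch below shows what a never-firing OSR point placed in the hottest loop of a benchmark could look like at the C source level. The names {\tt osr\_flag} and {\tt osr\_stub()} are hypothetical, and the fragment is only meant to convey the shape of the inserted check; it is not the mechanism actually used in the experiments.

\begin{verbatim}
/* Illustrative sketch only: a never-firing OSR point at the top of the
   hottest loop of a benchmark. The volatile guard keeps the check alive,
   but it is never set at run time, so the (hypothetical) transfer routine
   osr_stub() is never invoked. */
#include <stddef.h>

static volatile int osr_flag = 0;     /* never set: OSR point never fires */

static void osr_stub(void) {
    /* would transfer execution to a new version of the function */
}

double hot_loop(const double *v, size_t n) {
    double acc = 0.0;
    for (size_t i = 0; i < n; ++i) {  /* hottest loop of the benchmark */
        if (osr_flag)                 /* never-firing OSR condition */
            osr_stub();
        acc += v[i] * v[i];           /* performance-critical work */
    }
    return acc;
}
\end{verbatim}

Comparing the code generated for such a function with and without the guarded check gives a first-order picture of the overhead that the mere presence of an OSR point can introduce in a hot loop.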