Unfortunately, we are unable to compute direct performance metrics for the solution by Lameed and Hendren, since its source code has not been released. The numbers reported in their paper~\cite{lameed2013feval} show that for these benchmarks the speed-up of the OSR-based approach is on average $30.1\%$ of the speed-up from hand-coded calls, ranging from $9.2\%$ to $73.9\%$; for the JIT-based approach the average grows to $84.7\%$, ranging from $75.7\%$ to $96.5\%$. Our optimization technique yields speed-ups that are very close to the upper bound set by by-hand optimization. In particular, in the worst case we observe $94.1\%$ of the hand-coded speed-up when the optimized code is JIT-compiled, which rises to $97.5\%$ when a cached version is available ({\tt odeRK4} benchmark). Compared to their OSR-based approach, the better type specialization enabled by the compensation entry block is a key driver of improved performance, as the benefits from a better type-specialized whole function body outweigh those from simply performing a direct call using boxed arguments and return values in place of the original \feval.
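
To make the comparison metric explicit, the following is a minimal formalization under the assumption that each percentage above is the ratio between the speed-up achieved by a given technique and the speed-up achieved by hand-coded calls; the symbols $T_{\mathit{base}}$, $T_{\mathit{tech}}$, and $T_{\mathit{hand}}$ (running time of the unoptimized code, of the code optimized by the technique under evaluation, and of the hand-coded version, respectively) are introduced here only for illustration:
\begin{equation*}
\mathit{ratio} \;=\; \frac{T_{\mathit{base}}/T_{\mathit{tech}}}{T_{\mathit{base}}/T_{\mathit{hand}}} \times 100\% \;=\; \frac{T_{\mathit{hand}}}{T_{\mathit{tech}}} \times 100\%.
\end{equation*}
Under this reading, a $94.1\%$ figure means that the optimized code runs only about $6\%$ slower than the hand-coded version.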