This method overcomes the limitations of JIT-based specialization, supporting optimization of \feval$(g,...)$ calls in functions that do not receive $g$ as a parameter. However, it is substantially slower for two main reasons:
\begin{enumerate}
\item When a function $f$ is first compiled from MATLAB to IR by McVM, the functions it calls via \feval\ are unknown and the type inference engine is unable to infer the types of their returned values. Hence, these values must be kept boxed in heap-allocated objects and handled with slow generic instructions (capable of operating on values of different types) in the IR representation of $f$.
For this reason, these generic instructions are inherited by the optimized continuation function $f'$ generated when an OSR is fired.
\item Guard computation in $f'$ can be rather expensive, as it may require checking many parameters.
\end{enumerate}
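The guard overhead of point 2 can be illustrated with a small sketch. The following Python fragment is purely illustrative (the names and the cache layout are our invention, not McVM's API): it mimics the pattern where the last-known \feval\ target and argument types are cached, and every invocation must re-check them before the fast path with a direct call can be taken.

```python
# Illustrative sketch of OSR-based guard logic; names are hypothetical.
def make_guarded_call(cached_target, cached_types, fast_path, slow_path):
    def call(target, *args):
        # Guard: the feval target and every argument type must match the
        # cached profile, otherwise fall back to the generic feval path.
        if (target is cached_target
                and len(args) == len(cached_types)
                and all(type(a) is t for a, t in zip(args, cached_types))):
            return fast_path(*args)          # direct, specialized call
        return slow_path(target, *args)      # generic boxed feval call
    return call
```

With many parameters, the `all(...)` check runs on every call, which is precisely the per-invocation cost the guard computation incurs in $f'$.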
\noindent Our approach combines the flexibility of OSR-based specialization with the efficiency of JIT-based specialization, answering an open question raised by Lameed and Hendren~\cite{lameed2013feval}. Similarly to OSR-based specialization, it does not place restrictions on the functions that can be optimized.
On the other hand, it works at IIR level as in JIT-based specialization, which makes it possible to perform type inference on the specialized code. Working at IIR level eliminates the two main sources of inefficiency of OSR-based specialization: 1) we can replace generic instructions with specialized instructions, and 2) the types of $g$'s arguments do not need to be cached or guarded, as they are statically inferred.
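The effect of specializing at IIR level can be sketched as a before/after pair. This Python fragment is a hypothetical illustration, not actual McVM IIR: the generic version dispatches every iteration through an \feval-like indirection on boxed values, while the specialized version calls the (statically known) target directly, with no guards.

```python
def generic_loop(g, xs):
    # Generic code: each iteration dispatches through a feval-like
    # indirection; the target and its types are unknown at compile time.
    feval = lambda fn, *a: fn(*a)   # stand-in for MATLAB's feval
    return [feval(g, x) for x in xs]

def specialized_loop(xs):
    # Specialized code once g is statically known to be a squaring
    # function (hypothetical target): direct computation, no dispatch,
    # argument types inferred rather than guarded.
    return [x * x for x in xs]
```

The two produce the same results, but the specialized version contains no runtime dispatch or type checks.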
\ifdefined\fullver
The first one is based on OSR: using the McOSR library~\cite{lameed2013modular}, \feval\ calls inside loops are instrumented with an OSR point and with profiling code that caches the last-known types of the arguments of each \feval\ instruction. When an OSR is fired at run-time, a code generator modifies the original function by inserting a guard to choose between a fast path containing a direct call and a slow path containing the original \feval\ call. The second technique is less general and relies on value-based JIT compilation: when the first argument of an \feval\ call is a parameter of the enclosing function, the compiler replaces each call to this function in all of its callers with a call to a special dispatcher. At run-time, the dispatcher evaluates the argument value to be used for the \feval\ and either executes previously compiled cached code or generates and JIT-compiles a version of the function optimized for the current value.