\section{Implementing the Objective and Gradient in C++}
The package supports the implementation of the objective and gradient functions in C++, which may yield significant speed improvements over the corresponding R implementations. The optimization routine's API accepts both R function objects and external pointers to compiled C++ functions. To perform optimization on the Rosenbrock function, we begin by defining the C++ implementations of the objective and of the gradient as character strings, using the Rcpp package:
\begin{lstlisting}
objective.include <- 'Rcpp::NumericVector rosenbrock(SEXP xs) {
...
out[1] = g2;
return(out);
}'
\end{lstlisting}
Next, we assign to two character strings the bodies of functions that, once compiled, return external pointers to the objective and the gradient:
\begin{lstlisting}
objective.body <- '
  typedef Rcpp::NumericVector (*funcPtr)(SEXP);
  return(XPtr<funcPtr>(new funcPtr(&rosenbrock)));
'
gradient.body <- '
  typedef Rcpp::NumericVector (*funcPtr)(SEXP);
  return(XPtr<funcPtr>(new funcPtr(&rosengrad)));
'
\end{lstlisting}
Finally, we compile this ensemble using the inline package:
\begin{lstlisting}
objective <- cxxfunction(signature(), body=objective.body,
inc=objective.include, plugin="Rcpp")
gradient <- cxxfunction(signature(), body=gradient.body,
inc=gradient.include, plugin="Rcpp")
\end{lstlisting}
The external pointers returned by calling the two compiled generator functions can then be supplied to the lbfgs routine:
\begin{lstlisting}
out.CPP <- lbfgs(objective(), gradient(), c(-1.2,1), invisible=1)
\end{lstlisting}
We define the same functions in R for comparison purposes:
\begin{lstlisting}
objective.R <- function(x) {
  100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
}
gradient.R <- function(x) {
  c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
    200 * (x[2] - x[1]^2))
}
\end{lstlisting}
A quick microbenchmark comparison, performed on OS X 10.9.3 with a 2.9 GHz Intel Core i7 processor and 8 GB of 1600 MHz DDR3 memory, reveals significant speed improvements:
\begin{lstlisting}
microbenchmark(out.CPP <- lbfgs(objective(), gradient(), c(-1.2,1), invisible=1),
out.R <- lbfgs(objective.R, gradient.R, c(-1.2,1), invisible=1))
\end{lstlisting}
The results are the following:
\begin{lstlisting}
Unit: microseconds
                                                                 expr     min       lq   median       uq      max neval
 out.CPP <- lbfgs(objective(), gradient(), c(-1.2, 1), invisible = 1)  79.159  83.3620  88.1875  93.8975  189.992   100
   out.R <- lbfgs(objective.R, gradient.R, c(-1.2, 1), invisible = 1) 275.400 285.6185 303.1600 316.7580 1393.450   100
\end{lstlisting}