We implemented the 4th-order Runge-Kutta method to solve a chaotic ODE system, and it turned out to be a powerful solver. However, one of its drawbacks is that the step size is fixed. When the solution happens to be smooth and changes slowly, the number of steps could be reduced for efficiency. On the other hand, if the solution changes sharply, a fixed step size may fail to resolve the abrupt variations, and the errors can accumulate over time. Therefore, for higher efficiency and accuracy, an adaptive step size Runge-Kutta method is a better ODE solver and can handle a wider range of systems. As the authors of Numerical Recipes in C++ put it, "many small steps should tiptoe through treacherous terrain, while a few great strides should speed through smooth uninteresting countryside." (W. H. Press 2007)

The step size is controlled by the truncation error, estimated as the difference between the fifth-order and fourth-order values. If the error exceeds the tolerance, we reduce the step size until the error falls below the tolerance. On the other hand, if the error is small, it is more efficient to increase the step size so that the error stays about the same order of magnitude as the tolerance. The general form of the adaptive step size Runge-Kutta method is as follows, where \(t_{n}\) and \(y_{n}\) are the time and the vector of the N ODE values at the nth step.

\begin{equation}
\begin{split}
k_{1}&=hf(t_{n},y_{n})\\
k_{2}&=hf(t_{n}+c_{2}h,\,y_{n}+a_{21}k_{1})\\
&\;\;\vdots\\
k_{6}&=hf(t_{n}+c_{6}h,\,y_{n}+a_{61}k_{1}+\dots+a_{65}k_{5})
\end{split}
\end{equation}

\begin{equation}
\begin{split}
y_{n+1}&=y_{n}+\sum_{i=1}^{6}b_{i}k_{i}\\
y^{*}_{n+1}&=y_{n}+\sum_{i=1}^{6}b^{*}_{i}k_{i}
\end{split}
\end{equation}

where the \(a\)'s, \(b\)'s, \(b^{*}\)'s, and \(c\)'s are constants found by Dormand and Prince (Dormand 1980), and all the values are tabulated in Chapter 17.2 of Numerical Recipes. The \(y^{*}_{n+1}\) is calculated to the fifth order, so the error term can be written as

\begin{equation} \Delta=\left|y_{n+1}-y^{*}_{n+1}\right| \end{equation}
Note that here \(\left|\,\cdot\,\right|\) denotes the absolute value of each element rather than the norm of the N-element vector. We take the maximum element of the \(\Delta\) vector, \(err\), and compare it to our tolerance \(err0\). First, if \(err\) is larger than \(err0\), we reduce the step size and recompute until \(err<err0\). Second, whichever case we are in, we can always rescale the step size with the same scaling factor,
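The element-wise error and its maximum over the N components can be computed in a few lines of C++; `max_error` is an illustrative name of our own, not a routine from the program described here.

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

// err = max_i |y_{n+1,i} - y*_{n+1,i}|: take the element-wise absolute
// difference of the two embedded solutions, then the maximum component.
double max_error(const std::vector<double>& y,
                 const std::vector<double>& ystar) {
    double err = 0.0;
    for (std::size_t i = 0; i < y.size(); ++i)
        err = std::max(err, std::fabs(y[i] - ystar[i]));
    return err;
}
```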

\begin{equation} scale=\left(\frac{err0}{err}\right)^{1/4}\times S \end{equation}
The \(S\) here is a safety factor with \(0<S<1\); we chose \(S=0.8\) in the program. The \(\frac{1}{4}\) power was used instead of \(\frac{1}{5}\) since it scales the step size up or down more aggressively, and it can be computed with two nested calls to the C++ sqrt function, which is simpler and more accurate than a general power. Thirdly, we cannot scale the step size abruptly, say from \(h_{0}\) to \(0.01h_{0}\) or \(50h_{0}\): with a \(50h_{0}\) step the method would miss features of the solution, while a \(0.01h_{0}\) step would be inefficient. We therefore clamp the scaling factor between the extreme values minscale and maxscale. The rescaled step size is passed on to the next step, and so on. Finally, to avoid an infinite while loop, we added a counter to keep track of the number of iterations. If the counter exceeds a certain number (\(100\) in the program), we break the while loop and print a message that \(err\) was never reduced below \(err0\) at this step. In practice this never happened, even over tens of cases with billions of steps; usually fewer than 5 iterations sufficed to bring the error within our tolerance.
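The rescaling rule can be sketched as follows. The clamp values minscale = 0.2 and maxscale = 5.0 are illustrative assumptions, since the text does not specify the values used in the program.

```cpp
#include <cmath>
#include <algorithm>

// New step size h * scale, with scale = S * (err0/err)^{1/4} computed
// as two nested square roots, then clamped to [minscale, maxscale] so
// the step size never changes too abruptly between steps.
// NOTE: the minscale/maxscale defaults are assumptions for illustration.
double rescale(double h, double err, double err0, double S = 0.8,
               double minscale = 0.2, double maxscale = 5.0) {
    double scale = S * std::sqrt(std::sqrt(err0 / err));
    scale = std::min(maxscale, std::max(minscale, scale));
    return h * scale;
}
```

When \(err = err0\) the factor reduces to the bare safety factor \(S\); a much smaller or much larger error hits the maxscale or minscale clamp respectively.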

Let us now discuss the specified error tolerance \(err0\) and the number of steps needed to reach a given final \(t\) value. The initial step size makes little difference, because it is quickly rescaled up or down to suit the solution and the system. Given the same number of steps, the final \(t\) value reached depends mainly on \(err0\): the smaller the tolerance, the smaller the step size at each step, so the final \(t\) can end up far below the expected value. We can then either increase the number of steps or relax the error tolerance, the latter being the more efficient option. In the following tests and computations this issue arose most of the time, so it is necessary to choose a good and acceptable \(err0\) and number of steps for the ODE system. We found that \(err0=10^{-5}\) of the initial value works well in most cases.

We can test this adaptive Runge-Kutta ODE solver, ark(), on a simple harmonic motion (SHM) system.

\begin{equation}
\begin{split}
y^{\prime\prime}(t)+y(t)&=0\\
y(0)&=1\\
y^{\prime}(0)&=0
\end{split}
\end{equation}
The solution, which we have already obtained with the ordinary Runge-Kutta method, is \(y(t)=\cos{t}\). We apply ark() to this system with \(err0=10^{-5}\) and \(nstep=5\times 10^{5}\). Both \(y(t)\) and \(y^{\prime}(t)\) are plotted in Fig. 1.
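Reducing the second-order equation to a first-order system gives \(u_{0}=y\), \(u_{1}=y^{\prime}\), with \(u_{0}^{\prime}=u_{1}\) and \(u_{1}^{\prime}=-u_{0}\). A right-hand-side function of the kind ark() would consume might look like the sketch below; the signature is an assumption for illustration, since the actual interface of ark() is not shown here.

```cpp
#include <vector>

// SHM y'' + y = 0 reduced to a first-order system:
// u[0] = y, u[1] = y', so u[0]' = u[1] and u[1]' = -u[0].
// Initial conditions: u = {1.0, 0.0} at t = 0.
std::vector<double> shm_rhs(double t, const std::vector<double>& u) {
    (void)t; // the system is autonomous, so t is unused
    return { u[1], -u[0] };
}
```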