
AEP 4830 HW5 Three or More Body Problem

# Adaptive Stepsize Runge-Kutta Algorithm

We implemented the 4th order Runge-Kutta method to solve chaotic ODE systems, and it turned out to be a powerful solver. However, one of its drawbacks is that the step size is fixed. When the solution happens to be smooth and changes slowly, the number of steps could be reduced for efficiency. On the other hand, if the solution is changing sharply, a fixed step size could fail to account for the abrupt variations, and the errors might accumulate over time. As a result, for higher efficiency and accuracy, the adaptive step size Runge-Kutta method is a better ODE solver and can handle more equations at the same time. As the authors of Numerical Recipes in C++ put it, “many small steps should tiptoe through treacherous terrain, while a few great strides speed through smooth uninteresting countryside.”\cite{NumRec}

The step size is controlled by the truncation error between the fifth-order and fourth-order values. If the error exceeds the tolerance, we reduce the step size until the error falls below the tolerance. On the other hand, if the error is small, it is more efficient to increase the step size so that the error stays about the same order of magnitude as the tolerance. The general form of the adaptive step size Runge-Kutta method is as follows, where *t*_{n} and *y*_{n} are the time and the vector containing the N ODE values at the nth step.

\begin{equation} \begin{split} k_1 = hf(t_n, y_n)\\ k_2 = hf(t_n+c_2h, y_n+a_{21}k_1)\\ ...\\ k_6 = hf(t_n+c_6h, y_n+a_{61}k_1+...+a_{65}k_5)\\ \end{split} \end{equation}

\begin{eqnarray}
y_{n+1} = y_n+\sum_{i=1}^{6} b_ik_i \\
y^*_{n+1} = y_n+\sum_{i=1}^{6} b^*_ik_i
\end{eqnarray} where the *a*’s, *b*’s, *b*^{*}’s and *c*’s are the various constants found by Dormand and Prince\cite{Dormand_1980}, and all the values are tabulated in Chapter 17.2 of Numerical Recipes. The *y*_{n + 1}^{*} is calculated to fifth order, so the error term can be written as \begin{equation}
\Delta = \left| y_{n+1} - y^*_{n+1} \right|
\end{equation} Note that here the |⋅| is the absolute value of each element rather than the norm of the N-element vector. By picking the maximum element of the *Δ* vector, *err*, we can compare *err* to our tolerance *err0*. First of all, if *err* is larger than *err0*, we have to scale down the step size and recalculate until *err* < *err0*. Secondly, for scaling the step size, whichever the case is, we can always apply the same scaling factor, \begin{equation}
scale = \left( \frac{err0}{err} \right)^{1/4} \times S
\end{equation} The *S* here is the safety factor with the requirement |*S*| < 1; we chose *S* = 0.8 in the program. The $\frac{1}{4}$ power was used instead of $\frac{1}{5}$ since it scales the step size up/down more efficiently, and it can be computed by applying the C++ function *sqrt* twice, which is fast and accurate. Thirdly, we cannot let the step size change abruptly, say from *h*_{0} to 0.01*h*_{0} or 50*h*_{0}: the adaptive step size Runge-Kutta method would either skip over features with a 50*h*_{0} step or become inefficient with a 0.01*h*_{0} step. We therefore impose extreme scaling bounds *maxscale* and *minscale*. The re-scaled step size is passed to the next step, and so on. Finally, in order to avoid an infinite *while* loop, we added a *counter* to keep track of the number of iterations. If *counter* exceeds a certain number (100 in the program), we break the while loop and print a message that *err* was never reduced below *err0* at this step. However, this never happened even though we tried tens of cases with billions of steps; usually it took fewer than 5 iterations to bring the error within our tolerance.
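As an illustration (not the actual homework code), here is a minimal C++ sketch of this step-size control loop. For brevity it estimates the truncation error by step doubling — comparing one full RK4 step against two half steps — instead of the embedded fifth-order Dormand–Prince pair, but the scaling law, the safety factor *S* = 0.8, the *minscale*/*maxscale* bounds, and the loop *counter* follow the description above.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;
using Deriv = Vec (*)(double, const Vec&);

// One classical 4th-order Runge-Kutta step of size h.
Vec rk4_step(Deriv f, double t, const Vec& y, double h) {
    const size_t N = y.size();
    auto shift = [&](const Vec& k, double c) {  // y + c*h*k
        Vec out(N);
        for (size_t i = 0; i < N; ++i) out[i] = y[i] + c * h * k[i];
        return out;
    };
    Vec k1 = f(t, y);
    Vec k2 = f(t + 0.5 * h, shift(k1, 0.5));
    Vec k3 = f(t + 0.5 * h, shift(k2, 0.5));
    Vec k4 = f(t + h, shift(k3, 1.0));
    Vec yn(N);
    for (size_t i = 0; i < N; ++i)
        yn[i] = y[i] + h / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
    return yn;
}

// One adaptive step: advances t and y, updating the step size h in place.
// err0 is the tolerance; scale = S*(err0/err)^{1/4}, clamped to
// [minscale, maxscale].
Vec adaptive_step(Deriv f, double& t, const Vec& y, double& h, double err0) {
    const double S = 0.8, minscale = 0.2, maxscale = 5.0;
    for (int counter = 0; counter < 100; ++counter) {
        Vec full = rk4_step(f, t, y, h);                          // one step h
        Vec half = rk4_step(f, t + 0.5 * h,
                            rk4_step(f, t, y, 0.5 * h), 0.5 * h); // two steps h/2
        double err = 0.0;                                         // max |Delta_i|
        for (size_t i = 0; i < y.size(); ++i)
            err = std::max(err, std::fabs(full[i] - half[i]));
        // (err0/err)^{1/4} computed with two sqrt calls, as in the text
        double scale = S * std::sqrt(std::sqrt(err0 / std::max(err, 1e-300)));
        scale = std::min(std::max(scale, minscale), maxscale);
        if (err <= err0) {   // accept: advance and propose a larger h next time
            t += h;
            h *= scale;
            return half;
        }
        h *= scale;          // reject: shrink h and recalculate
    }
    std::printf("err never fell below err0 at t = %g\n", t);
    return y;
}
```

Calling `adaptive_step` in a loop integrates the system; for a smooth solution such as exponential decay, the step size grows automatically toward *maxscale* per step.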

Let’s talk about the specified error tolerance *err0* and the number of steps needed to reach a certain final *t* value. The initial step size doesn’t make a difference, because the step size gets scaled up/down depending on the solution and the system. Given the same number of steps, the final *t* value reached depends mainly on *err0*: the smaller the tolerance, the smaller the step size at each step, so the final *t* can end up far below the expected value. We can either increase the number of steps or increase the error tolerance, the latter being the more efficient option. In the following testing and computations this issue arose most of the time, and it’s necessary to choose a good and acceptable *err0* and number of steps to solve the ODE system. We found that *err0* = 10^{−5} of the initial value works well for most cases.

AEP 4830 HW4 Ordinary Differential Equations and Chaotic Systems

# 4th Order Runge Kutta ODE Solver

Suppose we have a set of *N* ODEs of the following form.

\begin{equation}
\frac{d\vec{y}}{dt} = \vec{f}(t, \vec{y})
\end{equation} where $\vec{y}$ and $\vec{f}$ are N-dimensional vectors of functions. The 4th order Runge Kutta method advances the solution one step *h* at a time with the following algorithm; by taking a large number of steps, we obtain the numerical solution to the set of ODEs.\cite{NumRec}

\begin{equation}
\vec{y_{n}} = \vec{y}(t_n)
\end{equation} \begin{eqnarray}
\vec{k_1} = h\vec{f}(t_n,\vec{y_n})\\
\vec{k_2} = h\vec{f}(t_n+0.5h,\vec{y_n}+0.5\vec{k_1})\\
\vec{k_3} = h\vec{f}(t_n+0.5h,\vec{y_n}+0.5\vec{k_2})\\
\vec{k_4} = h\vec{f}(t_n+h,\vec{y_n}+\vec{k_3})
\end{eqnarray} The value for $\vec{y_{n+1}}$ one step away can be calculated.

\begin{equation}
\vec{y_{n+1}} = \vec{y_{n}}+\frac{1}{6}\left( \vec{k_1}+2\vec{k_2}+2\vec{k_3}+\vec{k_4} \right)
\end{equation} Finally, by repeating this procedure starting from $\vec{y_{n+1}}$, the full solution can be obtained.
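The equations above can be sketched in a few lines of C++. This is a minimal illustration, not the homework program itself; the harmonic-oscillator right-hand side is just an example system chosen here.

```cpp
#include <cmath>
#include <vector>

using Vec = std::vector<double>;

// Example right-hand side f(t, y): simple harmonic oscillator,
// y = (x, v) with dx/dt = v, dv/dt = -x (illustrative choice).
Vec f(double /*t*/, const Vec& y) {
    return Vec{ y[1], -y[0] };
}

// One RK4 step, with k_i = h*f(...) exactly as in the equations above.
Vec rk4_step(double t, const Vec& y, double h) {
    const size_t N = y.size();
    auto hf = [&](double tt, const Vec& yy) {   // k = h * f(t, y)
        Vec k = f(tt, yy);
        for (double& v : k) v *= h;
        return k;
    };
    auto plus = [&](const Vec& k, double c) {   // y + c*k
        Vec out(N);
        for (size_t i = 0; i < N; ++i) out[i] = y[i] + c * k[i];
        return out;
    };
    Vec k1 = hf(t, y);
    Vec k2 = hf(t + 0.5 * h, plus(k1, 0.5));
    Vec k3 = hf(t + 0.5 * h, plus(k2, 0.5));
    Vec k4 = hf(t + h, plus(k3, 1.0));
    Vec yn(N);
    for (size_t i = 0; i < N; ++i)
        yn[i] = y[i] + (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]) / 6.0;
    return yn;
}
```

Stepping this oscillator through one full period should return it very close to its starting point, since the global error of RK4 scales as *h*^{4}.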

PHYS6562 F2 Daily Question

## Absorption boundary conditions

First observe that the probability density at the origin at any time is zero.

\begin{equation} \rho(0,t) = 0 \end{equation}

And the probability density at *t* = 0, with the particle starting at *x* = *x*′, is

\begin{equation} \rho(x,0) = \delta(x-x') \end{equation}

We can place a negative image probability density at *x* = −*x*′ at *t* = 0. The total probability density at *t* = 0 is then

\begin{equation} \rho(x,0) = \delta(x-x')-\delta(x+x') \end{equation}

The Green’s function for each delta function is a time-dependent Gaussian distribution,

\begin{equation} G(x,t) = \frac{1}{\sqrt{4\pi Dt}} e^{-x^2/4Dt} \end{equation}

with the particle initially at the origin. In our case each delta function diffuses into such a Gaussian, so the full solution is the difference of the two:

\begin{equation} \rho(x,t) = \frac{1}{\sqrt{4\pi Dt}} \left( e^{-(x-x')^2/4Dt} - e^{-(x+x')^2/4Dt}\right) \end{equation}
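As a quick sanity check (added here, not part of the original), the image construction satisfies both conditions for the physical region *x* ≥ 0:

```latex
% Boundary condition: at x = 0 the two Gaussians are equal and cancel
\rho(0,t) = \frac{1}{\sqrt{4\pi Dt}}\left( e^{-x'^2/4Dt} - e^{-x'^2/4Dt} \right) = 0

% Initial condition: as t \to 0 each Gaussian tends to a delta function,
% and for x, x' > 0 the image term \delta(x+x') never contributes
\rho(x,0) = \delta(x-x') - \delta(x+x') = \delta(x-x') \quad (x > 0)
```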

The time evolution of the probability distribution is shown in fig.1.


PHYS6562 W3 Daily Question

## Entropy maximization and temperature

First of all, $\Omega_1(E_1)\delta E_1$ and $\Omega_2(E_2)\delta E_2$ are the phase space volumes of the two subsystems. Since most of the volume in phase space is near the surface, we can instead write $\Omega_1(E_1)$ and $\Omega_2(E_2)$ to denote the two volumes in phase space. Second, it’s clear that the probability density of subsystem 1 having energy $E_1$ is proportional to the product of the ’allowed’ volumes of the two subsystems.

\begin{equation}
\rho_1(E_1) \propto \Omega_1(E_1)\Omega_2(E_2)
\end{equation} where *E*_{2} = *E* − *E*_{1}. And finally, we impose the normalization condition:

\begin{equation}
\int \rho_1(E_1)dE_1 = A \int \Omega_1(E_1)\Omega_2(E-E_1)dE_1 = A \Omega(E) = 1
\end{equation} where A is the proportionality constant, equal to $1/\Omega(E)$. Therefore,

\begin{equation} \rho_1(E_1) = \frac{\Omega_1(E_1)\Omega_2(E_2)}{\Omega(E)} \end{equation}

If we maximize the probability at *E*_{1}:

\begin{equation} \frac{d\rho_1}{dE_1} = 0 \end{equation}

Carrying out the derivative of the product $\Omega_1(E_1)\Omega_2(E-E_1)$ with the product rule and dividing by $\Omega_1\Omega_2$ (note that $dE_2/dE_1 = -1$, which gives the second term its minus sign):

\begin{equation} \frac{1}{\Omega_1}\frac{d\Omega_1}{dE_1} - \frac{1}{\Omega_2}\frac{d\Omega_2}{dE_2} = 0 \end{equation}

By plugging in the definition of entropy $S = k_B \log(\Omega)$,

\begin{equation} \frac{1}{k_B}\frac{dS_1}{dE_1} - \frac{1}{k_B}\frac{dS_2}{dE_2} = 0 \end{equation}

\begin{equation} \frac{dS_1}{dE_1} + \frac{dS_2}{dE_1} = \frac{d}{dE_1}\left(S_1(E_1)+S_2(E_2)\right) = 0 \end{equation} which implies the maximization of the sum of the entropies of the two subsystems.

For (6), using the definition of temperature $1/T = dS/dE$, we see that when the probability density $\rho_1(E_1)$ is maximized,

\begin{equation} \frac{1}{T_1} = \frac{1}{T_2} \end{equation}

PHYS6562 F3 Daily Question

## Indistinguishable particles

For N distinguishable particles, if at a certain instant they occupy N states in phase space, there are *N*! different choices of such a configuration. The total volume for this system in phase space is then *N*! times the product of the volumes for each particle in its own sub phase space.

\begin{equation} \Omega_D = N! \space \omega_1 \omega_2 \omega_3 ... \omega_N \end{equation}

If the particles are now indistinguishable, there is only one such configuration, without permutations.

\begin{equation} \Omega_U = \omega_1 \omega_2 \omega_3 ... \omega_N \end{equation}

Therefore, $\Omega_D = N!\space \Omega_U $.
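The permutation factor *N*! grows extremely fast. As a small illustrative check (not part of the original argument), $\ln(\Omega_D/\Omega_U) = \ln N!$ can be computed with the log-gamma function and compared against Stirling's approximation $N\ln N - N$, the form one actually uses in entropy counts; the function names below are ours.

```cpp
#include <cmath>

// ln(Omega_D / Omega_U) = ln N!, computed via the log-gamma function
double log_permutation_factor(int N) {
    return std::lgamma(N + 1.0);   // ln(N!) = lgamma(N + 1)
}

// Stirling's approximation: ln N! ~ N ln N - N for large N
double stirling(int N) {
    return N * std::log(static_cast<double>(N)) - N;
}
```

For macroscopic *N* the two agree to many digits, which is why the $k_B \ln N!$ correction to the entropy is routinely replaced by $k_B(N\ln N - N)$.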

For the mixing problem of black and white particles at the same pressure on both sides, if you cannot tell the difference between black and white ones, you cannot extract work from the mixing. Actually, you won’t even know whether they have mixed or not. However, suppose the black and white particles are at different pressures, and you still cannot tell them apart. If there were a membrane that allowed the particles at lower pressure to pass through, this membrane could tell the difference between the two sides without telling the particles apart. Then work can be extracted.

Now if a door can distinguish the black from the white particles and lets through only the white ones, after a very long time the white particles will distribute themselves equally between the two sides, i.e. there will be $\frac{1}{4} N\space white + 0 \space black$ on the left and $\frac{1}{4} N \space white + \frac{1}{2} N \space black$ on the right. We can then extract work from it due to the pressure difference between the two sides.