Macalester POTW 1201: What Goes Up Might Not Come Down

## Problem Statement

A random walk on the 2-dimensional integer lattice begins at the origin. At each step, the walker moves one unit either left, right, or up, each with probability $\frac13$. (No downward steps ever.) A walk is a success if it reaches the point (1, 1). What is the probability of success?

Note: One can vary the problem by changing the target point, e.g., using (1, 0) or (0, 1) instead. Perhaps there is a good method for resolving the general case of target (*a*, *b*).

Source: Bruce Torrence, Randolph-Macon College
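The problem is easy to sanity-check numerically. Below is a minimal Monte Carlo sketch (the function name, seed, and trial count are illustrative choices, not part of the problem): since there are no downward steps, a trial can stop as soon as the walker's height reaches 2, because (1, 1) can never be visited afterwards.

```cpp
#include <random>

// Estimate the probability that the up/left/right walk visits (1, 1).
// Once y reaches 2 the walker can never return to the row y = 1
// (there are no downward steps), so each trial terminates there.
double estimate_success(int trials, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> step(0, 2); // 0: left, 1: right, 2: up
    int hits = 0;
    for (int t = 0; t < trials; ++t) {
        int x = 0, y = 0;
        bool success = false;
        while (y < 2) {
            switch (step(rng)) {
                case 0:  --x; break;
                case 1:  ++x; break;
                default: ++y; break;
            }
            if (x == 1 && y == 1) { success = true; break; }
        }
        if (success) ++hits;
    }
    return static_cast<double>(hits) / trials;
}
```

With a large number of trials, e.g. `estimate_success(1000000, 42u)`, the estimate settles near 0.40, consistent with a nice rational answer such as 2/5; the variant targets (1, 0) and (0, 1) can be checked the same way.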

AEP 4830 HW6 Boundary Value Problem and Relaxation

# Finite Difference Algorithm

We have used the Runge-Kutta method to solve ODEs with initial conditions; however, many physical systems cannot be described by ODEs alone. Partial differential equations with boundary conditions are inevitable for systems such as the time-independent Schrödinger equation and electromagnetic fields. These equations involve partial derivatives in the spatial coordinates together with specified boundaries and/or sources. When solving such a system numerically, the finite difference algorithm is a straightforward and easy-to-program method.

We will use the 2D Laplace equation as an example to demonstrate the algorithm.

\begin{equation}
\nabla^2\phi(x,y) = 0
\end{equation} The finite difference algorithm requires meshing the space of interest, so that the value at each grid point (*i*, *j*) represents *ϕ*(*x* = *ih*₁, *y* = *jh*₂) = *ϕ*_{ij}, where *h*₁ and *h*₂ are the grid spacings along the *x* and *y* axes. It is also necessary to set up a separate "flag" grid which records whether a given point (*i*, *j*) lies on a boundary or a source (electrode). Next, from the finite differences between *ϕ*_{ij} and its adjacent values, the second partial derivatives can be approximated. Solving for *ϕ*_{ij} in terms of its neighbors gives the update for one iteration.\cite{NumRec}

\begin{equation}
\phi_{ij}^{FD} = \frac{1}{4}\left( \phi_{i-1,j}+\phi_{i+1,j}+\phi_{i,j+1}+\phi_{i,j-1} \right)
\end{equation} The flag grid acts like a mask, leaving all the specified boundary values unchanged during the iterations. As with the Runge-Kutta method, we evaluate the maximum error at each iteration.

\begin{equation}
\Delta\phi = \max_{ij} \left| \phi_{ij}^{FD}-\phi_{ij} \right|
\end{equation} If the maximum error is larger than the specified tolerance, *Δϕ* > *ϵ*_{max}, we pass the new values of *ϕ*_{ij} back into Eq. (2) and repeat the calculation until the error falls below the tolerance or the number of iterations exceeds a certain upper bound. The updated value *ϕ*_{ij}^{new} can take the following form, a weighted average of *ϕ*_{ij} and *ϕ*_{ij}^{FD}.

\begin{equation}
\phi_{ij}^{new} = (1-\omega)\phi_{ij} + \omega\phi_{ij}^{FD}
\end{equation} where 0 < *ω* < 2. If 1 < *ω* < 2, the algorithm converges more quickly; this is known as the successive over-relaxation (SOR) algorithm. The regime 0 < *ω* < 1 corresponds to under-relaxation, where the solution converges relatively slowly.
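The iteration described above can be sketched in C++ as follows. This is a minimal illustration with `std::vector` standing in for a matrix class, and the function and variable names are mine; updating *ϕ*_{ij} in place (Gauss-Seidel style) is what makes over-relaxation pay off.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

using grid = std::vector<std::vector<double>>;

// Relax phi toward a solution of Laplace's equation. flag marks the
// boundary/electrode points whose values are held fixed; omega is the
// relaxation parameter. Returns the number of iterations used, or
// maxiter if the tolerance epsmax was never reached.
int relax(grid& phi, const std::vector<std::vector<bool>>& flag,
          double omega, double epsmax, int maxiter) {
    const int N = static_cast<int>(phi.size());
    for (int iter = 1; iter <= maxiter; ++iter) {
        double maxerr = 0.0;
        for (int i = 1; i < N - 1; ++i) {
            for (int j = 1; j < N - 1; ++j) {
                if (flag[i][j]) continue;   // mask: keep boundary values
                // average of the four neighbors, Eq. (2)
                double fd = 0.25 * (phi[i - 1][j] + phi[i + 1][j]
                                  + phi[i][j + 1] + phi[i][j - 1]);
                maxerr = std::max(maxerr, std::fabs(fd - phi[i][j]));
                // weighted average of old and finite-difference values
                phi[i][j] = (1.0 - omega) * phi[i][j] + omega * fd;
            }
        }
        if (maxerr < epsmax) return iter;   // converged
    }
    return maxiter;
}
```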

Here, we will solve the Laplace equation inside a cylinder containing three disk electrodes, each with an inner and outer radius. Instead of Eq. (2), we will use the finite difference form in cylindrical coordinates and assume azimuthal symmetry.

\begin{equation}
\phi_{ij}^{FD} = \frac{1}{4}\left( \phi_{i-1,j}+\phi_{i+1,j}+\phi_{i,j+1}+\phi_{i,j-1} \right)+\frac{1}{8j}\left( \phi_{i,j+1}-\phi_{i,j-1} \right)
\end{equation} where *z* = *ih* and *r* = *jh*. The above equation applies to points where *r* ≠ 0. On the central axis, where *r* = 0, we should instead have

\begin{equation}
\phi_{ij}^{FD} = \frac{1}{6}\left( 4\phi_{i,j=1}+\phi_{i+1,j=0}+\phi_{i-1,j=0}\right)
\end{equation} In order to use the finite difference algorithm, we need to set up matrices in C++. We used *arrayt.h* to generate the matrices and carry out the calculations.
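The off-axis and on-axis update rules above can be collected in one helper. In this sketch `std::vector` stands in for the *arrayt.h* matrix class so the snippet is self-contained:

```cpp
#include <vector>

using grid = std::vector<std::vector<double>>;

// Axisymmetric finite-difference update with z = i*h, r = j*h.
double update_cyl(const grid& phi, int i, int j) {
    if (j == 0)  // on the central axis, r = 0
        return (4.0 * phi[i][1] + phi[i + 1][0] + phi[i - 1][0]) / 6.0;
    // off axis: the 1/(8j) term comes from the (1/r) dphi/dr part
    // of the cylindrical Laplacian
    return 0.25 * (phi[i - 1][j] + phi[i + 1][j] + phi[i][j + 1] + phi[i][j - 1])
         + (phi[i][j + 1] - phi[i][j - 1]) / (8.0 * j);
}
```

A quick consistency check: a uniform potential is a fixed point of both branches.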

# Over-Relaxation

First, we investigated the powerful role of *ω*. We set the grid spacing to *h* = 0.1 and the error tolerance to *ϵ*_{max} = 2, and counted the number of iterations needed for the error to fall below the tolerance. The results are shown in the following table.

| *ω* | 1.0 | 1.2 | 1.4 | 1.6 | 1.8 | 1.9 |
|---|---|---|---|---|---|---|
| Number of iterations | 655 | 511 | 431 | 371 | 359 | 1077 |

The best choice seems to be around *ω* = 1.8, and we use this over-relaxation value for the following calculations.

# Solutions

For solving the electric potential inside the cylinder with the three disk electrodes and boundaries, the grid spacing is the same, *h* = 0.1, while the error tolerance is much smaller, *ϵ*_{max} = 0.01. This tighter tolerance increases the number of iterations significantly, to *N* = 4231. The contour plot of the numerical solution is shown in Fig. 1, where the red line represents the middle disk electrode held at *ψ* = 2000 V and the black line and rectangle are the two grounded electrodes at *ψ* = 0 V. The potential contours are distorted by the two grounded electrodes, which compress the potential lines in the *z* direction. The potential at about *r* = 10 mm is symmetric, while closer to the central axis it is slightly asymmetric due to the mismatch in the inner radii of the two grounded electrodes. The electric potential is mainly confined between the disks and near the origin; outside the disks it is negligible everywhere. This configuration is found in the electron guns of electron microscopes, where electrons are focused by adjusting the fixed voltage of the center electrode.

AEP 4830 HW5 Three or More Body Problem

# Adaptive Stepsize Runge-Kutta Algorithm

We implemented the 4th order Runge-Kutta method to solve chaotic ODE systems, and it turned out to be a powerful solver. However, one of its drawbacks is that the step size is fixed. When the solution happens to be smooth and slowly varying, the number of steps could be reduced for efficiency. On the other hand, if the solution is changing sharply, a fixed step size could fail to capture the abrupt variations, and the errors might accumulate over time. As a result, for higher efficiency and accuracy, the adaptive step size Runge-Kutta method is a better ODE solver and can handle more demanding systems. As the authors of Numerical Recipes put it, "many small steps should tiptoe through treacherous terrain, while a few great strides should speed through smooth uninteresting countryside."\cite{NumRec}

The step size is controlled by the truncation error, the difference between the fifth order and fourth order values. If the error exceeds the tolerance, we reduce the step size until the error is below the tolerance. On the other hand, if the error is small, it is more efficient to increase the step size so that the error stays about the same order of magnitude as the tolerance. The general form of the adaptive step size Runge-Kutta method is as follows, where *t*_{n} and *y*_{n} are the time and the vector containing the N ODE values at the nth step.

\begin{equation} \begin{split} k_1 &= hf(t_n, y_n)\\ k_2 &= hf(t_n+c_2h, y_n+a_{21}k_1)\\ &\dots\\ k_6 &= hf(t_n+c_6h, y_n+a_{61}k_1+\dots+a_{65}k_5) \end{split} \end{equation}

\begin{eqnarray}
y_{n+1} = y_n+\sum_{i=1}^{6} b_ik_i \\
y^*_{n+1} = y_n+\sum_{i=1}^{6} b^*_ik_i
\end{eqnarray} where the *a*'s, *b*'s, *b*^{*}'s and *c*'s are constants found by Dormand and Prince,\cite{Dormand_1980} all tabulated in Chapter 17.2 of Numerical Recipes. The two estimates *y*_{n+1} and *y*_{n+1}^{*} are accurate to different orders (fourth and fifth), so the error term can be written as \begin{equation}
\Delta = \left| y_{n+1} - y^*_{n+1} \right|
\end{equation} Note that here |⋅| denotes the absolute value of each element rather than the norm of the N-element vector. Taking the maximum element of the *Δ* vector, *err*, we compare *err* to our tolerance *err0*. First, if *err* is larger than *err0*, we have to scale down the step size and recalculate until *err* < *err0*. Second, whichever case we are in, we can always apply the same scaling factor, \begin{equation}
scale = \left( \frac{err0}{err} \right)^{1/4} \times S
\end{equation} Here *S* is a safety factor with the requirement 0 < *S* < 1; we chose *S* = 0.8 in the program. The $\frac{1}{4}$ power was used instead of $\frac{1}{5}$ since it scales the step size up or down more aggressively, and it can be computed with two calls to the *sqrt* function in C++, which is simple and accurate. Third, we should not rescale the step size abruptly, say from *h*_{0} to 0.01*h*_{0} or 50*h*_{0}: the adaptive method would either skip over features within a 50*h*_{0} step or become inefficient with a 0.01*h*_{0} step. We therefore clamp the scaling between extreme values *maxscale* and *minscale*. The rescaled step size is passed on to the next step, and so on. Finally, to avoid infinite *while* loops, we added a *counter* to keep track of the number of loops; if *counter* exceeds a certain number (100 in the program), we break the while loop and print a message that *err* was never reduced below *err0* at this step. In practice this never happened, even though we tried tens of cases with billions of steps; usually fewer than 5 loops sufficed to bring the error within the tolerance.
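The scaling logic above can be condensed into a small helper. The *minscale* and *maxscale* values below are illustrative choices of my own, since the text does not pin them down:

```cpp
#include <algorithm>
#include <cmath>

// Propose the next step size: scale = S * (err0/err)^(1/4), clamped
// to [minscale, maxscale]. S = 0.8 as in the program; the clamp
// bounds are illustrative.
double next_stepsize(double h, double err, double err0) {
    const double S = 0.8, minscale = 0.1, maxscale = 5.0;
    double scale = maxscale;  // err == 0: grow as much as allowed
    if (err > 0.0) {
        scale = S * std::sqrt(std::sqrt(err0 / err)); // two sqrts = 1/4 power
        scale = std::min(std::max(scale, minscale), maxscale);
    }
    return h * scale;
}
```

The caller retries the step with the smaller `h` whenever `err > err0`, and otherwise passes the (possibly enlarged) `h` on to the next step.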

Let us now consider the specified error tolerance *err0* and the number of steps needed to reach a given final *t* value. The initial step size makes little difference, because the step size is scaled up or down depending on the solution and the system. Given the same number of steps, the final *t* value reached depends mainly on *err0*: the smaller the tolerance, the smaller the step size at each step, so the final *t* can end up far below the expected value. We can either increase the number of steps or increase the error tolerance, the latter being the more efficient option. In the following tests and computations this issue arose most of the time, so it is necessary to choose a good, acceptable *err0* and number of steps for the ODE system. We found that *err0* = 10^{−5} of the initial value works well for most cases.

AEP 4830 HW4 Ordinary Differential Equations and Chaotic Systems

# 4th Order Runge-Kutta ODE Solver

Suppose we have a set of *N* ODEs of the following form.

\begin{equation}
\frac{d\vec{y}}{dt} = \vec{f}(t, \vec{y})
\end{equation} where $\vec{y}$ and $\vec{f}$ are N-dimensional vectors of functions. The 4th order Runge-Kutta method takes the following algorithm for each step *h* forward; by calculating a large number of steps, we obtain the numerical solution to the set of ODEs.\cite{NumRec}

\begin{equation}
\vec{y_{n}} = \vec{y}(t_n)
\end{equation} \begin{eqnarray}
\vec{k_1} = h\vec{f}(t_n,\vec{y_n})\\
\vec{k_2} = h\vec{f}(t_n+0.5h,\vec{y_n}+0.5\vec{k_1})\\
\vec{k_3} = h\vec{f}(t_n+0.5h,\vec{y_n}+0.5\vec{k_2})\\
\vec{k_4} = h\vec{f}(t_n+h,\vec{y_n}+\vec{k_3})
\end{eqnarray} The value for $\vec{y_{n+1}}$ one step away can be calculated.

\begin{equation}
\vec{y_{n+1}} = \vec{y_{n}}+\frac{1}{6}\left( \vec{k_1}+2\vec{k_2}+2\vec{k_3}+\vec{k_4} \right)
\end{equation} Finally, by iterating from $\vec{y_{n+1}}$, the full solution can be obtained.
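A minimal sketch of one step of the algorithm above (names and signatures are my own; for brevity the factor *h* is folded into the final combination rather than into each $\vec{k}$):

```cpp
#include <functional>
#include <vector>

using vec = std::vector<double>;
using deriv = std::function<vec(double, const vec&)>;

// One 4th-order Runge-Kutta step for dy/dt = f(t, y).
vec rk4_step(const deriv& f, double t, const vec& y, double h) {
    auto shift = [&](const vec& k, double s) {  // y + s*k, elementwise
        vec r(y.size());
        for (std::size_t i = 0; i < y.size(); ++i) r[i] = y[i] + s * k[i];
        return r;
    };
    vec k1 = f(t, y);
    vec k2 = f(t + 0.5 * h, shift(k1, 0.5 * h));
    vec k3 = f(t + 0.5 * h, shift(k2, 0.5 * h));
    vec k4 = f(t + h, shift(k3, h));
    vec yn(y.size());
    for (std::size_t i = 0; i < y.size(); ++i)
        yn[i] = y[i] + (h / 6.0) * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
    return yn;
}
```

Stepping dy/dt = y from y(0) = 1 to t = 1 with a hundred steps reproduces e to many digits, a quick check of the 4th-order accuracy.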

PHYS6562 F2 Daily Question

## Absorption boundary conditions

First observe that the probability density at the origin at any time is zero.

\begin{equation} \rho(0,t) = 0 \end{equation}

And the probability density on *x* = *x*′ at *t* = 0 is

\begin{equation} \rho(x,0) = \delta(x-x') \end{equation}

We can assume that there is an image probability density at *t* = 0 at *x* = −*x*′, with negative sign. Therefore the total probability density at *t* = 0 should be

\begin{equation} \rho(x,0) = \delta(x-x')-\delta(x+x') \end{equation}

The Green’s function for each delta function is a time-dependent Gaussian distribution,

\begin{equation} G(x,t) = \frac{1}{\sqrt{4\pi Dt}} e^{-x^2/4Dt} \end{equation}

with the particle originally at the origin. In our case, each delta function diffuses into such a Gaussian, and superposing the two gives

\begin{equation} \rho(x,t) = \frac{1}{\sqrt{4\pi Dt}} \left( e^{-(x-x')^2/4Dt} - e^{-(x+x')^2/4Dt}\right) \end{equation}

The time evolution of the probability distribution is shown in Fig. 1.
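As a quick numerical check of the image construction, the density can be evaluated directly; the helper name and the parameter values in the check are arbitrary:

```cpp
#include <cmath>

// Probability density with an absorbing wall at x = 0, built from the
// image construction: a positive source at x' and a negative image at -x'.
double rho_absorbing(double x, double xp, double D, double t) {
    const double pi = std::acos(-1.0);
    const double norm = 1.0 / std::sqrt(4.0 * pi * D * t);
    const double dm = x - xp, dp = x + xp;
    return norm * (std::exp(-dm * dm / (4.0 * D * t))
                 - std::exp(-dp * dp / (4.0 * D * t)));
}
```

By symmetry the two Gaussians cancel exactly at *x* = 0 for every *t* > 0, which is the absorbing boundary condition.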

PHYS6562 M3 Daily Question

## Weirdness in high dimensions

Most of the volume of a high-dimensional sphere is near its surface. Approximating the volume as *V* ∝ *r*^{3N} in 3N dimensions, near the center *r* ≈ 0 and *V* ≈ 0, while near the surface, where *r* is largest, the *r*^{3N} factor contributes most of the volume.

For statistical mechanics, instead of taking the whole energy sphere into account, which would involve a high-dimensional integral, it is easier to focus on the shell of the energy sphere. Then the momentum can be taken to be $p = \sqrt{2mE}$ instead of integrating

\begin{equation} \int_0^{\sqrt{2mE}} (....)dp \end{equation}
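The claim is easy to quantify: since *V* ∝ *r*^{d}, the fraction of a d-dimensional ball's volume lying within a relative distance ε of the surface is 1 − (1 − ε)^{d}, which approaches 1 as d grows. A one-line sketch:

```cpp
#include <cmath>

// Fraction of the volume of a d-dimensional ball within a shell of
// fractional thickness eps at the surface: the inner ball of radius
// (1 - eps) holds only (1 - eps)^d of the total volume.
double shell_fraction(int d, double eps) {
    return 1.0 - std::pow(1.0 - eps, d);
}
```

For ε = 0.01 the shell holds only a few percent of the volume in 3 dimensions, but essentially all of it in 3N = 3000 dimensions.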

PHYS6562 W2 Daily Question

PHYS6562 W3 Daily Question

## Entropy maximization and temperature

First of all, *Ω*_{1}(*E*_{1})*δE*_{1} and *Ω*_{2}(*E*_{2})*δE*_{2} are the phase-space volumes of the two subsystems. Since most of the volume in phase space is near the surface, we can simply write *Ω*_{1}(*E*_{1}) and *Ω*_{2}(*E*_{2}) for the two volumes. Second, it is clear that the probability density for subsystem 1 to have energy *E*_{1} is proportional to the product of the 'allowed' volumes of the two subsystems.

\begin{equation}
\rho_1(E_1) \propto \Omega_1(E_1)\Omega_2(E_2)
\end{equation} where *E*_{2} = *E* − *E*_{1}. And finally, we impose the normalization condition:

\begin{equation}
\int \rho_1(E_1)dE_1 = A \int \Omega_1(E_1)\Omega_2(E-E_1)dE_1 = A \Omega(E) = 1
\end{equation} where *A* is the proportionality constant, equal to 1/*Ω*(*E*). Therefore,

\begin{equation} \rho_1(E_1) = \frac{\Omega_1(E_1)\Omega_2(E_2)}{\Omega(E)} \end{equation}

If we maximize the probability at *E*_{1}:

\begin{equation} \frac{d\rho_1}{dE_1} = 0 \end{equation}

\begin{equation} \frac{1}{\Omega_1}\frac{d\Omega_1}{dE_1} - \frac{1}{\Omega_2}\frac{d\Omega_2}{dE_2} = 0 \end{equation}

By plugging in the definition of entropy, $S = k_B \log\Omega$,

\begin{equation} \frac{1}{k_B}\frac{dS_1}{dE_1} - \frac{1}{k_B}\frac{dS_2}{dE_2} = 0 \end{equation}

\begin{equation} \frac{dS_1}{dE_1} + \frac{dS_2}{dE_1} = \frac{d}{dE_1}\left(S_1(E_1)+S_2(E_2)\right) = 0 \end{equation} using *dE*_{2}/*dE*_{1} = −1 to flip the sign, which implies that the sum of the entropies of the two subsystems is maximized.

For (6), using the definition 1/*T* = *dS*/*dE*, we see that when the probability density *ρ*_{1}(*E*_{1}) is maximized,

\begin{equation} \frac{1}{T_1} = \frac{1}{T_2} \end{equation}

PHYS6562 F3 Daily Question

## Indistinguishable particles

For N distinguishable particles occupying N given states in phase space at some instant, there are *N*! different assignments of particles to states. The total phase-space volume of the system is therefore *N*! times the product of the volumes for each particle in its own sub phase space.

\begin{equation} \Omega_D = N! \space \omega_1 \omega_2 \omega_3 ... \omega_N \end{equation}

If the particles are instead indistinguishable, there is only one such configuration, without permutations.

\begin{equation} \Omega_U = \omega_1 \omega_2 \omega_3 ... \omega_N \end{equation}

Therefore, $\Omega_D = N!\,\Omega_U$.

For the mixing problem of black and white particles at the same pressure on both sides, if you cannot tell the difference between the black and white ones, you cannot extract work from the mixing. Actually, you would not even know whether they had mixed already or not. However, suppose the black and white particles are at different pressures, and you still cannot tell them apart. If there were a membrane which allowed the lower-pressure particles to pass through, this membrane could tell the difference between the two sides without telling the particles apart, and work could then be extracted.

Now, if a door can distinguish the black from the white particles and lets through only the white ones, then after a very long time the white particles will distribute themselves equally between the two sides, i.e., there will be $\frac{1}{4}N$ white + 0 black on the left and $\frac{1}{4}N$ white + $\frac{1}{2}N$ black on the right. We can then extract work from the pressure difference between the two sides.