Macalester POTW 1201: What Goes Up Might Not Come Down

Problem Statement

A random walk on the 2-dimensional integer lattice begins at the origin. At each step, the walker moves one unit either left, right, or up, each with probability \(\frac13\). (No downward steps ever.) A walk is a success if it reaches the point \((1,1)\). What is the probability of success?

Note: One can vary the problem by varying the target point, e.g., using \((1, 0)\) or \((0,1)\) instead. Perhaps there is a good method to resolve the general case of target \((a, b)\).

Source: Bruce Torrence, Randolph-Macon College


An easy relaxation

To solve this problem, let’s consider an easier class of questions first. Suppose we have a one-dimensional number line and we start at some arbitrary point \(n\). At each step we move one unit left, right, or up, each with probability \(\frac13\); an upward step leaves the line for good, so it amounts to failure. What is the probability that we hit the origin?

Now, if we’re at point \(n\), one of two things must happen. Either \(n = 0\), in which case we reach the origin with \(100\%\) probability, or \(n \ne 0\). Let \(P_n\) be the probability that we can reach the origin from \(n\); conditioning on the first step (the \(0\) term is the upward step, which ends the walk): \[P_n = \begin{cases} 1 & n = 0 \\ \frac13 \times 0 + \frac13 \times P_{n-1} + \frac13 \times P_{n+1} & \text{otherwise} \end{cases}\] This reduces to the system where \(P_0 = 1\) and \(P_n = 3P_{n-1} - P_{n-2}\). The characteristic polynomial of this recurrence, \(x^2 - 3x + 1\), has roots \(\gamma,\bar\gamma = \frac{3 \pm \sqrt5}{2}\), which suggests that \(P_n\) will necessarily be of the form \[P_n = \alpha \gamma^n + \beta \bar\gamma^n, ~~~~ P_0 = 1\] for some \(\alpha, \beta \in \mathbb{R}\). (By symmetry \(P_{-n} = P_n\), so it suffices to consider \(n \ge 0\).)

Now, we have one initial condition, which is only enough to establish that \(\beta = 1 - \alpha\). We need another piece of information in order to solve this system. It turns out that we can use the argument that this probability system must be “stable in the limit” to find another boundary condition:

Lemma: (Stability) If \(L = \lim_{n\to\infty} P_n\) exists, then taking limits on both sides of \(P_n = 3P_{n-1} - P_{n-2}\) gives \[L = 3L - L = 2L \implies L = 0 ~~~~ \square\]

This suggests that \(\lim_{n\to\infty} \alpha \gamma^n + \beta \bar\gamma^n = 0\). However, \(|\gamma| > 1\) while \(|\bar\gamma| < 1\), so it must be the case that \(\alpha = 0\); otherwise \(P_n\) diverges, which is impossible since probabilities are bounded by \(1\). This means that \(\beta = 1 - \alpha = 1\), and \[P_n = \bar\gamma^n ~~~~ \square\]
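As a sanity check, we can verify the relaxed problem numerically. A minimal sketch (variable names like `gamma_bar` are my own): solve the linear system on a truncated line \([0, N]\) by fixed-point iteration, pinning \(P_N = 0\), which is harmless because the true solution decays geometrically.

```python
import math

# Fixed-point iteration for P_n on a truncated line [0, N].
# P[0] = 1 (already at the origin); P[N] pinned to 0 as an approximation.
N = 60
P = [0.0] * (N + 1)
P[0] = 1.0
for _ in range(2000):
    # Interior update: P_n = (P_{n-1} + 0 + P_{n+1}) / 3, the 0 being the up-step.
    P = [1.0] + [(P[n-1] + P[n+1]) / 3 for n in range(1, N)] + [0.0]

gamma_bar = (3 - math.sqrt(5)) / 2
# The iterate should agree with the closed form gamma_bar**n.
assert all(abs(P[n] - gamma_bar**n) < 1e-9 for n in range(10))
```

The update map is a contraction (each new entry is a convex-ish combination with total weight \(\frac23\)), so the iteration converges geometrically to the truncated system’s unique solution.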

Onto the real problem

Having seen how we can solve the relaxed toy problem, let’s now turn our attention to the full problem at hand. To make it easier to work with later on, we reverse the walk: instead of climbing up from the origin to a target, we start at some arbitrary point \((n,1)\) and walk down towards the origin, taking left, right, or down steps, each with probability \(\frac13\). (Reversing a successful upward walk step-by-step yields a downward walk of the same probability, so the two viewpoints agree.) First, let \(P^{0}_n = \bar\gamma^n\) be the probability that a particle at column \(n\) on the \(x\)-axis will be able to random walk to the origin. Next, let \(P^{1}_n\) be the probability that a particle at column \(n\) on the next row up will be able to random walk down to the origin \((0,0)\). Our problem reduces down to finding an expression for \(P^1_n\).

Now, let \(\hat k\) be the event that a particle walks to a net horizontal displacement of \(k\) and then immediately takes its vertical step, and let \(s\) be the event that we successfully reach the target point. Then we know that, starting from \(n\): \[P^1_n[s] = \sum_{k \in \mathbb{Z}} P^0[s \mid \hat k] \times P_n[\hat k]\] where \(P^1_n[s] = P^1_n\) is the likelihood of succeeding (reaching the origin) starting at \((n,1)\); \(P^0[s \mid \hat k]\) is the likelihood of succeeding given that we climbed down to the point \((k,0)\) at some point; and \(P_n[\hat k]\) is the probability that a particle starting at column \(n\) on the upper row climbs down at column \(k\). It follows that \[P^0[s \mid \hat k] = P^0_{|k|} = \bar\gamma^{|k|},\] but what of \(P_n[\hat k]\)?

In order to describe the behavior of the descent, let us again break it into a one-dimensional scenario. Suppose I start off at \(k\), and I want to know the probability \(\hat P_k\) that I will random-walk to the origin and climb down there. As per the relaxed problem, we still have \(\hat P_k = 3 \hat P_{k-1} - \hat P_{k-2}\) when \(k \ne 0\) (climbing down anywhere other than the origin counts as failure). If we’re at the origin, then we can either go down with probability \(\frac13\), or we can go to either the left or the right side. Notice that the probabilities are symmetric about the origin, so this gives us (\(k \ge 0\), recall that \(\hat P_{-k} = \hat P_k\)): \[\hat P_k = \begin{cases} \frac{1 + 2\hat P_1}{3} & \text{when }k = 0 \\ 3 \hat P_{k-1} - \hat P_{k-2} & \text{otherwise} \end{cases} ~~~~ \text{and} ~~~~ \lim_{k\to\infty} \hat P_k = 0\] The same stability argument suggests that \(\hat P_k = \nu \bar\gamma^k\) for some \(\nu\), and substituting \(\hat P_1 = \nu\bar\gamma\) into the \(k = 0\) case gives \[\hat P_0 = \nu = \frac13 + \frac23\hat P_1 = \frac13 + \frac23 \nu\bar\gamma\] so that \(\nu = \frac{1}{3 - 2\bar\gamma} = \frac1{\sqrt{5}}\) and \(\hat P_k = \frac{\bar\gamma^{|k|}}{\sqrt{5}}\).
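The value \(\nu = \frac1{\sqrt5}\) can be checked the same way as before; a sketch under the same truncation assumptions, using the boundary equation \(\hat P_0 = \frac{1 + 2\hat P_1}{3}\) and the symmetry \(\hat P_{-k} = \hat P_k\):

```python
import math

# Fixed-point iteration for P-hat on a truncated half-line [0, N].
# Row 0 uses the origin equation; Ph[N] is pinned to 0.
N = 60
Ph = [0.0] * (N + 1)
for _ in range(2000):
    Ph = ([(1 + 2 * Ph[1]) / 3]
          + [(Ph[k-1] + Ph[k+1]) / 3 for k in range(1, N)]
          + [0.0])

gamma_bar = (3 - math.sqrt(5)) / 2
# Should match gamma_bar**k / sqrt(5).
assert abs(Ph[0] - 1 / math.sqrt(5)) < 1e-9
assert abs(Ph[3] - gamma_bar**3 / math.sqrt(5)) < 1e-9
```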

Finally, take \(n \ge 0\): each of the \(n+1\) indices \(0 \le k \le n\) satisfies \(|n-k| + |k| = n\), while the two tails \(k > n\) and \(k < 0\) each contribute the geometric sum \(\sum_{j \ge 1} \bar\gamma^{n+2j}\). This means that \[\begin{aligned} P^1_n &= \sum_{k\in\mathbb{Z}} \hat P_{|n-k|} P^0_{|k|} = \sum_{k\in\mathbb{Z}} \frac{\bar\gamma^{|n-k|+|k|}}{\sqrt{5}} \\ &= \frac{1}{\sqrt{5}}\left((n+1)\,\bar\gamma^{n} + \frac{2\bar\gamma^{n+2}}{1-\bar\gamma^2}\right) \\ &= \boxed{\frac{\bar\gamma^n}{\sqrt{5}}\left(n + \frac{1 + \bar\gamma^2}{1 - \bar\gamma^2}\right)}\end{aligned}\]
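We can corroborate the closed form by comparing it against a truncated version of the sum. A quick numerical sketch (the cutoff `K` is arbitrary; the terms decay like \(\bar\gamma^{2|k|}\), so the tail is negligible):

```python
import math

gb = (3 - math.sqrt(5)) / 2  # gamma_bar
s5 = math.sqrt(5)

def closed_form(n):
    # P^1_n for n >= 0, per the boxed formula.
    return gb**n / s5 * (n + (1 + gb**2) / (1 - gb**2))

def truncated_sum(n, K=200):
    # Direct evaluation of sum_k P-hat(|n-k|) * P^0(|k|), truncated to |k| <= K.
    return sum(gb**abs(n - k) / s5 * gb**abs(k) for k in range(-K, K + 1))

assert all(abs(closed_form(n) - truncated_sum(n)) < 1e-12 for n in range(6))
```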


The problem prompt asks for \(P^1_1 = \frac{\bar\gamma}{\sqrt{5}}\left(1 + \frac{1+\bar\gamma^2}{1 - \bar\gamma^2}\right) = \frac{1}{\sqrt{5}}\frac{2}{\sqrt{5}} = \boxed{\frac{2}{5}}\), the probability of succeeding if we start at \((1,1)\).
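For the record, the arithmetic behind that simplification: since \(\bar\gamma^2 = \frac{7 - 3\sqrt5}{2}\), \[\frac{1+\bar\gamma^2}{1-\bar\gamma^2} = \frac{(9-3\sqrt5)/2}{(3\sqrt5-5)/2} = \frac{3(3-\sqrt5)}{3\sqrt5-5} = \frac{3}{\sqrt5}, ~~~~ \text{so} ~~~~ \frac{\bar\gamma}{\sqrt5}\left(1 + \frac{3}{\sqrt5}\right) = \frac{\bar\gamma\,(3+\sqrt5)}{5} = \frac{(3-\sqrt5)(3+\sqrt5)}{10} = \frac{2}{5}.\]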

Similarly, by conditioning on the first step from \((0,1)\) (climb down immediately with probability \(\frac13\), or move to \((\pm1,1)\) with probability \(\frac23\)), we can also compute \(P_0^1 = \frac13 + \frac23 \cdot \frac25 = \boxed{\frac35}\), the probability of succeeding if we start at \((0,1)\). We can verify these values with some Monte Carlo testing. For example, we can simulate a run using the following Python code:

```python
import random

def simulate(x):
    # Walk on the x-axis starting at column x; success iff we reach
    # the origin before taking a vertical step.
    if not x:
        return 1
    y = random.randint(-1, 1)
    if not y:  # vertical step off the axis: failure
        return 0
    return simulate(x + y)

def simulate1(x):
    # Walk on the row above starting at column x; a vertical step
    # (y == 0) drops us onto the x-axis at the current column.
    y = random.randint(-1, 1)
    if not y:
        return simulate(x)
    return simulate1(x + y)
```

Doing a simple trial gives statistics that converge to the values we’ve computed.
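For a self-contained check that avoids deep recursion, the same simulation can be written iteratively. A sketch (`trial` is a hypothetical helper, not part of the original code):

```python
import random

def trial(x):
    # Reversed walk: start at (x, 1); steps left/right/down, each w.p. 1/3.
    while True:                      # row 1: wander until the down-step
        step = random.randint(-1, 1)
        if step == 0:
            break                    # descend to row 0 at column x
        x += step
    while x != 0:                    # row 0: reach column 0 before stepping off
        step = random.randint(-1, 1)
        if step == 0:
            return False             # vertical step off the axis: failure
        x += step
    return True

random.seed(1)
n = 200_000
est_11 = sum(trial(1) for _ in range(n)) / n  # should be near 2/5
est_01 = sum(trial(0) for _ in range(n)) / n  # should be near 3/5
```

With \(2 \times 10^5\) trials the standard error is about \(10^{-3}\), so both estimates should land comfortably within \(0.01\) of \(\frac25\) and \(\frac35\).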