About the Simple Random Walk

This is the first blog post of a Summer Series. In these short entries, I explain in a very accessible way what my work on the modeling of electrical conductivity is about. This week, we start with some basic facts about the Simple Random Walk, a model that is both easy to grasp and extremely powerful, as it appears again and again in the study of more involved situations. The goal for today is simply to get familiar with the object, to understand that it can be considered in different dimensions, and that its behavior can change dramatically from one dimension to the next. The simple random walk is a fundamental object in probability theory, and its study, along with the numerous refinements defined over the last century, has countless applications in physics, biology, finance…

The Random Walk in dimension 1


Let us start in dimension 1, that is, with a random walk on the line. Suppose you are walking along an extremely long street, and that you decide to try an experiment. You toss a coin: if you get heads, you move forward 100 meters; if it’s tails, you go backward the same distance. For convenience, let us say that your starting point is 0.

Let $1$ denote the point 100 meters in front of you, $2$ the one $200$ meters ahead, $-1$ the one $100$ meters behind you, and so on. In this one-dimensional random walk, an interesting concept called recurrence arises. Recurrence refers to the likelihood of returning to a specific point in the long run.

For instance, starting at point zero, if there are equal probabilities of taking a step forward or backward, will you eventually return to the starting point? The answer is not only yes, it’s actually much more than that. You will come back to the starting point an infinite number of times!

Here is an animation of $1000$ steps of the simple random walk. The counter represents the number of times the walk hit zero along its path. Of course, the random walk is random: if I re-run the animation, I might get a totally different path.
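For readers who want to play with this themselves, here is a minimal simulation sketch (this is not the code behind the animation; the function name and seed handling are my own):

```python
import random

def walk_1d(n_steps, seed=None):
    """Simulate n_steps of the simple random walk on the integers,
    starting at 0, and count how many times it returns to 0."""
    rng = random.Random(seed)
    position = 0
    returns = 0
    for _ in range(n_steps):
        position += rng.choice((-1, 1))  # fair coin: +1 or -1
        if position == 0:
            returns += 1
    return position, returns

final_position, zero_hits = walk_1d(1000)
```

Re-running with a different seed (or no seed at all) gives a different path, just like re-running the animation.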


The Law of Large Numbers


One way to intuit this is to rely on the so-called Law of Large Numbers. You might have heard of it, but if that’s not the case, you’ve certainly experienced it. The Law of Large Numbers states that as we repeat an experiment or observation many times, the average outcome gets closer to the expected or true value. For instance, it tells us that the more times we flip a fair coin or roll a fair die, the closer the proportion of heads or the average of the rolls gets to 50% or 3.5, respectively. The first of these examples will actually help us a lot! Let us say that, to choose whether you move forward or backward, you toss a fair coin. Then your average displacement over the course of one step is… zero (a $0.5$ chance of $+1$ and a $0.5$ chance of $-1$, so $0.5 \times 1 + 0.5 \times (-1) = 0$).
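As a quick sanity check of the Law of Large Numbers in this setting, one can average a large number of coin-flip steps and watch the empirical mean approach the expected value of zero (a small illustrative sketch; the function name is mine):

```python
import random

def average_displacement(n_flips, seed=None):
    """Empirical average of n_flips fair-coin steps (each +1 or -1).
    By the Law of Large Numbers this tends to 0 as n_flips grows."""
    rng = random.Random(seed)
    return sum(rng.choice((-1, 1)) for _ in range(n_flips)) / n_flips
```

With $100{,}000$ flips the result is typically within a few thousandths of zero.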


The Random Walk in Dimension 1 is Recurrent


Let us be slightly more formal to derive the proof. There is a very simple argument: we will compute the probability of being exactly at $0$ after $n$ steps, which we call $P_n$, for $n \ge 1$.

The probability of returning to the starting point after one step is $0$, as you need at least two steps to return, so $P_1 = 0$. In fact, if you think about what happens after three steps, you quickly see that you cannot be at zero either, simply for parity reasons. So $P_n = 0$ for all odd values of $n$.

Let us now turn to the case $n=2$. There, you see that one needs to make exactly one step to the left and one to the right. In mathematics, we have a convenient object for this kind of counting: the binomial coefficients. The coefficient $\binom{n}{k}$ tells us exactly how many ways there are to choose $k$ elements among $n$. Here, we thus have $\binom{2}{1} = \frac{2!}{(2-1)!1!}$ (the ‘’!’’ just means that you multiply the number by all its predecessors; for instance, $4! = 4 \times 3 \times 2 \times 1 = 24$). Okay, so the total number of paths leading to $0$ after two steps is $\binom{2}{1} = 2$. The total number of paths is $2^2 = 4$ (just because we have two choices for the first step, and two for the second one). Hence $P_2 = 2/4 = 0.5$.

Got it? Let’s see. Can you guess $P_4$ from there? To help you, the general formula is $\binom{n}{k} = \frac{n!}{(n-k)!k!}$.


Try to solve it for real; the answer is on the next line.

So, about this $P_4$. We get $\binom{4}{2} = 24/(2 \times 2) = 6$ paths leading to zero, out of $2^4 = 16$ paths in total. So $P_4 = 6/16 = 3/8$.

Now for the general case: as mentioned before, we need $n$ to be even, so let us write $n = 2p$ for simplicity. We need exactly $p$ steps to the right (and thus exactly $p$ to the left), which gives $\binom{2p}{p}$ paths among $2^{2p}$ possibilities. So there you have it! $P_{2p} = \frac{\binom{2p}{p}}{2^{2p}}$.
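This formula is easy to evaluate in code, for instance with Python’s built-in binomial coefficient (a small sketch; `return_probability` is a name I chose):

```python
from math import comb  # comb(n, k) is the binomial coefficient "n choose k"

def return_probability(p):
    """P_{2p}: probability that the walk is back at 0 after 2p steps."""
    return comb(2 * p, p) / 2 ** (2 * p)
```

For example, `return_probability(1)` recovers $P_2 = 0.5$ and `return_probability(2)` recovers $P_4 = 3/8$.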

To conclude, we will in fact show that the average number of returns to zero is infinite. This expectation is simply given by $P_1 + P_2 + \dots + P_{1000} + \dots$, an infinite sum. We can write it in a more compact way: $$ \sum_{k=1}^{\infty} P_k = \sum_{k=1}^\infty \frac{\binom{2k}{k}}{2^{2k}}. $$ It turns out that this sum is infinite: the first $10$ terms of the right-hand side add up to almost $2.7$, the first $100$ to about $10.32$, and it keeps growing to infinity.
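You can check these partial sums numerically; here is a short sketch reproducing the figures quoted above (the function name is mine):

```python
from math import comb

def partial_sum(n):
    """Sum of the first n terms of the series: P_2 + P_4 + ... + P_{2n}."""
    return sum(comb(2 * k, k) / 2 ** (2 * k) for k in range(1, n + 1))

print(partial_sum(10))   # close to 2.7
print(partial_sum(100))  # close to 10.3, and growing without bound
```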

A theoretical proof of this can be obtained by Stirling’s formula, but this would take us too far here.

The one-dimensional random walk is thus recurrent: it comes back to zero again and again throughout its trajectory.

Let me mention a few additional things about this simple random walk. The first one is quite simple. So far, the walk was symmetric. This whole recurrent structure, where we could count the paths, completely vanishes if one considers an asymmetric random walk. Such a walk has a preferred direction, which clearly takes the lead over the long run. For instance, if your probability of going forward at each step is $0.7$, and the one of going backward $0.3$, you will end up with a trajectory similar to the following one:
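The drift is easy to see in a simulation: with probabilities $0.7$ and $0.3$, each step contributes $0.7 - 0.3 = 0.4$ on average, so after $n$ steps the walk sits near $0.4\,n$. A minimal sketch (function name and defaults are mine):

```python
import random

def biased_walk(n_steps, p_forward=0.7, seed=None):
    """Asymmetric random walk: +1 with probability p_forward, -1 otherwise."""
    rng = random.Random(seed)
    position = 0
    for _ in range(n_steps):
        position += 1 if rng.random() < p_forward else -1
    return position
```

After $10{,}000$ steps, the position is typically close to $4000$: the preferred direction wins.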

And to conclude, the most fascinating result. You might wonder what happens in higher dimensions. For instance, suppose you can also go left and right, in addition to forward and backward (which we will think of as ‘up’ and ‘down’ for clarity). At first sight, you might think that this is just the same problem. In some sense, your position on the left-right axis performs a random walk, and your position on the up-down axis as well. This means that you come back to position $0$ on the left-right axis infinitely often, and to position $0$ on the up-down axis infinitely often. However, those returns may be desynchronized, in which case you would not come back to $(0,0)$ that often. So what is the answer?

It turns out that you can reproduce, in this case, a computation similar to the one we did in dimension 1, and you will find a probability of coming back to zero that is smaller, but still large enough for the sum of all those probabilities to go to infinity. So yes, in dimension $2$, you will come back to $(0,0)$ infinitely often. Here is a view of what happens; the counter increases by one every time the walk reaches $(0,0)$.
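The 1d simulation above adapts directly to dimension 2: at each step, pick one of the four directions uniformly at random. A minimal sketch, again with names of my own (this is not the animation code):

```python
import random

def walk_2d(n_steps, seed=None):
    """2d simple random walk from the origin; count visits back to (0, 0)."""
    rng = random.Random(seed)
    x = y = 0
    returns = 0
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))  # right, left, up, down
    for _ in range(n_steps):
        dx, dy = rng.choice(steps)
        x, y = x + dx, y + dy
        if x == 0 and y == 0:
            returns += 1
    return returns
```

Returns to the origin are noticeably rarer than in dimension 1 (they grow only logarithmically in the number of steps), but they never stop.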

Now the crazy thing: if you were given a jetpack and could also go up in the air, thus performing a random walk in dimension 3, you would not come back to zero infinitely often. Said differently, the synchronization that occurs in dimension 2 is too rare to matter in dimension 3, and zero will only be visited a finite number of times. Mathematicians often summarize these behaviors with the following sentence, which will be our conclusion for today:

A drunk man always ends up finding his way home. A drunk bird gets lost forever.