
Random Walks: the mathematics in 1 dimension - outputchannel
http://www.mit.edu/~kardar/teaching/projects/chemotaxis(AndreaSchmidt)/random.htm
======
Xcelerate
One might ask the question: what is the probability that you will return to
your starting position over the course of an infinite random walk? On a 1
dimensional or 2 dimensional lattice, that probability is 1. What's crazy
though is that for a 3D lattice, the probability is _not_ 1 — it's about
0.3405.
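
A quick way to see those numbers is to simulate. The sketch below is my own illustration, not from the article: each walk is truncated after a fixed number of steps, so the estimates are Monte Carlo lower bounds on the true return probabilities (very tight in 3D, where returns happen early; slower to converge in 1D and 2D).

```python
import random

def ever_returns(dim, steps):
    """Take `steps` nearest-neighbour steps from the origin on the
    dim-dimensional integer lattice; True if the walk revisits the origin."""
    pos = [0] * dim
    for _ in range(steps):
        pos[random.randrange(dim)] += random.choice((-1, 1))
        if not any(pos):          # back at the origin
            return True
    return False

def return_probability(dim, steps=2_000, trials=1_000):
    """Fraction of truncated walks that return: a lower bound on Polya's
    return probability for this dimension."""
    return sum(ever_returns(dim, steps) for _ in range(trials)) / trials

p1, p2, p3 = (return_probability(d) for d in (1, 2, 3))
print(p1, p2, p3)   # p1 and p2 creep toward 1; p3 stays near 0.34
```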

~~~
iaw
Wow, do you have any proofs for this? I'm especially curious about the
generalized n-dimensional case.

~~~
hemmer
There is some more information and references here:
[http://mathworld.wolfram.com/PolyasRandomWalkConstants.html](http://mathworld.wolfram.com/PolyasRandomWalkConstants.html)

~~~
Ntrails
If you find it GP, I'd love to see where the integral constructed comes from,
since that's the clever part rather than the evaluation.

~~~
emfree
Here's a reference I found for one way to do it:
[http://www.math.nus.edu.sg/~matsr/ProbII/Lec6.pdf](http://www.math.nus.edu.sg/~matsr/ProbII/Lec6.pdf)
(Theorem 2.1). You define the Green's function G(x, y) = \sum_n Pr_x(S_n=y),
where x and y are 3-vectors and Pr_x(S_n=y) is the probability that an n-step
random walk starting at x ends up at y. If you have an infinite random walk
starting at 0, then G(0, 0) is the expected number of times that the walk
returns to 0. That's what the mathworld link calls u(3). You can use Fourier
inversion to compute G(0, 0) -- the link gives the gnarly details. It's pretty
cool.
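
For anyone who wants to watch the number fall out numerically: the Fourier inversion in that PDF boils down to G(0,0) = (2π)^-3 ∫ dθ / (1 - φ(θ)) over [-π,π]^3, with φ(θ) = (cos θ1 + cos θ2 + cos θ3)/3, and then the return probability is 1 - 1/G(0,0). A rough sketch of the evaluation (my own, using a midpoint rule so the integrable singularity at θ = 0 is never sampled):

```python
import numpy as np

# Midpoint rule on [-pi, pi]^3; cell centres avoid the singularity at 0.
n = 100
theta = -np.pi + (np.arange(n) + 0.5) * (2 * np.pi / n)
c = np.cos(theta)

# Structure function of the simple 3D walk: phi = (cos t1 + cos t2 + cos t3) / 3
phi = (c[:, None, None] + c[None, :, None] + c[None, None, :]) / 3.0

# G(0,0) = (2 pi)^-3 * integral of 1/(1 - phi)  ==  mean over the cube
G = np.mean(1.0 / (1.0 - phi))
p_return = 1.0 - 1.0 / G
print(G, p_return)
```

With a modest grid this lands within about a percent of Pólya's 0.3405 (the known value of u(3) in the mathworld notation).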

~~~
Ntrails
You're a scholar and a gentleman, merci buckets

------
amelius
Random walks have been used also to numerically solve differential equations.
See e.g. [1]

[1] [http://www.jstor.org/stable/3612176](http://www.jstor.org/stable/3612176)
"A Proof of the Random-Walk Method for Solving Laplace's Equation in 2-D"

------
sdoering
An overview of why the question about (and the need for an explanation of) random walks arises:

[http://www.mit.edu/~kardar/teaching/projects/chemotaxis(AndreaSchmidt)/home.htm](http://www.mit.edu/~kardar/teaching/projects/chemotaxis\(AndreaSchmidt\)/home.htm)

~~~
agumonkey
How much of this led to the early stages of life?

------
awalGarg
Here is a related lecture from MIT,
[https://youtu.be/56iFMY8QW2k](https://youtu.be/56iFMY8QW2k), which
mathematically shows why it is pretty much impossible to come away "happy"
from gambling in a club, even though intuition says otherwise.

~~~
Dylan16807
What does "happy" mean?

It's not hard to set up a bet that gives you an arbitrarily high chance of
gaining money, despite an expected return of less than your stake.
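
The martingale (double-after-loss) scheme is the classic example: at a subfair game it wins a small amount with high probability and occasionally loses the whole bankroll, so the expected value is still negative. A sketch (my own; the 18/38 win probability is a roulette-style illustrative choice, not anything from the lecture):

```python
import random

def martingale(p=18/38, bankroll=255, target=1):
    """Bet `target`, doubling the stake after every loss, at a game won
    with probability p < 1/2.  Stops with +target on the first win, or
    with the accumulated losses once the bankroll can't cover the next bet."""
    stake, lost = target, 0
    while True:
        if random.random() < p:
            return target          # one win recovers all losses plus target
        lost += stake
        stake *= 2
        if lost + stake > bankroll:
            return -lost           # ruined: cannot place the next bet

runs = [martingale() for _ in range(50_000)]
win_rate = sum(r > 0 for r in runs) / len(runs)
mean = sum(runs) / len(runs)
print(win_rate, mean)   # win rate well above 0.98, yet the mean is negative
```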

------
zodiac
Isn't the expected distance (undirected) given by E[|d|], while sqrt(n) is the
value of sqrt(E[d^2])?

~~~
pash
They each measure the same thing, more or less, but it's easier to work
analytically with squares than absolute values. Similarly, we tend to work
with the variance rather than with expected absolute deviations, we calculate
sums of squares rather than sums of absolute values, etc.

More fundamentally, root-mean-square is the norm induced by the expectation
inner product in the space of random variables. Norms generalize the geometric
notion of length, so intuitively RMS is an appropriate measure of the
"stochastic distance" from the origin of a random walk after a set number of
steps. RMS can likewise be used as an analogue for geometric length for other
purposes in a stochastic context, e.g., in calculating the similarity
dimension of fractal stochastic processes like Brownian motion.
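
The two measures are easy to compare empirically: for the simple 1D walk, E[|d|] tends to sqrt(2n/π) ≈ 0.80·sqrt(n), while the RMS distance sqrt(E[d^2]) is exactly sqrt(n). A quick check (my own sketch):

```python
import math
import random

n, trials = 1_000, 3_000
# Endpoint displacement of a simple 1D walk: the sum of n fair +/-1 steps.
finals = [sum(random.choice((-1, 1)) for _ in range(n)) for _ in range(trials)]

mean_abs = sum(abs(d) for d in finals) / trials        # estimates E[|d|]
rms = math.sqrt(sum(d * d for d in finals) / trials)   # estimates sqrt(E[d^2])
print(mean_abs, math.sqrt(2 * n / math.pi))   # E[|d|]  ->  sqrt(2n/pi)
print(rms, math.sqrt(n))                      # RMS     =   sqrt(n)
```

The two numbers track each other up to the constant factor sqrt(2/π), which is why either works as a "typical distance", and the analytically nicer RMS usually wins.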

------
justifier
love this random walk..dance?.. video:

neutral dynamics

[https://www.youtube.com/watch?v=5P6Dihkrvus](https://www.youtube.com/watch?v=5P6Dihkrvus)

