
Introduction to Markov Processes - lettergram
http://austingwalters.com/introduction-to-markov-processes/
======
daniel-levin
Markov Chains are really cool. One of the many applications [0] is that you
can 'train' them on a text corpus, and then by repeatedly generating random
numbers, create sentences that are (mostly) grammatically sound but otherwise
absolute nonsense [1], [2].

[0]
[http://en.wikipedia.org/wiki/Markov_chain#Applications](http://en.wikipedia.org/wiki/Markov_chain#Applications)

[1]
[http://en.wikipedia.org/wiki/Mark_V_Shaney](http://en.wikipedia.org/wiki/Mark_V_Shaney)

[2]
[http://kingjamesprogramming.tumblr.com/](http://kingjamesprogramming.tumblr.com/)
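The "train on a corpus, then generate by random walks" idea can be sketched in a few lines of Python. This is a minimal word-level sketch; the toy corpus and function names are my own, not from the article or the linked projects:

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a first-order transition table: word -> list of observed successors."""
    words = corpus.split()
    table = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        table[cur].append(nxt)
    return table

def generate(table, start, length=10):
    """Walk the chain: from each word, pick one of its observed successors at random."""
    word, out = start, [start]
    for _ in range(length - 1):
        successors = table.get(word)
        if not successors:
            break  # dead end: the word only appeared at the end of the corpus
        word = random.choice(successors)
        out.append(word)
    return " ".join(out)

table = train("the cat sat on the mat and the dog sat on the rug")
print(generate(table, "the"))
```

Because successors are stored with repetition, frequent transitions are picked proportionally more often, which is what makes the output "mostly grammatical" nonsense.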

------
kastnerkyle
Google search is the ultimate example (IMO) of an effective, simple, and
powerful Markov chain algorithm.

For anyone interested in this stuff, check out "The $25 Billion
Eigenvector"[0]. I did a simple PageRank experiment in an IPython
notebook[1], demonstrating how PageRank works, based on some UBC lectures.

Markov Chain Monte Carlo sampling is the basis of many Bayesian inference
techniques, and Markov chains also show up extensively in classic speech
recognition pipelines, under various forms of Hidden Markov Models (VB-HMM,
GMM-HMM). This stuff forms the foundation of statistical machine learning!

Great post - this simple example serves as a great introduction! Of course,
the rabbit hole is deep...

[0]
[http://www.rose-hulman.edu/~bryan/googleFinalVersionFixed.pdf](http://www.rose-hulman.edu/~bryan/googleFinalVersionFixed.pdf)

[1]
[http://kastnerkyle.github.io/blog/2014/04/16/simple-page-rank/](http://kastnerkyle.github.io/blog/2014/04/16/simple-page-rank/)
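The power-iteration idea behind PageRank can be sketched quickly with NumPy. The 4-page link graph below is made up for illustration (see the paper and notebook above for real walkthroughs); the damping factor 0.85 is the value commonly cited for PageRank:

```python
import numpy as np

# Hypothetical 4-page link graph: column j is page j's outgoing links,
# normalized so each column sums to 1 (a column-stochastic matrix).
A = np.array([
    [0,   0,   1, 1/2],
    [1/3, 0,   0, 0  ],
    [1/3, 1/2, 0, 1/2],
    [1/3, 1/2, 0, 0  ],
])

# Damping: with probability 1-d the surfer jumps to a uniformly random page,
# which keeps the chain irreducible.
d = 0.85
n = A.shape[0]
M = d * A + (1 - d) / n * np.ones((n, n))

# Power iteration: repeatedly apply M; the rank vector converges to the
# dominant eigenvector of M, whose eigenvalue is 1.
r = np.ones(n) / n
for _ in range(100):
    r = M @ r

print(r)  # PageRank scores, summing to 1
```

The fixed point r = M r is exactly the "eigenvector with eigenvalue 1" that the $25 billion eigenvector paper is named after.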

~~~
lcedp
That's amazing.

One could imagine an n-dimensional space (n = number of pages) in which any
point represents some possible pagerank distribution. Treating the backlink
information for each of the n pages as an axis in this new space gives the
transformation matrix A. Our true pagerank would be the point (vector) in this
space which doesn't change its position after applying the transform described
by A.

Don't know what to infer from that though :)

------
tlarkworthy
Markov chains can pop up _everywhere_. Not shown in this excellent intro is
that the stationary distribution can be calculated as the eigenvector with an
eigenvalue of 1.

I have used them in practice for load testing. Have thousands of bots randomly
take actions using a transition table. The aggregate behaviour can be predicted
and converted into the frequency domain using the above trick and some other
stuff ([http://edinburghhacklab.com/2014/03/taming-randomized-load-testing-2/](http://edinburghhacklab.com/2014/03/taming-randomized-load-testing-2/))
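The eigenvector trick above can be sketched with NumPy: the stationary distribution is a left eigenvector of the transition matrix with eigenvalue 1 (the 3-state matrix below is made up for illustration):

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1): P[i, j] is the
# probability of moving from state i to state j.
P = np.array([
    [0.9, 0.05, 0.05],
    [0.1, 0.8,  0.1 ],
    [0.3, 0.3,  0.4 ],
])

# The stationary distribution pi satisfies pi P = pi, i.e. pi is a left
# eigenvector of P with eigenvalue 1 (a right eigenvector of P.T).
vals, vecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(vals - 1))   # locate the eigenvalue closest to 1
pi = np.real(vecs[:, i])
pi /= pi.sum()                    # normalize to a probability vector

print(pi)       # stationary distribution
print(pi @ P)   # same vector: unchanged by taking one more step
```

This is the distribution thousands of bots would settle into in aggregate, which is what makes the load predictable.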

~~~
eoinmurray92
I think you mean eigenvalue of 0 for the stationary distribution.

~~~
tlarkworthy
no

[http://en.wikipedia.org/wiki/Markov_chain#Stationary_distribution_relation_to_eigenvectors_and_simplexes](http://en.wikipedia.org/wiki/Markov_chain#Stationary_distribution_relation_to_eigenvectors_and_simplexes)

~~~
eoinmurray92
You are correct, I was thinking of the master equation of a Markov process. In
that case the eigenvalue of 0 gives the stationary distribution under a
spectral expansion.

~~~
tlarkworthy
Got any links on that? Sounds interesting. Does that calculate its dynamic
properties or something?

------
graycat
Hmm .... He neglected classification of states and asymptotic behavior. His
"statistical probabilities" is not good. He omitted the role of conditional
independence.

Instead read, say, Cinlar, 'Introduction to Stochastic Processes'.

~~~
aet
You are comparing a quick blog post to a monograph.

