Edit: I wish I'd had access to this article when I was going through that process. This is really well done.
It's not weird; the guy is brilliant and has had a large impact on the community as a whole.
Also a great exercise in scientific computing.
Eventually I plan to open source the robotics curriculum that spawned this project, along with some other stuff related to starting a new school (something I'm working on with a team that was originally inspired by the XQ Superschool competition). I want to put it through its paces with some real children first to make sure it's not completely off base though.
It sets up the discrete-time linear system and then uses a minimization principle to show what's going on. I can highly recommend it especially for people coming to Kalman filters from a math or optimization background.
For people wondering about application, one of the popular recent uses is for sensor fusion in virtual reality.
Thus, a really efficient Bayesian regression algorithm.
Can you summarize or link to this relationship :) ?
In Kalman filters, you assume the unobserved state is Gaussian-ish and evolves continuously according to linear-ish dynamics (depending on which flavor of Kalman filter is being used).
In HMMs, you assume the hidden state is one of a few discrete classes, and transitions among these states follow a discrete Markov chain.
In my experience, the algorithms are often pretty different for these two cases, but the underlying idea is very similar.
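To make the parallel concrete, here's a toy sketch (my own, not from the article): both filters alternate a predict step (push the belief through the dynamics) and an update step (reweight by the measurement likelihood). The Kalman filter carries a Gaussian belief as (mean, variance); the HMM forward algorithm carries a categorical belief as a probability vector. All the numbers below are made-up illustrative values.

```python
import math

# --- 1D Kalman filter step: Gaussian belief (mean, var) ---
def kalman_step(mean, var, z, a=1.0, q=0.1, h=1.0, r=0.5):
    # predict: linear dynamics x' = a*x + noise with variance q
    mean, var = a * mean, a * a * var + q
    # update: Gaussian measurement z = h*x + noise with variance r
    k = var * h / (h * h * var + r)          # Kalman gain
    return mean + k * (z - h * mean), (1.0 - k * h) * var

# --- 2-state HMM forward step: categorical belief ---
def hmm_step(belief, z, T, emit):
    # predict: push belief through the transition matrix T
    pred = [sum(belief[i] * T[i][j] for i in range(2)) for j in range(2)]
    # update: multiply by emission likelihood, then renormalize
    post = [pred[j] * emit[j](z) for j in range(2)]
    s = sum(post)
    return [p / s for p in post]

# Kalman: start at mean 0, var 1, see observation z = 0.8
mean, var = kalman_step(0.0, 1.0, z=0.8)

# HMM: uniform prior, sticky transitions, Gaussian emissions around 0 and 1
belief = [0.5, 0.5]
T = [[0.9, 0.1], [0.2, 0.8]]
gauss = lambda mu: (lambda z: math.exp(-0.5 * (z - mu) ** 2))
belief = hmm_step(belief, 0.8, T, [gauss(0.0), gauss(1.0)])
```

The code paths look different (scalar algebra vs. matrix-vector products over classes), but both are the same Bayesian predict/update recursion, which is the "underlying idea" being referred to.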
It is just formulated a bit differently such that incremental update complexity depends on the dimensionality of the observation, not the dimensionality of the estimated state. Depending on the dimensions this can be a lot more efficient.
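A quick sketch of why (my own illustration, not from the thread): in the standard Kalman measurement update, the only matrix inversion is of the m x m innovation covariance S, where m is the observation dimension; the n x n state covariance P is never inverted. All matrix shapes and toy values below are assumptions for illustration.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One measurement update.
    x: (n,) state mean, P: (n, n) covariance,
    z: (m,) observation, H: (m, n), R: (m, m)."""
    S = H @ P @ H.T + R                # (m, m) innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # gain: the inversion is m x m only
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# toy numbers: n = 3 state dims, m = 1 observation dim,
# so the "inversion" is of a 1x1 matrix regardless of state size
x = np.zeros(3)
P = np.eye(3)
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.5]])
x, P = kalman_update(x, P, np.array([1.0]), H, R)
```

With n large and m small (the usual sensor-fusion situation), this is much cheaper per step than anything requiring an n x n inverse.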