
The magic of the Kalman filter, in pictures - tbabb
http://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/
======
RogerL
I'll be shameless and point you to my book on Kalman filtering, which I wrote
in IPython Notebook so that you can experiment within your browser.

[https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-
Pyt...](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python)

~~~
bluesmoon
Nice. Julia version here:
[https://github.com/wkearn/Kalman.jl](https://github.com/wkearn/Kalman.jl)

~~~
humbledrone
What do you mean by "Julia version?" I thought you meant that you were linking
to a version of the ipython-notebook-based Kalman Filter textbook that had
been ported to Julia. But... you seem to only be linking to a Julia library
that implements Kalman Filters.

~~~
coldtea
Obviously the important thing (and the one being ported) is the Kalman filter,
which is what we're discussing in this post.

Not specifically IPython Notebook versions of it.

------
cr4zy
Another cool resource for learning to apply and implement Kalman filters is
Udacity's AI for Robotics (focused on self driving cars) course by Sebastian
Thrun. Apparently Kalman filters are how Google's self driving cars predict
the velocity of other cars from their position.

[https://www.udacity.com/course/artificial-intelligence-
for-r...](https://www.udacity.com/course/artificial-intelligence-for-robotics
--cs373)
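The position-to-velocity trick mentioned above can be sketched with a constant-velocity Kalman filter: feed it noisy position measurements only, and the velocity emerges in the state estimate. This is my own toy illustration, not code from the course; all matrices and noise levels are made-up choices.

```python
import numpy as np

# State is [position, velocity]; we measure position only and let the
# filter infer the velocity. dt, Q, R are illustrative assumptions.
dt = 1.0
F = np.array([[1.0, dt],    # state transition: position += velocity * dt
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])  # measurement model: we only observe position
Q = 0.01 * np.eye(2)        # process noise (assumed)
R = np.array([[1.0]])       # measurement noise (assumed)

x = np.array([[0.0], [0.0]])  # initial state: position 0, velocity 0
P = 10.0 * np.eye(2)          # large initial uncertainty

true_velocity = 2.0
rng = np.random.default_rng(0)
for t in range(1, 50):
    z = np.array([[true_velocity * t + rng.normal(0.0, 1.0)]])  # noisy position
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(x[1, 0])  # estimated velocity, should be near 2.0
```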

------
engi_nerd
Thank you to everyone who has posted resources in this thread. Just yesterday
I was talking to a younger engineer about how one of our GPS systems works. I
know the complete unit has a GPS and an IMU, and I knew _of_ the Kalman
filter, but was unable to explain it beyond "it combines the GPS and IMU
inputs to create a position and velocity solution with greater precision and
accuracy than can be achieved with either source separately". Now I have much
reading to do, and so does the young engineer. Thanks! This will help for the
long wait I have in the doctor's office tomorrow...

------
jefvader
"In other words, the new best estimate is a prediction made from previous best
estimate, plus a correction for known external influences.

And the new uncertainty is predicted from the old uncertainty, with some
additional uncertainty from the environment."

Crystal clear - great article, thanks!

I also recommend Ramsey Faragher's lecture notes on teaching the Kalman
Filter:
[http://www.cl.cam.ac.uk/~rmf25/papers/Understanding%20the%20...](http://www.cl.cam.ac.uk/~rmf25/papers/Understanding%20the%20Basis%20of%20the%20Kalman%20Filter.pdf)
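The two quoted sentences are exactly the predict step of the filter. A minimal sketch, using the article's symbols (F for the prediction matrix, B and u for the known external influence, Q for the environment noise); the function name and example values are mine:

```python
import numpy as np

def predict(x, P, F, B, u, Q):
    # "prediction made from previous best estimate, plus a correction
    #  for known external influences"
    x_new = F @ x + B @ u
    # "new uncertainty is predicted from the old uncertainty, with some
    #  additional uncertainty from the environment"
    P_new = F @ P @ F.T + Q
    return x_new, P_new

# Toy usage: identity dynamics, a control nudge of 0.5 on the first component
x, P = predict(np.array([[1.0], [1.0]]), np.eye(2),
               np.eye(2), np.eye(2),
               np.array([[0.5], [0.0]]), 0.1 * np.eye(2))
```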

------
sytelus
It's much easier to understand Kalman filtering in one dimension:
[http://credentiality2.blogspot.com/2010/08/simple-kalman-
fil...](http://credentiality2.blogspot.com/2010/08/simple-kalman-filter-
example.html)
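In one dimension the whole thing fits in a few lines of plain arithmetic. A sketch in the spirit of the linked post (variable names and noise constants are my own, illustrative choices):

```python
# Scalar Kalman filter for a roughly constant value: predict, then blend
# the prediction and the measurement in proportion to their uncertainties.
def kalman_1d(measurements, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: state assumed constant, uncertainty grows by q
        p = p + q
        # Update: gain k says how much to trust the new measurement
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy readings of a value near 1.0; the estimate settles close to it.
readings = [0.9, 1.1, 1.05, 0.95, 1.0, 1.02, 0.98]
print(kalman_1d(readings)[-1])
```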

~~~
boxfire
I find the Kalman filter explanation in terms of the Cholesky decomposition by
R. Eubank to be excellent. It has a prerequisite of knowing linear regression
(the linear-algebra, actually-know-what-you-are-doing kind), but it made the
inference the Kalman filter performs very clear. This is definitely not a
10-minute blog-post explanation, but understanding how it functions on a deep
level is worth it.

[http://read.pudn.com/downloads158/ebook/707373/A_Kalman_Filt...](http://read.pudn.com/downloads158/ebook/707373/A_Kalman_Filter_Primer_-
_Eubank_-2006.pdf)

I don't know why people jump through hoops to start out with a
multidimensional Kalman filter, and even sometimes dive deep into complex
topics. The subject seems to suffer from the same terminology and confusion
gap as pre-Kolmogorov statistics.

------
jongraehl
I like particle filtering because it's easy to understand and implement -
[https://en.wikipedia.org/wiki/Monte_Carlo_localization](https://en.wikipedia.org/wiki/Monte_Carlo_localization)
\- and it's correct even for non-Gaussian uncertainty.

Is Kalman filtering computationally more efficient (obviously particle
filtering is stochastic and so trades off accuracy for compute) or does it
have some other advantage?

~~~
thetwiceler
Firstly, for a linear/Gaussian system, Kalman filtering is optimal; that is,
it produces exactly the correct posterior distribution. As you mention,
particle filters cannot achieve this.

It's a rough heuristic that to achieve a certain accuracy for a
linear/Gaussian system with a particle filter, you need a number of particles
exponential in the number of dimensions of the system. I feel like this could
probably be stated more formally and shown, but I don't think I've seen
anything in that vein. The Kalman filter, being simply matrix operations,
should scale as the number of dimensions cubed.

So yes, Kalman filtering is computationally more efficient, and (obviously)
more accurate.

I also wouldn't discount the fact that the Kalman filter is, in a sense,
simpler than the particle filter for a linear/Gaussian system; you don't need
to worry about resampling or setting a good number of particles, and you don't
need to compute estimates of the mean/covariance statistics (which are
sufficient since the posterior should be a Gaussian).
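The agreement-in-the-limit claim is easy to check numerically. A rough sketch (my own toy, not from the comment) running the exact scalar Kalman update and a bootstrap particle filter on the identical linear/Gaussian model; the particle filter's posterior mean only matches the Kalman answer when the particle count is large:

```python
import numpy as np

rng = np.random.default_rng(1)
q, r = 0.1, 0.5             # process and measurement noise variances (assumed)
zs = [1.2, 0.8, 1.5, 1.1]   # made-up measurements

# Kalman filter: the exact posterior, a handful of scalar operations
x, p = 0.0, 1.0
for z in zs:
    p += q                                # predict
    k = p / (p + r)                       # gain
    x, p = x + k * (z - x), (1 - k) * p   # update

# Bootstrap particle filter on the same model
n = 100_000
particles = rng.normal(0.0, 1.0, n)
for z in zs:
    particles += rng.normal(0.0, np.sqrt(q), n)    # propagate
    w = np.exp(-0.5 * (z - particles) ** 2 / r)    # likelihood weights
    w /= w.sum()
    particles = rng.choice(particles, n, p=w)      # resample

print(x, particles.mean())  # the two means should be close
```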

~~~
RogerL
Under the same conditions of linearity and Gaussian noise, a particle filter
can get essentially optimal performance too. Of course, why would you, when
the KF is so much more efficient?

~~~
papaf
This is true but the assumption of Gaussian noise carries with it the
assumption that the model is correct.

Most models are not correct which is why particle filters perform much better
than people expect.

------
papaf
This appears to be a really nice writeup. However, at the end:

 _For nonlinear systems, we use the extended Kalman filter, which works by
simply linearizing the predictions and measurements about their mean._

I would recommend looking at an Unscented Kalman filter:

[http://www.control.aau.dk/~tb/ESIF/slides/ESIF_6.pdf](http://www.control.aau.dk/~tb/ESIF/slides/ESIF_6.pdf)

which sucks a lot less.

~~~
Qworg
Even better, use a Sigma Point Kalman Filter.

~~~
papaf
The Unscented Kalman Filter is a type of Sigma Point Kalman Filter -- I just
had to look this up to be sure, but it is kind of obvious.

~~~
Qworg
Certainly. However, if you look at the popular literature, the UKF is far more
represented than the SPKF, even given the latter's better performance on
highly nonlinear systems.

------
qntty
A few months ago I was trying to wrap my head around Kalman filters and this
was the clearest explanation I found anywhere:

[http://www.cs.unc.edu/~welch/kalman/media/pdf/maybeck_ch1.pd...](http://www.cs.unc.edu/~welch/kalman/media/pdf/maybeck_ch1.pdf)

------
nilkn
The literature on Kalman filters has traditionally been horrendous to a degree
that is hard to believe, so this is a fantastic resource.

------
leni536
It was a surprisingly light read. I really liked the strong intuition based
reasoning in each step.

When I was playing with different compass implementations on F-Droid I noticed
that many of them use a Kalman filter to reduce the noise in the raw sensor
data. Some of them (maybe only one) had problems near angle 0, where the
sensor data jumps frequently between almost 2pi and slightly above 0. The
problem is that the assumption that the measurement uncertainty is
Gaussian-distributed breaks down badly there, since the distribution becomes
half a Gaussian near 0 and the other half near 2pi. I don't know the general
approach to solving this. I would solve it with either:

\- Convert the angle into a unit vector and use that as the measurement
input. Then predict the actual vector and use its orientation for the
compass.

\- Move the periodic window boundaries with a slow relaxation. So if I hold
my compass at angle 0, all angle data is transformed into the [-pi, pi)
range. If I hold it toward pi, the raw data is transformed into the [0, 2pi)
range.

TL;DR: Be careful when applying a Kalman filter to angles (or, more
generally, quantities on R/Z).
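The commenter's first suggestion can be sketched like this (my code, with illustrative constants): filter the heading as a unit vector (cos θ, sin θ) and recover the angle with atan2, so the 2pi → 0 jump never appears. Note this runs two independent scalar filters on the components rather than a true vector Kalman filter, which is a simplification.

```python
import math

def make_heading_filter(q=1e-3, r=0.2):
    # Track cos and sin of the heading with two scalar filters each
    # carrying its own uncertainty p; q and r are assumed noise levels.
    state = {"c": 1.0, "s": 0.0, "pc": 1.0, "ps": 1.0}

    def update(theta_measured):
        zc, zs = math.cos(theta_measured), math.sin(theta_measured)
        for key, pkey, z in (("c", "pc", zc), ("s", "ps", zs)):
            p = state[pkey] + q          # predict: uncertainty grows
            k = p / (p + r)              # gain
            state[key] += k * (z - state[key])
            state[pkey] = (1 - k) * p
        # Recover the heading; atan2 handles all quadrants cleanly
        return math.atan2(state["s"], state["c"]) % (2 * math.pi)

    return update

f = make_heading_filter()
# Measurements jittering across the 2pi / 0 boundary no longer confuse it:
for z in [6.27, 0.02, 6.25, 0.01, 6.28]:
    theta = f(z)
```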

------
cshimmin
Perhaps this is a stupid question, but why is it called a _filter_? To me it
just seems like a (very clever) linear projection.

~~~
robotresearcher
As far as I know there is no deep reason, but it's traditional in the signal
processing community, where a filter is a function that modifies a signal.

[https://en.wikipedia.org/wiki/Filter_(signal_processing)](https://en.wikipedia.org/wiki/Filter_\(signal_processing\))

Kalman trained as an electrical engineer (MIT and Columbia) so that's the
lingo he was used to. There's a brief note in his Wikipedia page about having
trouble publishing the technique at first. I'd like to hear the story.

[https://en.wikipedia.org/wiki/Rudolf_E._Kálmán](https://en.wikipedia.org/wiki/Rudolf_E._Kálmán)

------
sharp11
This is great! Back in the '90s, I played with Kalman filters for a predictive
navigation system. I had a couple of textbooks, but it was a bear to make
sense of the math. Really wish I'd had this back then!

Nav applications are the ones you see most often; it would be interesting to
see an example from a completely different domain.

~~~
kmundnic
There are plenty of applications in system control, when you want to estimate
a variable within a system (sometimes, it's just too hard or expensive to set
a sensor).

I've been close to applications that estimate the state-of-health and state-
of-charge of lithium-ion batteries. I myself worked on volatility estimation
of financial returns using a particle filter (very similar in principle, but
usable with non-additive and/or non-Gaussian noise).

Here's some work on applications in Failure Prognosis:
[http://icsl.gatech.edu/aa/images/e/ed/Pfafdfp.pdf](http://icsl.gatech.edu/aa/images/e/ed/Pfafdfp.pdf)

------
acidburnNSA
We use these in the nuclear data community. Measurements of interaction
probabilities between neutrons and nuclides are really complex and uncertain,
especially for a few reactions like inelastic scattering. Kalman filters help
tie the nuclear models and experiments together.

------
hebdo
Awesome! Kind of similar to the Viterbi algorithm, except that Kalman is on-
line, while Viterbi works on the entire observed sequence at once, after it is
fully known.

~~~
kragen
Not only is it possible to run the Viterbi algorithm on-line, there are
probably half a dozen pieces of hardware _currently_ running the Viterbi
algorithm on-line within arm's reach of you:
[https://en.wikipedia.org/wiki/Viterbi_decoder](https://en.wikipedia.org/wiki/Viterbi_decoder)

------
joecomotion
In a past life, when I was mixing GPS, accelerometer, gyros, and tachometer
sensors, Aided Navigation by Jay A. Farrell ([http://www.amazon.com/Aided-
Navigation-High-Rate-Sensors/dp/...](http://www.amazon.com/Aided-Navigation-
High-Rate-Sensors/dp/0071493298)) was super handy.

Optimal State Estimation by Dan Simon helped, too:
[http://www.amazon.com/Optimal-State-Estimation-Nonlinear-
App...](http://www.amazon.com/Optimal-State-Estimation-Nonlinear-
Approaches/dp/0471708585)

------
elliptic
Out of curiosity, how much better, typically, is the 'optimal' blending rule
than blending with some random or 'reasonable' weights? Never really thought
to ask myself that question...

------
pm90
I still find it astonishing how good an abstraction matrices are for
representing physical data. In CS terms, matrix notation has made the
description of complex multidimensional systems scalable: instead of adding a
term to every equation for every new variable, you represent ALL the
variables in one matrix and change just that matrix. This way of representing
things also makes the relationships between the numbers easier to see.

Little wonder that beginner physicists spend so much time mastering matrices
and linear algebra.

------
codinghorror
This is a beautiful, well written, and clear explanation. The web needs much
more of this, please!

------
sajt
Invented by
[https://en.wikipedia.org/wiki/Rudolf_E._K%C3%A1lm%C3%A1n](https://en.wikipedia.org/wiki/Rudolf_E._K%C3%A1lm%C3%A1n)

------
monochromatic
This is the clearest description I've ever seen of a Kalman filter.

------
sebastianavina
As a mechatronics engineer, I didn't understand Kalman filters all that well.

