
Particle Filter - keyboardman
https://leimao.github.io/project/Particle-Filter/
======
oehpr
Here's a short little video I found helpful in understanding the Particle
Filter algorithm:
[https://www.youtube.com/watch?v=aUkBa1zMKv4](https://www.youtube.com/watch?v=aUkBa1zMKv4)

------
jvanderbot
I've never found a practical implementation of particle filters that was
deployed on a real system. PFs are a poor choice for tracking a solution that
does not have wildly divergent measurements that can cause "lost robot" types
of situations. Even then, a PF is a fine temporary, high-cost stopgap for
converging to a "filterable" solution with only one predominant mode / a
tight cluster of particles. At that point, a closed-form approximate filter
makes better sense in terms of CPU cost, memory, etc. Save the memory for
feature maps and the CPU cost for iterations or, at worst, a few extra
hypotheses on your primary filter.

~~~
galangalalgol
As you say, measurements with multimodal error, but also measurements with
horribly nonlinear observation or time models. It has to be pretty bad,
though; otherwise a UKF would be better.

~~~
jvanderbot
The UKF is, in my mind, a principled version of the particle filter. It's a
really clean way of operating, and I can't emphasize enough the power of
explicitly conditioning on decision variables, i.e. multi-hypothesis
formulations of UKF, EKF, or even batch filters, all of which require orders
of magnitude less CPU / memory.

(E.g., I'm not sure about this echo or that, so I'll propagate, explicitly
conditioned on those unknowns, two reasonable hypotheses that are effectively
unimodal, rather than sampling the entire posterior probability space.) The
exponential set of probabilities then allows you to take action to remove
hypotheses or trim by weight. And this is only required when you encounter
huge uncertainties, nonlinearities, or multimodalities, so it is explicitly
adaptive to the situation. I've never seen a sampling or reweighting strategy
that does a decent job of this. PF is throwing CPU and memory at the problem,
and it does reasonably well, but for anything I've worked on or dug into, the
CPU and memory were always better used elsewhere in the system.
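
To make the bookkeeping concrete, here is a minimal sketch (my own
illustration, not from any particular system; the labels and the pruning
threshold are made up) of carrying a few explicitly conditioned hypotheses,
reweighting them by measurement likelihood, and pruning by weight:

    import numpy as np

    # Each hypothesis would wrap its own EKF/UKF state; here just a stub.
    hypotheses = [
        {"label": "echo A is the target", "weight": 0.5, "state": np.zeros(3)},
        {"label": "echo B is the target", "weight": 0.5, "state": np.zeros(3)},
    ]

    def reweight(hypotheses, likelihoods):
        """Scale each hypothesis weight by its measurement likelihood,
        then renormalize so the weights sum to one."""
        for h, lik in zip(hypotheses, likelihoods):
            h["weight"] *= lik
        total = sum(h["weight"] for h in hypotheses)
        for h in hypotheses:
            h["weight"] /= total

    # A measurement that fits hypothesis A much better than B:
    reweight(hypotheses, likelihoods=[0.9, 0.1])

    # Drop anything that has become implausible.
    hypotheses = [h for h in hypotheses if h["weight"] > 0.05]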

~~~
galangalalgol
Agreed. By using a PF you are throwing your hands up and admitting you have
no idea how to model your observations. If you can correctly model any aspect
of the problem, that will be both better performing and more efficient. I use
PFs mostly to help me figure out a distribution. Even a UKF is usually
overkill for me. Anything less than 2n+1 sigma points gets beaten by a
1st-order EKF.
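
For reference, the 2n+1 sigma points come from the state mean plus and minus
scaled columns of a covariance square root. A minimal sketch of the standard
unscented transform (simplified scaling; a full UKF also computes per-point
weights):

    import numpy as np

    def sigma_points(mean, cov, lam=1.0):
        """Generate the 2n+1 sigma points of the unscented transform
        (simplified scaling; a full UKF also computes per-point weights)."""
        n = len(mean)
        root = np.linalg.cholesky((n + lam) * cov)  # matrix square root
        pts = [mean]
        for i in range(n):
            pts.append(mean + root[:, i])
            pts.append(mean - root[:, i])
        return np.array(pts)

    pts = sigma_points(np.zeros(2), np.eye(2))
    print(pts.shape)  # (5, 2): 2n+1 points for n=2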

------
cf
This is a pointer that's a bit buried: for all the fanciness of different
particle filtering algorithms, you are likely not using enough particles.
Unless you are having trouble related to running out of memory, consider
using more particles. Even fancier things like rejuvenation and stratified
sampling benefit from more particles.
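
A standard diagnostic for "do I have enough particles" (my addition, not
from the comment above) is the effective sample size of the normalized
weights; if it is a small fraction of the particle count, the filter is
starving:

    import numpy as np

    def effective_sample_size(weights):
        """ESS = 1 / sum(w_i^2) for normalized weights w_i."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return 1.0 / np.sum(w ** 2)

    print(effective_sample_size([0.7, 0.1, 0.1, 0.05, 0.05]))  # ~1.9 of 5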

------
rgovostes
I can't quite understand from this post what a particle filter is or what the
project is. Is this a simulation of a robot using sensor readings to estimate
its position in a maze? How are particles emitted, and how does the sensor
measure them? What does the "probability" of a particle mean? Does it combine
any information about its previous position estimate? And how would this be
applied in the real world without a perfect model of the environment?

~~~
krisoft
> Is this a simulation of a robot using sensor readings to estimate its
> position in a maze?

Yes.

> How are particles emitted, and how does the sensor measure them?

I think I understand your question, and there might be a misunderstanding
here. The "particles" are not emitted in the traditional sense. They are
merely records in an internal data structure inside the robot's head.

The problem: your robot has a map of the scene but doesn't know where it is.
You have distance-measuring sensors on four sides of the robot, but they
can't see enough detail to say where you are on the map.

What the "particle filter" aproach suggest is that you keep a list of
hypothesises of where the robot might be. Each hypothesis is a full
parameterisation (x, y, heading in 2d), and for historical reasons we call
them "particles".

At the beginning you initialise a bunch of these "particles" with random x,
y, h values. Then, in a loop, you take measurements with the sensors and
calculate how plausible each hypothesis is.
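
As a sketch in Python (the map extent and particle count are made-up
illustrative values), the initialisation is just random draws over the state
space:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 1000                    # number of hypotheses ("particles")
    MAP_W, MAP_H = 10.0, 10.0   # assumed map extent in meters

    # Each row is one hypothesis: (x, y, heading).
    particles = np.column_stack([
        rng.uniform(0.0, MAP_W, N),
        rng.uniform(0.0, MAP_H, N),
        rng.uniform(0.0, 2.0 * np.pi, N),
    ])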

Maybe all your sensors can tell you is that there are walls a meter away at
the rear and to the sides, but there is 3 m of free space forward. This would
make the "particles" which are in the middle of a big room very unlikely,
while any particle which puts you in a cul-de-sac heading out would be more
likely.
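
In code, that plausibility check might look like the sketch below, where
predict_ranges is a hypothetical helper (stubbed out here) that a real
implementation would replace with ray-casting the map from the particle's
pose:

    import numpy as np

    def predict_ranges(particle):
        """Hypothetical stand-in: a real version would ray-cast the map
        from (x, y, heading) to predict the four sensor readings."""
        return np.array([3.0, 1.0, 1.0, 1.0])  # forward, rear, left, right

    def plausibility(particle, measured, sigma=0.2):
        """Gaussian likelihood of the measured ranges given the pose."""
        err = measured - predict_ranges(particle)
        return np.exp(-0.5 * np.sum((err / sigma) ** 2))

    measured = np.array([3.0, 1.0, 1.0, 1.0])
    weight = plausibility(np.array([5.0, 5.0, 0.0]), measured)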

Next you would like to keep the more likely hypotheses while rejecting the
least likely ones. What the "particle filter" approach suggests is that you
re-sample your particles in such a way that the probability of keeping a
particle equals its calculated "plausibility".
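
The resampling step can be as simple as drawing particle indices in
proportion to the normalised weights (multinomial resampling; fancier
schemes exist, but this is the basic idea):

    import numpy as np

    rng = np.random.default_rng(0)

    def resample(particles, weights):
        """Draw N new particles, keeping each old one with probability
        proportional to its plausibility (multinomial resampling)."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx]

    particles = rng.uniform(0.0, 10.0, size=(1000, 3))
    weights = rng.uniform(0.0, 1.0, size=1000)
    particles = resample(particles, weights)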

In the next step you move your robot using your motors. Usually you don't
have perfect actuation, but you do have a probabilistic idea of how much you
might have traveled. You then update each particle by some movement sampled
from that probabilistic motion model. After that, repeat from the sensing
step.
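
A sketch of that motion update, with made-up Gaussian noise levels standing
in for a real motion model:

    import numpy as np

    rng = np.random.default_rng(0)

    def motion_update(particles, distance, turn,
                      dist_noise=0.05, turn_noise=0.02):
        """Shift every particle by the commanded motion plus noise
        sampled from a (made-up) Gaussian motion model."""
        n = len(particles)
        d = distance + rng.normal(0.0, dist_noise, n)
        particles[:, 2] += turn + rng.normal(0.0, turn_noise, n)
        particles[:, 0] += d * np.cos(particles[:, 2])
        particles[:, 1] += d * np.sin(particles[:, 2])
        return particles

    particles = rng.uniform(0.0, 10.0, size=(1000, 3))
    particles = motion_update(particles, distance=0.5, turn=0.1)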

What usually happens is that after a few iterations all the particles
collapse to a few well-localised spots on the map. Very frequently to a
single spot, and that means the "kidnapped" robot has localised itself. This
happens because, given enough data, all the implausible theories can be
eliminated, and what remains is where you are.

> What does the "probability" of a particle mean?

It gives a measure of how well the location represented by the particle (a
particular theory) fits with the last sensor measurements.

> Does it combine any information about its previous position estimate?

Indirectly. The re-sampling step guarantees that a particle is there because
it was at least somewhat plausible in the past, but you don't have to keep an
ever-growing history explicitly.

> And how would this be applied in the real world without a perfect model of
> the environment?

You need some model of the environment (an occupancy map, for example), but
it doesn't need to be perfect. Since everything is already probabilistic, you
can have a probabilistic map. That changes how you calculate the plausibility
(probability) of a given hypothesis (particle), but it doesn't change the
whole algorithm.

As for how it is applied in the real world... well, I will be frank with you:
I have seen many robots localise in different ways, and none of them used
particle filters. Maybe we were all missing out on something cool. :) Thrun
et al. describe them in Probabilistic Robotics as something that was useful
for them in the past. So you have that.

------
blauditore
The term "monte carlo methods" in statistics/probability bothers me, since it
literally just means trying out, or simulating. I can't help but think someone
started using it to just sound smart. Such shenanigans seem make science
harder to grasp especially for newcomers, so I think it would be better to
keep naming simple and obvious where possible. There are enough topics that
are too abstract, so it can't be done there.

~~~
n3ur0n
> just means trying out

This statement trivializes a very hard problem.

> literally just means trying out, or simulating

Simulating is an incredibly hard problem, and MC methods and theory are an
incredibly rich area of study. Some tools I use for my work on probabilistic
machine learning models are MCMC techniques like HMC (Hamiltonian Monte
Carlo) and variance reduction techniques (Rao-Blackwellization). If you would
like to learn more, here is a great course:
[https://statweb.stanford.edu/~owen/mc/](https://statweb.stanford.edu/~owen/mc/)
-- you can take a look at the syllabus. Also, Casella Berger is a standard MC
method book.
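
As a trivial illustration of the "just simulating" part (the hard parts are
everything around it: convergence rates, variance reduction, designing the
sampler), plain Monte Carlo estimates an expectation by averaging samples:

    import numpy as np

    rng = np.random.default_rng(0)

    # Estimate E[X^2] for X ~ N(0, 1); the true value is 1.
    n = 100_000
    x = rng.standard_normal(n)
    estimate = np.mean(x ** 2)
    std_err = np.std(x ** 2) / np.sqrt(n)  # error shrinks like 1/sqrt(n)
    print(estimate, std_err)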

------
yash8141
Does anyone have an idea how to use a particle filter for time series
forecasting?

