Robot Localization in Maze Using Particle Filter (github.com/leimao)
23 points by keyboardman on Feb 20, 2020 | 6 comments




This comment[1] made it clear to me what exactly was going on. Also, on the author's GitHub, there's a link to a similar project,[2] which runs interactively in a browser, making the concept a bit more accessible.

Imagine you were dropped into a familiar setting, but blindfolded. You know the place literally like the back of your hand, but you're only allowed to observe your location in a very limited sense -- say, every 10 seconds you can reach out in the four cardinal directions to feel what, if anything, is around you -- though you can otherwise move around freely. Given enough observations, you can gradually establish your location precisely.
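
Here's roughly that idea in code -- a toy sketch of my own, not the repo's implementation: scatter "particles" (guessed positions) over a known grid maze, and score each guess by how well its predicted four-direction "feel" matches what the blindfolded robot actually senses.

    import random

    # Toy map the robot knows by heart: 1 = wall, 0 = open cell.
    MAZE = [
        [1, 1, 1, 1, 1, 1],
        [1, 0, 0, 0, 0, 1],
        [1, 0, 1, 0, 0, 1],
        [1, 0, 0, 0, 0, 1],
        [1, 1, 1, 1, 1, 1],
    ]
    DIRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # N, S, W, E

    def sense(r, c):
        # "Reach out" in the four cardinal directions: count how many
        # open cells there are before hitting a wall.
        readings = []
        for dr, dc in DIRS:
            d = 0
            while MAZE[r + dr * (d + 1)][c + dc * (d + 1)] == 0:
                d += 1
            readings.append(d)
        return readings

    def weight(particle, observation):
        # Score a hypothesised position by how closely its predicted
        # readings match what the robot actually felt.
        predicted = sense(*particle)
        err = sum((p - o) ** 2 for p, o in zip(predicted, observation))
        return 1.0 / (1.0 + err)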

What was bugging me was the use of a maze. Mazes are meant to be "solved", and this doesn't solve the maze -- in the sense of reaching a goal point -- but only allows the robot to localise itself within it. Given that the robot necessarily has an internalisation of the maze (a map), it might have been nice to see it solve the maze while getting its bearings, rather than just wiggling about seemingly at random.

[1] https://news.ycombinator.com/item?id=21878928

[2] http://www.claudeonthe.net/ai/particle_filter/particle_filte...


Thank you


I'm sure this is something cool, but I don't understand it.


The problem being solved here is: "I'm somewhere in a building, a building I know the exact layout of. Where am I in the building?"

The robot in this case can ask, "Roughly how far away is the wall in front of me, to either side of me, and behind me?" Given that information, it can rule out some places it might be and identify some more likely ones. Then it moves a bit and asks, "OK, now how far away are the walls?"

You could imagine doing this in your house, blindfolded. You reach out and touch a wall in front of you -- OK, you know you're not in the middle of a room, unless it's a small room. There's no wall within reach to either side, so you're not in a corner. You keep moving and measuring, and you can work out where you probably are. If the walls to either side and straight ahead are far away, then you step forward and the walls to either side are suddenly touchable, you know you've just crossed from a room into a corridor. There's only one place in my house where that can happen, so if it did I'd know exactly where I was.
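
To make that loop concrete, here's a hedged sketch of one filter step, reusing the toy sense()/weight() helpers from the comment above (my names, not the project's): measure, resample the guesses in proportion to how well they explain the measurement, then apply the robot's move to every surviving guess. Any guess that would walk through a wall gets discarded, so the maze itself prunes bad hypotheses.

    def resample(particles, weights):
        # Keep likely hypotheses, drop unlikely ones: draw a new set,
        # with replacement, in proportion to each particle's weight.
        return random.choices(particles, weights=weights, k=len(particles))

    def step(particles, true_pos):
        # 1. Measure from the robot's true (but unknown to it) position.
        obs = sense(*true_pos)
        # 2. Re-weight every hypothesis against the measurement, resample.
        ws = [weight(p, obs) for p in particles]
        particles = resample(particles, ws)
        # 3. The robot wanders to a random open neighbour; every
        #    hypothesis makes the same move, and guesses that would
        #    step into a wall are dropped.
        dr, dc = random.choice(DIRS)
        if MAZE[true_pos[0] + dr][true_pos[1] + dc] == 0:
            true_pos = (true_pos[0] + dr, true_pos[1] + dc)
            moved = [(r + dr, c + dc) for r, c in particles
                     if MAZE[r + dr][c + dc] == 0]
            particles = moved or particles
        return particles, true_pos

    # Scatter guesses over every open cell, then iterate; the cloud
    # should collapse around the true position within a few steps.
    open_cells = [(r, c) for r, row in enumerate(MAZE)
                  for c, v in enumerate(row) if v == 0]
    particles = random.choices(open_cells, k=300)
    true_pos = (3, 1)
    for _ in range(15):
        particles, true_pos = step(particles, true_pos)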

This is localisation. There's a harder problem called SLAM (simultaneous localisation and mapping), where you don't know what the world looks like or where you are, and you're trying to build up a model of the world and your place in it at the same time.


This might help with imagining what it's useful for in real robot localization: https://veterobot.com/demos.html . The video for Unit 3 nicely illustrates how quickly (in just a few steps) the particles group up around the real location, using just four ultrasonic range finders.



