One trick I've used quite a bit in games I've written is to do a breadth-first search of the entire playfield, with no early termination, resulting in data for how to get from any tile to the destination (a rough sketch follows the list below).
This has a few nice advantages:
* Breadth first is trivially broken up to iterate over multiple frames, amortizing the cost of visiting each tile
* It keeps the worst case bounded as the number of enemies scales up, as long as the number of destinations stays low - one search serves every enemy heading to the same destination.
* The implementation in general is very simple
* You can still terminate early if you keep track of the farthest distance the pathfinding layer needs to satisfy.
* No pathological worst-case situations where a playfield becomes very expensive to pathfind - an open field is the worst case.
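Here's a minimal sketch of the idea in JavaScript, under my own assumptions (the `grid` object with `width`, `height`, and `isWalkable` is hypothetical, and this is not Robokill's actual code): a plain BFS flood fill from the destination that records each tile's distance to it, after which every enemy just steps toward any neighbour with a smaller distance.

```javascript
// Sketch only: flood-fill the whole playfield from the destination,
// recording distance-to-goal per tile. Enemies then move toward any
// neighbour with a smaller recorded distance.
function buildDistanceField(grid, goalX, goalY) {
  const w = grid.width, h = grid.height;
  const dist = new Array(w * h).fill(Infinity);
  const queue = [[goalX, goalY]];       // FIFO frontier for the BFS
  dist[goalY * w + goalX] = 0;

  while (queue.length > 0) {
    const [x, y] = queue.shift();       // O(n) shift is fine for a sketch
    const d = dist[y * w + x];
    // 4-connected neighbours; add diagonals if the game allows them.
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nx = x + dx, ny = y + dy;
      if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
      if (!grid.isWalkable(nx, ny)) continue;
      if (dist[ny * w + nx] !== Infinity) continue;   // already visited
      dist[ny * w + nx] = d + 1;
      queue.push([nx, ny]);
    }
  }
  return dist; // dist[y * w + x] = steps from (x, y) to the goal
}

// An enemy at (ex, ey) steps to the neighbour with the lowest distance.
function nextStep(dist, grid, ex, ey) {
  let best = null, bestD = dist[ey * grid.width + ex];
  for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
    const nx = ex + dx, ny = ey + dy;
    if (nx < 0 || ny < 0 || nx >= grid.width || ny >= grid.height) continue;
    const d = dist[ny * grid.width + nx];
    if (d < bestD) { bestD = d; best = [nx, ny]; }
  }
  return best; // null means the enemy is at the goal or cut off from it
}
```

The distance field is shared by every enemy heading to the same destination, which is where the scaling win over per-enemy A* comes from.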
I first used this in Robokill, a Flash game which often had 20+ enemies on the same screen tracking the player. I estimated at the time that it cost about as much as doing A* for ~5 enemies.
In games, the worst (common) case is basically all that matters - a constant 60fps is significantly better than 100fps dropping to 30fps occasionally.
Yes, though I didn't know the proper name for it until now.
I knew it wasn't a novel technique (it's too basic for that), but had not seen it discussed before on HN, so I thought I'd share my experiences.
Edit: Reading the linked article, their particular algorithm is not incremental and not breadth-first, which in my view leaves a lot of the value I was talking about above on the table.
It also uses a list lookup when marking the tiles - please don't do this in real-world code! Though it was probably done that way to simplify the blog post code.
How often did you need to do the pathfinding in Robokill? I've tried making a game where I did pathfinding with A* every few seconds with 20 enemies tracking a player and it locked up the browser for like a second every time.
Using the technique I outlined, the pathfinding is run continuously, stepping once per frame, processing tiles further and further out. When the player changes tile, it resets.
This results in a relatively small per-frame cost with a constant worst case (for a 21x21 tile grid, at most 80 tiles processed in one frame, i.e. 21x4 - 4, the number of tiles in the outside border).
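To make the per-frame budget concrete, here's a rough sketch of how the incremental version could be structured (my own names and layout, not the Robokill source): the fill keeps its frontier between frames, expands one BFS layer per call, and is reset whenever the player changes tile.

```javascript
// Rough sketch (not the Robokill source): the flood fill keeps its frontier
// between frames and expands one BFS layer per frame.
class IncrementalFloodFill {
  constructor(grid) {
    this.grid = grid;
    this.dist = new Array(grid.width * grid.height).fill(Infinity);
    this.frontier = [];
  }

  // Call whenever the player moves to a new tile: restart from scratch.
  reset(goalX, goalY) {
    this.dist.fill(Infinity);
    this.dist[goalY * this.grid.width + goalX] = 0;
    this.frontier = [[goalX, goalY]];
  }

  // Call once per frame: expand the current layer by one ring of tiles,
  // which stays within the ~80-tile bound described above for a 21x21 grid.
  step() {
    const w = this.grid.width, h = this.grid.height;
    const next = [];
    for (const [x, y] of this.frontier) {
      const d = this.dist[y * w + x];
      for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
        const nx = x + dx, ny = y + dy;
        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
        if (!this.grid.isWalkable(nx, ny)) continue;
        if (this.dist[ny * w + nx] !== Infinity) continue;
        this.dist[ny * w + nx] = d + 1;
        next.push([nx, ny]);
      }
    }
    this.frontier = next; // empty frontier means the whole field is done
  }
}
```

Per frame you'd call `step()` once, and call `reset(playerTileX, playerTileY)` whenever the player moves to a new tile.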
As outlined above, this is exactly what you want in a game, because it makes the sharp edge cases you're describing impossible. Throw a hundred enemies in there in a pathological maze environment, moving the player around continuously, breaking walls, and it (the pathfinding aspect) will run just fine.
Try to do that with A* in the browser, and my crystal ball predicts the addition of a well-worn copy of Web Workers for Dummies to a bookshelf near you :)
One gotcha is that if you don't stop moving, the half of the screen that's too far away from the player won't get updated before the next reset, but enemies in those squares just use the out-of-date information. In Robokill, I couldn't even notice the error this causes when I was testing.
Very nice. If the author is the submitter or sees this, could you please provide some details on what kind of libraries and techniques have gone into creating this? A blog post or something would be great.
Yes - I found this a couple months ago when I needed to do a pathfinding feature for a client, and I thought the same thing. It turns out browsers can do A* fast enough for live pathfinding on a fairly complex graph during mouse drag, which I wasn't sure about when I started.
In any case, playing with this visualization helped me poke holes in some of the simplified approaches I had in mind.
In my first bit of play, I just tried to find cases where Manhattan lost to the others. It seems like Manhattan is not as good when faced with a plausible path only to be thwarted at the end - is this right?
Can you show a few cases where Manhattan loses by a landslide?
Manhattan is a heuristic for measuring distance, not an algorithm for finding a path, so I wouldn't really say it "loses", rather it "is less applicable". Manhattan, Euclidean, and Chebyshev are all just different ways of measuring distance (https://en.wikipedia.org/wiki/Taxicab_geometry, https://en.wikipedia.org/wiki/Euclidean_distance, https://en.wikipedia.org/wiki/Chebyshev_distance). The search algorithm uses the heuristic to estimate how promising each possible path is (how close each path is to the goal according to that definition of distance, ignoring walls), and prioritizes the most promising paths.
Based on those definitions, I would guess that "Chebyshev" distance is almost always the most accurate heuristic to use when you check "Allow Diagonal", because it will always be perfectly accurate if there are no walls in the way, unlike the others. Similarly, when "Allow Diagonal" is unchecked, "Manhattan" distance is the most accurate. And "Euclidean" distance is always the least accurate in this demo, because paths here are always constrained to 45° or 90° turns so that they stay on the grid, whereas Euclidean distance assumes it is possible to move at arbitrary angles, like 18.2° off due east.
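For reference, the three heuristics boil down to one-liners; this is just a generic sketch, not the demo's actual code:

```javascript
// Distance heuristics between a node (x, y) and the goal (gx, gy).
function manhattan(x, y, gx, gy) {
  return Math.abs(x - gx) + Math.abs(y - gy);          // 4-way movement
}
function chebyshev(x, y, gx, gy) {
  return Math.max(Math.abs(x - gx), Math.abs(y - gy)); // 8-way (diagonal) movement
}
function euclidean(x, y, gx, gy) {
  return Math.hypot(x - gx, y - gy);                   // straight-line distance
}
```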
Because Manhattan distance isn't an admissible heuristic for an 8-connected grid, I think it breaks A*'s guarantee of optimality. I don't have an example, but it might be a place to start looking.
Of course it's a gray link... but wow, major enhancements since ~1 year ago when I was on my pathfinding binge. Good stuff! Never did implement full jump-point optimization, though I got halfway there by manipulating queue priorities.
The above pathfinder is much faster with a binary heap (https://github.com/bgrins/javascript-astar), and it can be heavily optimized if you know your engine. With SpiderMonkey I get a fixed cost of around 1ms for initialization, and it checks nearly 2000 nodes in 1ms on a 3GHz Core Duo with a relatively costly Euclidean heuristic. So the worst case on a 40x50 map is ~2ms. If the worst case can be avoided up front, you'll always get a response within 2ms on most maps, even with a path 2000 nodes long. It is amazing what one can do with JS nowadays.
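For anyone wondering why the heap matters: A*'s open set constantly needs "remove the node with the lowest f-score", and a binary min-heap makes that O(log n) instead of scanning or re-sorting an array. A minimal sketch of such a heap (my own code, not taken from that repo):

```javascript
// Minimal binary min-heap keyed by a score function - the kind of open-set
// structure that speeds up A*. (A sketch, not the javascript-astar code.)
class BinaryHeap {
  constructor(score) { this.items = []; this.score = score; }

  push(item) {
    this.items.push(item);
    this._bubbleUp(this.items.length - 1);
  }

  pop() {
    const top = this.items[0];          // element with the lowest score
    const last = this.items.pop();
    if (this.items.length > 0) {
      this.items[0] = last;             // move the last element to the root
      this._sinkDown(0);                // and restore the heap property
    }
    return top;
  }

  _bubbleUp(i) {
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.score(this.items[i]) >= this.score(this.items[parent])) break;
      [this.items[i], this.items[parent]] = [this.items[parent], this.items[i]];
      i = parent;
    }
  }

  _sinkDown(i) {
    const n = this.items.length;
    while (true) {
      let smallest = i;
      const left = 2 * i + 1, right = 2 * i + 2;
      if (left < n && this.score(this.items[left]) < this.score(this.items[smallest])) smallest = left;
      if (right < n && this.score(this.items[right]) < this.score(this.items[smallest])) smallest = right;
      if (smallest === i) break;
      [this.items[i], this.items[smallest]] = [this.items[smallest], this.items[i]];
      i = smallest;
    }
  }
}

// Usage in A*: const open = new BinaryHeap(node => node.f); open.push(start);
```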
None of these appear to work in an intuitive fashion. Are there algorithms that better resemble biological strategies? If I create a large walled area, I expect an entity to explore in one direction with a preference for following external walls, to miss areas, and to fail completely sometimes.
That sounds like a depth-first search[1] (which wasn't exactly included here). In DFS you'll go all the way out one way until you hit a dead end or the goal, then go back to where you last had a choice and go out a new way, etc. (a bit like the mouse did).
It is worth mentioning, though, that this approach probably isn't so good if the level is very large, since a BFS is much more likely to catch something near the player first.
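For concreteness, a rough sketch of that DFS backtracking behaviour on a grid (a generic illustration with a hypothetical `grid` object, not part of the linked demo):

```javascript
// Depth-first search on a grid: commit to one direction, and on a dead end
// the recursion unwinds back to the last point where there was a choice.
function dfs(grid, x, y, goalX, goalY, visited = new Set()) {
  if (x === goalX && y === goalY) return [[x, y]];    // reached the goal
  visited.add(y * grid.width + x);
  for (const [dx, dy] of [[1, 0], [0, 1], [-1, 0], [0, -1]]) {
    const nx = x + dx, ny = y + dy;
    if (nx < 0 || ny < 0 || nx >= grid.width || ny >= grid.height) continue;
    if (!grid.isWalkable(nx, ny) || visited.has(ny * grid.width + nx)) continue;
    const rest = dfs(grid, nx, ny, goalX, goalY, visited);
    if (rest) return [[x, y], ...rest];               // goal found down this branch
  }
  return null; // dead end: backtrack to the previous choice point
}
```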