Ricochet Robots Solver (kevincox.ca)
43 points by todsacerdoti on Sept 7, 2023 | 18 comments



Great game. Sadly, out of print at the moment but used copies are relatively easy to find. Definitely not for everyone, but most devs I've played it with have really enjoyed it.

Fortunately, another dev friendly game, the classic Robo Rally was just republished with improved rules and components compared to the last edition. It's similar to Ricochet Robots but different enough that I have both in my collection.


Fun game. I own it. The problem is that the skill level needed to compete is very high, and people who are not good at it get shut out. I highly recommend it as a casual thing, and it's also great for those who only observe.


It can be bad when someone super experienced plays against a first-time player, but I've found that you can make it work OK.

1. Let the new player play a game on their own first. This lets them get a feel for finding solutions at all. (If you can start two new players at once, it is also more fun to do that instead.)

2. Play together, but have the new player win ties. Normally whoever says the lower number first wins; this way, if they can match your number within the timer, they still win. There are almost always a handful of "easy" puzzles per game, so this ensures they get a handful of tokens each game and have some fun.

It isn't perfect but I think for most people who would really love the game this is enough to keep them interested while they ramp up to speed.


New version of Robo Rally? With expansion sets soon, too, looks like?

You made my day!


Yep, two map expansions came out at GenCon and more expansions are planned for next year. Good time to be a Robo Rally fan!


Such a treat to see this!

I only played Ricochet Robots once, when a colleague who was a board game buff brought it to a regular after-hours board game night in one of the conference rooms. This was ages ago.

We were all developers, and the first thought so many of us had after the rules were explained was that it would be a lot of fun to code up a solver.

So glad that someone worked it out, what a pleasure to read!


I discovered this game at a board game convention earlier this year and quickly fell in love. After coming home and realizing it was fairly hard to get a copy these days, I decided to make my own web version of it so I could play with my friends. I also tried to implement a solver just to be able to check if the solutions we found were the best, but I didn't spend much time on it, and I never went past a breadth-first search that can really only check about 5 levels deep in a realistic amount of time. This might inspire me to revisit it and try to improve my solver. So thank you for posting this!
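
For a sense of the shape of that kind of naive solver, here is a rough Python sketch (illustrative only, not real code from my version, and with a simplified wall representation: walls[(x, y)] is just the set of directions you cannot leave that cell in). Robots slide until they hit the board edge, a wall, or another robot; with up to four robots and four directions there can be 16 moves per state, so even with a visited set the frontier blows up after a handful of levels.

    from collections import deque

    SIZE = 16
    DIRS = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}

    def slide(pos, direction, occupied, walls):
        """Slide one robot from pos until the board edge, a wall, or a robot."""
        x, y = pos
        dx, dy = DIRS[direction]
        while True:
            if direction in walls.get((x, y), ()):
                break
            nx, ny = x + dx, y + dy
            if not (0 <= nx < SIZE and 0 <= ny < SIZE) or (nx, ny) in occupied:
                break
            x, y = nx, ny
        return (x, y)

    def solve(robots, target, goal, walls, max_depth=5):
        """robots: tuple of (x, y) positions; target: index of the goal robot."""
        start = tuple(robots)
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            state, path = queue.popleft()
            if state[target] == goal:
                return path                  # list of (robot index, direction) moves
            if len(path) >= max_depth:
                continue
            occupied = set(state)
            for i, pos in enumerate(state):
                for d in DIRS:
                    new_pos = slide(pos, d, occupied - {pos}, walls)
                    if new_pos == pos:
                        continue
                    new_state = state[:i] + (new_pos,) + state[i + 1:]
                    if new_state not in seen:
                        seen.add(new_state)
                        queue.append((new_state, path + [(i, d)]))
        return None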


Ah this is great.

I wrote my own solver in Python+Numba. You've probably seen Robot Reboot https://www.robotreboot.com/challenge. I had a parser that could take a screenshot of the daily puzzles and feed them into my engine. Your post might just give me the kick in the pants to put it up online.

Curious if you think this made you better at RR. I started out absolutely terrible, and actually got to the point where I could occasionally match the best scores on Reboot (but not their times, of course!).


I think it has made me better. At the very least it shows some techniques, like moving forward and back, and highlights some approaches that I missed.

I generally solve the puzzle "backwards", looking at how the robot can approach the goal. So if I miss a possible approach, it can definitely result in me missing a solution. Filling these gaps is valuable.

It also provides some intuition about how long solutions tend to be in different scenarios, which helps you decide how to look.

But overall I would say that curiosity is the main reason I use the solver. It is interesting to know whether we actually found the best solution, or how often we miss it.


The "Chrome warning" is obnoxious spam.


Especially since I see it on Vanadium, a FOSS privacy-enhanced fork.


Another cool solver for the same game: https://www.michaelfogleman.com/projects/ricochet-robot/


I wrote one of these to solve the daily challenge at the now-defunct <https://www.robotreboot.com/challenge>. Four robots, no mirrors. I did not bother with an A* search or robot sorting, because each daily challenge contains five different goals searched for from the same starting board configuration. Instead I search for all of them at once with the same breadth-first search. Even without those algorithmic optimizations, it can find a 17-move solution---like the following puzzle from Nov. 25th, 2022 (the "r" square is the goal for the red robot)---in about 30 seconds on my laptop using less than 400 MB of memory. I'm sure it would be faster on a faster machine.

    +---+-----------------+---------+
    |   |                 |         |
    |   +   +-+       +   +         |
    |         |       |             |
    |         +       +-+       +-+ |
    |                             | |
    | +                           + |
    | |                             |
    | +-+                 +         |
    |    R                |         |
    |         +-+       +-+       +-+
    |         |                     |
    +-+     + +             +-+     |
    |       |               |r      |
    |     +-+     +---+     +       |
    |             |   |             |
    |             |   |             |
    |             |   |             |
    | +-+     +-+ +---+     +-+     |
    |Y  |  G  |               |     |
    |   +     +               +   +-+
    |                               |
    +-+                             |
    |      B                        |
    |                 +-+           |
    |                 |             |
    |             +   +         +   |
    |             |             |   |
    |   +       +-+         +   +-+ |
    |   |                   |       |
    |   +-+     +         +-+   +   |
    |           |               |   |
    +-----------+---------------+---+
Solving this board required examining 30,427,896 unique states.
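
In rough Python form, the multi-goal search is just one BFS that records the depth at which each goal is first satisfied (names here are only illustrative; neighbors(state) yields every single-slide successor and each goal is a predicate on a configuration):

    from collections import deque

    def solve_all_goals(start, goals, neighbors):
        remaining = dict(goals)          # label -> predicate on a configuration
        depths = {}                      # label -> optimal number of moves
        seen = {start}
        frontier = deque([(start, 0)])
        while frontier and remaining:
            state, depth = frontier.popleft()
            for name in [n for n, hit in remaining.items() if hit(state)]:
                depths[name] = depth
                del remaining[name]
            for nxt in neighbors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
        return depths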

One of the things I found that improved the speed the most was simply reducing the memory usage.

The "visited states" test is just a 2**32 bit (2**27 byte) table indexed by the 32-bit combined robot positions. For most problems, most of the pages in that table will never even get mapped, but at worst the whole thing is 128 MB. There are probably clever ways to permute the indices of the robot positions to get better cache coherency in the table lookups, but I did not bother.
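
In Python-ish form that test is just a flat bit table keyed on the packed positions (illustrative names; each robot position is an 8-bit cell index, so four of them form the 32-bit key):

    visited = bytearray(2**32 // 8)      # one bit per possible 32-bit key

    def pack(positions):
        """positions: iterable of four cell indices in 0..255."""
        key = 0
        for p in positions:
            key = (key << 8) | p
        return key

    def test_and_set(key):
        """Mark key as visited; return True if it had been seen before."""
        byte, bit = key >> 3, 1 << (key & 7)
        already = bool(visited[byte] & bit)
        visited[byte] |= bit
        return already

(Note that a plain Python bytearray zero-fills the whole table up front; the never-gets-mapped behaviour relies on the allocator handing back untouched zero pages and only materializing the ones you actually write.)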

The state in the breadth-first search queue is just the 32-bit combined robot positions. No back-pointers are stored to recover the solution(s). Instead, a side-table keeps track of the index in the queue where the number of moves increases (since all entries are ordered by the number of moves required to get there), and upon reaching a goal the solution is recovered by back-tracking one move at a time and searching the relevant portion of the queue for a state plus a move that gets us to the next state.

This sounds slow (and it is! printing the solution now takes a non-negligible fraction of the runtime), but it is actually faster than storing back-pointers in the queue. The queue size grows exponentially with the number of moves¹, but you do not need to search for the last move in your solution, because you were just making it when you reached that goal. So you avoid having to read through a large portion of the queue just by skipping that last level, and you avoid having to write the extra data for the whole queue.

Assuming your back pointer would have been a 32-bit queue index, that cuts the data written in half. You also don't have to check the "visited states" table during back-tracking, so you avoid lots of cache pressure that was present when filling the queue the first time.
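
Sketched in Python (illustrative names: queue is the flat array of packed states, level_start[d] is the index where the depth-d entries begin, and neighbors(state) yields (next_state, move) for every legal slide), the reconstruction looks roughly like:

    def recover_solution(queue, level_start, pred_state, last_move, depth, neighbors):
        """pred_state, at depth - 1, was being expanded when last_move hit the goal."""
        path = [last_move]               # the final move is already known
        state = pred_state
        for d in range(depth - 2, -1, -1):
            # scan only the depth-d slice of the queue for a predecessor of `state`
            for i in range(level_start[d], level_start[d + 1]):
                candidate = queue[i]
                move = next((m for s, m in neighbors(candidate) if s == state), None)
                if move is not None:
                    path.append(move)
                    state = candidate
                    break
        path.reverse()
        return path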

¹: At least for long enough to reach any single robot goal... if you search for all reachable combinations of robot positions, eventually you run out of unvisited states and the number of newly reachable states starts shrinking again until it hits almost nothing, at somewhere approaching 100 moves. For a given board, roughly 2/3rds of the 3,937,437,000 possible combinations of robot positions are reachable, giving a maximum memory usage of about 10 GB.


> but at worst the whole thing is 128 MB.

I think your math is off a bit. 2^32 bytes is 4GiB so a bitmap would be 512MiB. This is probably feasible for 4 robots but IDK if I just want to rely on virtual memory for 5 robots as it would take 128TiB. It definitely wouldn't work in 32-bit WebAssembly. (Or is there a way to do external memory?) But maybe I can test it out on native to see if it is in fact worth it.

However, a downside of A* is that I need to store the depth in the visited map, as states may be found out of order. I also store the move that got there to allow quick reconstruction. (It is within the same byte as the depth.)

This is a pretty hot spot for performance though, so I'm always trying to think of a better way to manage the visited map. Using a hash map is twice as slow for large puzzles, so something custom is required.


> 2^32 bytes is 4GiB so a bitmap would be 512MiB.

Yes, sorry, 2**27 32-bit words, so 512 MB.

>However a downside of A* is that I need to store the depth in the visited map as they may be found out of order. I also store the move that got there to allow quick reconstruction. (It is within the same byte as the depth.)

There is a 1:1 correspondence between visited configurations and queue entries, so for total RAM usage it actually doesn't matter what information you're storing in which one. Storing the moved robot and previous robot position (10 or 11 bits) will be more efficient than storing a 32-bit queue index for the back-pointer, though.
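
For illustration, that packing could be as small as this (2 bits of robot index with four robots, 3 with five, plus an 8-bit previous cell index; names are mine):

    def pack_backref(robot, prev_pos):
        return (robot << 8) | prev_pos   # 10 bits for 4 robots, 11 for 5

    def unpack_backref(packed):
        return packed >> 8, packed & 0xFF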

I agree the bitmap is harder to scale for 5 robots, but 256 * 0.5 GB is just 128 GB, not terabytes, and there are machines with that much RAM. Sorting and deduplicating non-goal robots should reduce this to around 5.6 GB. The challenge is that the actual reachable states are quite dense (2/3rds of all possible states with 4 robots and no deduplication, and probably an even higher percentage with 5 robots and deduplication), so there is not actually a lot of room to make this smaller if you want to allow searching the full (~100 move) configuration space. That is not necessary to reach any single-robot goal, of course. The worst 4-robot single-goal solution I'm aware of is 27 moves. See this example:

    +---+-----------------+---------+
    |   |                 |         |
    |   + +-+         +   +         |
    |     |           |             |
    |     +           +-+       +-+ |
    |                             |G|
    |             +               + |
    |             |                 |
    | +         +-+       +         |
    | |                   |         |
    | +-+               +-+       +-+
    |                               |
    +-+     +-+             +-+     |
    |         |             |       |
    |         +   +---+     +       |
    |             |   |             |
    |             |   |             |
    |             |   |             |
    |     +-+     +---+         +-+ |
    |       |                   |   |
    |       +             +     +   |
    |                     |         |
    | +                   +-+     +-+
    | |                             |
    | +-+         +   +-+           |
    |             |    g|           |
    |           +-+     +           |
    |                               |
    +-+ +-+                     +   |
    |Y R|                       |   |
    |   +       +         +   +-+   |
    |B          |         |         |
    +-----------+---------+---------+
My solver on my laptop finds a solution in 67 seconds using 666 MB of RAM after examining 82,872,901 configurations, so just 2.1% of the possible configuration space. Only 316 MB is needed for the queue, so that means it was using ~350 MB for the bitmap, which is not particularly efficient (~35.4 bits of RAM mapped per visited state).

This example was found by Pieter Wuille, who wrote a reverse-state-space explorer (given a board layout and a goal, it finds the optimal number of moves to the goal for every initial robot position combination in about a minute). www.robotreboot.com allowed for 20,736 board configurations and 20 possible goal locations per board. With four robots there are 656,239,500 possible robot configurations after sorting/deduplication. Using rotational symmetry, and storing four bits per board/goal/configuration combination to say what robot to move and in which direction, he estimates you could store the solutions for all possible games in around 31 TB with about a year of CPU time. That won't fit in RAM, but it is certainly achievable.
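
The sorting/deduplication amounts to canonicalizing the interchangeable robots before a configuration is used as a key, roughly (illustrative Python):

    def canonical_key(goal_robot_pos, other_positions):
        """Positions are cell indices in 0..255; only the goal robot keeps its identity."""
        key = goal_robot_pos
        for p in sorted(other_positions):
            key = (key << 8) | p
        return key

Any permutation of the non-goal robots then maps to the same state.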

Another fun property of solutions to consider is what we term the solution "complexity", defined as the minimal number of times you must switch from moving one robot to moving a different one (plus one, since originally you are not moving any robot). The "must" there is important, since if two robots do not interact, you could just alternate moving between them, but the goal of complexity is to measure the amount of interaction. A related property is the "causality": the amount by which a solution's complexity exceeds the number of robots it moves.

Optimizing for minimum complexity is pretty straightforward, but of course the "fun" solutions are the ones with maximal complexity/causality...
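
For a fixed move ordering the switch count itself is trivial to compute, e.g.:

    def switch_count(moves):
        """moves: sequence of (robot_index, direction) pairs."""
        count = 0
        previous = None
        for robot, _ in moves:
            if robot != previous:    # counting the first move gives the "plus one"
                count += 1
                previous = robot
        return count

The complexity of a solution is then the minimum of that count over the reorderings of its moves that still work.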


> There is a 1:1 correspondence between visited configurations and queue entries, so for total RAM usage it actually doesn't matter what information you're storing in which one.

It does if you are allocating the visited-state map up front. The queue grows as you explore the states; as you approach every possible state it becomes 1:1 with the visited states, but until then it is smaller.

> but 256 * 0.5 GB is just 128 GB, not terabytes

You are right. My math was off this time. I can actually fit this on my desktop but WebAssembly is an important target for me and browsers only support 32 bit wasm. So I would probably need an alternative implementation there.

I converted your puzzles to my solver out of curiosity:

17: https://ricochetrobots.kevincox.ca/#s=EBBBCBECA4CBAAEAIQIBQI... 1.3s

27: https://ricochetrobots.kevincox.ca/#s=EBBBQCEQA0ABAkEAAQARII... 2.4s.

Thanks for these examples, they are good to add to my benchmarks. Right now the slowest puzzle I am aware of is this 20-move puzzle with mirrors: https://ricochetrobots.kevincox.ca/#s=EBAJIAEAEQQhAAEAQYAJAI... It takes about 5.4s.

Mirrors make it much harder because robots are not interchangeable. The slowest puzzle without mirrors I have is 25 moves: https://ricochetrobots.kevincox.ca/#s=EBAhQBEQBUABAwEAQQARII... It takes 4.5s.

All times were in the wasm build. The native build is faster, between 10 and 50% depending on the puzzle; I'm not sure why it varies so much.


Thanks for running these examples. If there were timings for the final algorithm in the article I missed them, so it was hard to tell how much A* was really speeding things up. You have convinced me that it is worth specializing for a particular goal vs. solving all goals simultaneously, at least when the number of goals is 5.


A friend of mine used their very fast solver to determine the number of moves required to solve a worst case configuration (and, of course, also to determine one of those worst case configurations). Pretty slick.



