Solving NP-hard puzzles with the oldest trick in the book (davidkoloski.me)
458 points by taintegral 4 days ago | 56 comments





Very nicely written. In addition, the example chosen was itself lovely to play with.

Explicit-state model checkers do this at scale. Readers may be interested in the internals of the TLA+ model checker, especially the encoding of the state and how it deals with disk.

Model Checking TLA+ Specifications

by Yuan Yu, Panagiotis Manolios, and Leslie Lamport (1999)

https://lamport.azurewebsites.net/pubs/yuanyu-model-checking...


Nice article! I used similar techniques to find solutions for a game called “DStar”… including the choice to write the solver in Rust. You can play the game and see the optimal solutions on the web (not tested on mobile):

https://www.moria.us/games/dstar/play

A state in this game is:

  enum Active {
    Ball,
    Block,
  }

  struct State {
    coins: u32,     // bitmask of which coins remain
    active: Active, // which player is active
    ball: Point,    // location of ball
    block: Point,   // location of block
  }
I thought about writing an A* solver for this, but a simple BFS found all the solutions quickly enough. With a single-threaded solver, each level could be solved in 40s or less. The longest solutions are around 100 moves long, and the entire set of 25 levels is solved with 2.5 minutes of CPU time.
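A minimal sketch of what such a BFS solver might look like (the types and transition function here are stand-ins, not the actual DStar internals): dedup visited states in a hash map, remember which move reached each state, and walk the parent chain back once the goal is found.

```rust
use std::collections::{HashMap, VecDeque};

// A minimal breadth-first search over an abstract puzzle state space.
// `S` is a state, `M` a move; `moves` enumerates successors, `is_goal`
// detects solved states. Returns a shortest move sequence, if any.
fn bfs_solve<S, M, FN, FG>(start: S, moves: FN, is_goal: FG) -> Option<Vec<M>>
where
    S: Clone + Eq + std::hash::Hash,
    M: Clone,
    FN: Fn(&S) -> Vec<(M, S)>,
    FG: Fn(&S) -> bool,
{
    // Map each discovered state to (predecessor, move that reached it).
    let mut parent: HashMap<S, Option<(S, M)>> = HashMap::new();
    parent.insert(start.clone(), None);
    let mut queue = VecDeque::from([start]);

    while let Some(state) = queue.pop_front() {
        if is_goal(&state) {
            // Walk the parent chain backwards to recover the move list.
            let mut path = Vec::new();
            let mut cur = state;
            while let Some((prev, mv)) = parent[&cur].clone() {
                path.push(mv);
                cur = prev;
            }
            path.reverse();
            return Some(path);
        }
        for (mv, next) in moves(&state) {
            // Only enqueue states we have never seen before.
            parent.entry(next.clone()).or_insert_with(|| {
                queue.push_back(next.clone());
                Some((state.clone(), mv.clone()))
            });
        }
    }
    None // goal unreachable
}

fn main() {
    // Toy usage: walk from 0 to 4 on a line, steps of +/-1 within [0, 4].
    let path = bfs_solve(
        0i32,
        |&s| {
            [s - 1, s + 1]
                .iter()
                .filter(|&&n| (0..=4).contains(&n))
                .map(|&n| (n - s, n)) // move = the delta applied
                .collect()
        },
        |&s| s == 4,
    );
    assert_eq!(path.unwrap().len(), 4);
}
```

Because BFS explores states in order of distance from the start, the first time it reaches a goal state the recovered path is optimal in move count.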

This is a good article. It deserves a lot more attention than it got.

Thank you! As long as some people got to see it, that's enough for me. :)

We'll put it in the second-chance pool (https://news.ycombinator.com/pool, explained at https://news.ycombinator.com/item?id=26998308), so it will get a random placement on HN's front page.

p.s. if you don't mind, could you please put your email address in your profile? That way we can send you a repost invite in the future, which is the way we do this if the post is older than a few days. And even if it's not that old, we sometimes still email a heads-up.


The frequency with which that workaround is needed tells you how much valuable content slips under the radar of the HN hivemind. If a submission 1) is not about FAANG, entrepreneurship, the programming language du jour, current events, or HN's idols, or 2) is posted from the wrong time zone, it's often dead on arrival. I mean, there are counterexamples on the front page, but they're only the tip of the iceberg compared to what you can find in /new after filtering out all the spam and fluff.

HN implicitly positions itself as "the smarter Reddit" but in my experience most subreddits of value don't have strong time zone bias. Anything that doesn't force me to post in the "SV programmers are slacking off" time window and compete for attention with a bazillion other posts would be welcome.


You're certainly right that the submission stream here includes a lot of gems that get overlooked, and that the second-chance pool is a workaround for that—and far from a complete solution. But I think you're overinterpreting the reasons for this ("not about FAANG", "wrong time zone", "SV slacking off", etc.) - one could come up with all sorts of possible such reasons and without data they're all basically just-so stories.

In the absence of specific evidence about specific factors, the simplest explanation is that it's just the way the medium works. By "the medium" I mean the large open internet forum, which HN is an instance of. Stuff routinely gets overlooked. To do something about this, we need countervailing mechanisms. The second-chance pool is the most successful one we've tried so far. I still want to extend the review process to the community at large, and I'm still not sure quite how to do that.


As a long-time HNer, I concur with dang.

This is a tech/business site operated by a startup incubator. The things in the first category (entrepreneurship, new/popular technologies, prominent persons) come with the territory. They are the subjects that the primary audience wants (and has historically wanted) to discuss and keep apprised of.

What I find frustrating is the influx of people who come to a tech/business site with the primary goal of arguing politics. By my recollection, this started to get bad five(ish) years ago, and got out of control with the onset of the pandemic.


Arguments about HN getting too political go back basically to the start of the site.

https://news.ycombinator.com/item?id=17014869

Perceptions about this get distorted by hindsight bias a lot, or whatever the bias is that makes it feel like things are always getting worse. From my perspective the mix is not so different than it used to be, and most of the differences have to do with the ocean we're all swimming in (i.e. the world at large) rather than HN itself.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


Done! Thanks for the boost, I appreciate it very much. :)

This is the kind of gem that keeps me coming back to HN. Thanks!

It is a lovely piece, though I had the advantage of knowing Ricochet Robots already (which I immediately started thinking about solving).

This reminds me of the problems we did 15-20 years ago at IOI or ACM ICPC. We did these in pure C back then, sometimes C++.

I would have kept the states in a different way. Instead of making a vector/array of actors, I would make a pair of bitvectors the size of the grid. 1 is set if there is a blue (resp. red) actor at that position. No sorting is needed and it seems that for more practical puzzles this gives smaller state. All move operations are still easy to implement.
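A sketch of that representation (hypothetical names, assuming the grid has at most 64 cells so each color fits in a `u64`): one bit per cell per color, with moves as a couple of bitwise operations.

```rust
// Hypothetical bitboard state for a grid of up to 64 cells: one u64 per
// actor color, bit i set when an actor of that color occupies cell i
// (i = y * width + x). Equal sets of actors compare equal with no
// sorting needed, and the state is a fixed 16 bytes.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct BitState {
    blue: u64,
    red: u64,
}

impl BitState {
    fn occupied(&self) -> u64 {
        self.blue | self.red
    }

    // Move the blue actor at cell `from` to cell `to`, if `from` holds a
    // blue actor and `to` is free. Returns None otherwise.
    fn move_blue(&self, from: u8, to: u8) -> Option<BitState> {
        let (src, dst) = (1u64 << from, 1u64 << to);
        if self.blue & src == 0 || self.occupied() & dst != 0 {
            return None;
        }
        Some(BitState { blue: (self.blue ^ src) | dst, red: self.red })
    }
}

fn main() {
    let s = BitState { blue: 0b0001, red: 0b1000 };
    let moved = s.move_blue(0, 1).unwrap();
    assert_eq!(moved.blue, 0b0010);
    assert!(moved.move_blue(1, 3).is_none()); // cell 3 is occupied by red
}
```

The trade-off the article's author notes below applies: this scales with grid size rather than actor count, which is great for dense boards and wasteful for sparse ones.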


That would definitely work, and I’d be interested in the performance impact. This was written so that the state size would scale with the number of actors rather than the size of the grid. There is a degenerate case where a massive mostly empty grid becomes difficult not only to store in memory, but also to transition on move. The transition function would take time proportional to the size of the grid rather than the number of actors.

Very well written. Unfortunately I already tried all these kinds of tricks in some competitions, but others were still able to beat my code. Are there additional resources for improving it even further?

Cache + avoid dynamic allocations like the plague. For example, high-perf search algos implement reversible transitions: instead of creating & pushing new states around all the time, modify just one state, in-place. And then apply the same transformation in reverse when backtracking.

If you design your data structures well – to reflect the required transitions and query operations, rather than what the problem looks like to a human when drawn on a piece of paper – the forward/backward transition is nearly a no-op. Just some binary bit fiddling over data that's already in a CPU cache. And there are NO DYNAMIC ALLOCATIONS at all. Your search will fly!
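A minimal sketch of the make/unmake idea (the board and move rules are illustrative, not any particular game): one mutable board, and each move returns a cheap undo token instead of a cloned state.

```rust
// Sketch of reversible transitions: mutate one board in place and record
// just enough information to reverse the move when backtracking.
struct Board {
    cells: [u8; 16], // 0 = empty, 1 = piece
}

// Everything needed to reverse a move. No heap allocation involved.
struct Undo {
    from: usize,
    to: usize,
}

impl Board {
    // Apply a move in place and return its undo token.
    fn make(&mut self, from: usize, to: usize) -> Undo {
        self.cells[to] = self.cells[from];
        self.cells[from] = 0;
        Undo { from, to }
    }

    // Reverse a move exactly, restoring the previous board.
    fn unmake(&mut self, undo: Undo) {
        self.cells[undo.from] = self.cells[undo.to];
        self.cells[undo.to] = 0;
    }
}

fn main() {
    let mut b = Board { cells: [0; 16] };
    b.cells[3] = 1;
    let before = b.cells;
    let undo = b.make(3, 7); // piece slides from cell 3 to cell 7
    assert_eq!(b.cells[7], 1);
    b.unmake(undo); // restore in place on backtrack
    assert_eq!(b.cells, before);
}
```

In a depth-first search, `make` is called before the recursive call and `unmake` after it returns, so the whole search runs on a single board with zero allocation per node.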

The OP also mentions another great "caching" technique, under "Entropy reduction". This is really hard, but basically: try to find symmetries in the search space which allow you to prune away entire subspaces a priori, without searching at all. Often it'll be something like "rotating by 90 degrees leads to the same position", mirror positions, invariance to color, time… the symmetry types and their runtime benefits are problem-specific, so you need to sit and think hard about what makes a solution unique.

In the limit, you may be lucky enough to prune away so much of the solution space that there's only a single state left. Congratulations: you've solved the problem analytically :)
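One common way to exploit such symmetries is canonicalization: before inserting a state into the visited set, replace it with a canonical representative of its symmetry class. A sketch (names illustrative; this assumes the puzzle itself — walls, goals — is invariant under 90-degree rotation, which won't hold for every board):

```rust
// Rotate a set of actor positions a quarter turn on an n x n board:
// (x, y) -> (n - 1 - y, x).
fn rotate90(positions: &[(u8, u8)], n: u8) -> Vec<(u8, u8)> {
    positions.iter().map(|&(x, y)| (n - 1 - y, x)).collect()
}

// Canonical form: the lexicographically smallest of the four rotations,
// with actors sorted (their order within a state is irrelevant).
fn canonical(positions: &[(u8, u8)], n: u8) -> Vec<(u8, u8)> {
    let mut best: Option<Vec<(u8, u8)>> = None;
    let mut cur = positions.to_vec();
    for _ in 0..4 {
        let mut sorted = cur.clone();
        sorted.sort();
        if best.as_ref().map_or(true, |b| sorted < *b) {
            best = Some(sorted);
        }
        cur = rotate90(&cur, n);
    }
    best.unwrap()
}

fn main() {
    // Two states that are 90-degree rotations of each other on a 4x4
    // board collapse to the same canonical key, so the visited set
    // stores (and the search explores) only one of them.
    let a = vec![(0u8, 0u8), (1, 2)];
    let b = rotate90(&a, 4);
    assert_eq!(canonical(&a, 4), canonical(&b, 4));
}
```

Using `canonical(...)` as the hash key shrinks the effective state space by up to the size of the symmetry group (here, 4x).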


Depending on the type of puzzle, it might be possible to work backwards. For example here, if you can compute reverse-transitions you could run one or two steps of that, and put all states from which the goal can be reached in two moves into a HashMap. Then you run the forward search and stop as soon as you hit anything in that HashMap. In theory, you can reduce the runtime to find an n-move solution from O(bf^n) to O(bf^{n/2}), at the cost of more memory use. This can also be combined with A*.
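A sketch of the precomputation half of that meet-in-the-middle idea (transition functions here are toy stand-ins for a real puzzle's reverse moves):

```rust
use std::collections::{HashSet, VecDeque};

// Collect every state within `depth` reverse moves of the goal. The
// forward search then stops as soon as it reaches this set, trading
// memory for the last (and widest) levels of the forward frontier.
fn backward_reachable<S, F>(goal: S, rev_moves: F, depth: usize) -> HashSet<S>
where
    S: Clone + Eq + std::hash::Hash,
    F: Fn(&S) -> Vec<S>,
{
    let mut seen = HashSet::from([goal.clone()]);
    let mut frontier = vec![goal];
    for _ in 0..depth {
        let mut next = Vec::new();
        for s in &frontier {
            for p in rev_moves(s) {
                if seen.insert(p.clone()) {
                    next.push(p);
                }
            }
        }
        frontier = next;
    }
    seen
}

fn main() {
    // Toy domain: states are integers, a move adds or subtracts 1, the
    // goal is 10. Reverse moves happen to look the same here.
    let near_goal = backward_reachable(10i32, |&s| vec![s - 1, s + 1], 2);
    assert!(near_goal.contains(&8)); // two reverse moves from the goal

    // Forward BFS from 0 stops as soon as it touches the precomputed
    // set, instead of expanding all the way to the goal itself.
    let mut seen = HashSet::from([0i32]);
    let mut queue = VecDeque::from([(0i32, 0usize)]);
    let mut total = None;
    while let Some((s, d)) = queue.pop_front() {
        if near_goal.contains(&s) {
            total = Some(d + (10 - s) as usize); // remaining toy distance
            break;
        }
        for n in [s - 1, s + 1] {
            if seen.insert(n) {
                queue.push_back((n, d + 1));
            }
        }
    }
    assert_eq!(total, Some(10));
}
```

In a real solver the backward set would also record the move sequence to the goal, so the two halves of the solution can be stitched together.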

Low-level optimization can be worth it:

* You can try to pack the game state into integers and use bitwise operations. An 8×8 board can be stored as a 64 bit vector, so a `u64`. If you know the edges of the board are never occupied, then moving around can be as simple as a bit shift (probably not for this game).

* A smaller state representation also means that HashMap lookups will be faster.

* Instead of using a pair of integers to represent a position, use a single integer and save a multiplication for every lookup into a grid.

* Add a ring of impassible cells around the board, instead of checking for the edges of the board each time.
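The last two bullets combine nicely. A sketch (board layout and rules illustrative): store an 8×8 playing area in a 10×10 one-dimensional array whose outer ring is permanently blocked, so a slide move is a bare loop with no bounds check and no x/y arithmetic.

```rust
const W: usize = 10; // 8 playable columns + 2 sentinel columns

// Build the padded board: the outer ring is permanently occupied, so
// moves can never run off the edge.
fn padded_board() -> [u8; W * W] {
    let mut cells = [0u8; W * W]; // 0 = empty, nonzero = blocked
    for i in 0..W {
        cells[i] = 1;               // top sentinel row
        cells[(W - 1) * W + i] = 1; // bottom sentinel row
        cells[i * W] = 1;           // left sentinel column
        cells[i * W + W - 1] = 1;   // right sentinel column
    }
    cells
}

// Directions as single-integer offsets: up, down, left, right.
const DIRS: [isize; 4] = [-(W as isize), W as isize, -1, 1];

// Slide from `pos` until blocked, Ricochet-Robots style. No (x, y)
// pair, no multiplication per step, no explicit edge test.
fn slide(cells: &[u8], mut pos: usize, dir: isize) -> usize {
    loop {
        let next = (pos as isize + dir) as usize;
        if cells[next] != 0 {
            return pos; // hit a wall, sentinel, or another piece
        }
        pos = next;
    }
}

fn main() {
    let cells = padded_board();
    let start = 1 * W + 1; // top-left playable cell
    // Sliding right stops just before the right sentinel column.
    assert_eq!(slide(&cells, start, DIRS[3]), 1 * W + 8);
}
```

The same single-index cell numbering doubles as the bit index for the `u64` bitboard trick above.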


This is a great idea, and the low-level optimization tips are all excellent ones I have used in the past. I want to talk a little bit more about using bidirectional A* though, because I think it's very interesting. It's a great strategy in general, but this may be a case where it doesn't do as well.

Working backwards for this particular puzzle is very difficult because on each turn an actor may or may not move. This effectively increases the branching factor from 4 (one for each direction) to 4 * 2^n (for each of four directions, each actor may or may not have moved). In practice it would be lower than that upper bound, but it could still be significantly higher than the forward branching factor. A nice visualization for this is to think of your start and end states as points in space, and your A* searches as cones emitted from one point and growing toward the other. The angle of each cone roughly corresponds to your branching factor, and when your cones meet each other or a point, the search is done. If your branching factor is the same forwards and backwards, you can travel through much less space by searching forwards and backwards simultaneously. However, if your backwards branching factor is higher, then the cone from the end state will be much broader. This could travel through much more space than just doing a forward search.

This kind of behavior is very evocative of one-way functions, and makes me think it might be related to NP-hardness in some way. I'm really not qualified to prove these kinds of statements though. Maybe someone else can offer a more rigorous mathematical perspective?


For the quite similar puzzle Atomix, it also seems like the branching factor would be much higher for backward search because upper bounds are weaker, but you can show that on average the branching factor is actually the same [1]. I wonder if the same argument would work here.

[1] http://hueffner.de/falk/hueffner-studienarbeit-atomix.pdf Section 5.5


I wonder if it would make sense to do backward search even if the forward and backward branching factors are very different. For example, if the branching factor for forward search is 10 vs. 100 for backward search, wouldn't it make sense to do one step of backward search for every two steps of forward search? Or more generally, log(f)/log(b) backward search steps for every forward search step, where the forward branching factor is f and the backward branching factor is b?

This is all based on spontaneous intuitive ideas of mine and very superficial reasoning (and probably not even new).
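For what it's worth, the intuition checks out if the aim is to balance frontier sizes: requiring f^df ≈ b^db gives db/df = log(f)/log(b) backward steps per forward step. A quick numeric sanity check (purely illustrative arithmetic):

```rust
fn main() {
    // Forward branching factor 10, backward branching factor 100.
    let (f, b): (f64, f64) = (10.0, 100.0);

    // Backward steps per forward step needed to keep the two frontiers
    // the same size: log(f) / log(b).
    let ratio = f.ln() / b.ln();
    assert!((ratio - 0.5).abs() < 1e-12); // one backward per two forward

    // Concretely: 4 forward steps and 2 backward steps produce equally
    // large frontiers, 10^4 == 100^2.
    assert_eq!(f.powi(4), b.powi(2));
}
```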


Haven't looked at this particular game closely, but I had a similar experience to OP with an Othello AI competition back in college. There were four killer features that allowed me to win (and apparently to keep winning for years after I left, according to my prof):

1. Having a really good and creative heuristic. The one I used ended up taking 6 different ideas I had for heuristics and combining them together in a weighted average based on their performance in a randomized trial I conducted between the 6. My vague recollection is that slightly over-valuing 4-corners positions performs unexpectedly well in Othello, but there was a lot more to it than that. The actual effectiveness of various heuristics changes over time as the game goes on, though I never modeled or attempted to exploit this.

2. Knowing the exact memory and execution time bounds on my prof's machine and setting things up so that I could terminate when the time was ~5ms away from running out. We were limited to exactly 1 second per turn.

3. Caching. This was especially important in my case since I was technically using 6 different heuristics. I actually pre-generated a cache of the 100 most popular game states I encountered during my randomized trials, and this vastly increased the average depth I was able to explore in the allotted calculation time for one turn (1 second), especially during early game.

4. This is a continuation of 3, but it's super important if you have a turn-based game with execution time limits to not throw away your work between turns. If you can modify your search so that it is pausable / resumable (which you can do with some rather simple multi-threading), and then define a simple routine that lets you resume a previous search by quickly modifying the tree and then resuming instead of starting an entirely new one, you are going to explore much much more. This optimization even with a crappy heuristic is going to win 99% of the time against opponents who don't use it.

One thing I didn't explore but wish I had was trying to predict which heuristic in my library of heuristics is closest to that of my opponent, and then opting for a strategy that is most likely to beat that heuristic. This would look something like you calculate each turn what the most likely opponent heuristic is based on their moves so far, and then have a pre-computed table of each heuristic's "foil". Maybe this would only kick in after several turns. An even better version of this would probably be to just use the probabilities for each heuristic as the weighted importance of each respective foil, and use all the foils together in a weighted average.

Fun fact: this was all in Java at the time. I can only imagine what havoc one could wreak with this sort of approach in Rust.


Caltech, right? Were you the author of Flippy? There's a new generation of AlphaZero-style ML bots that's managed to finally dethrone it :)

I've been working on and off on a Rust Othello bot aiming to combine AlphaZero in the midgame with a fast endgame solver [1]. Probably the coolest feature that's currently finished is that valid moves are generated and executed with SIMD instructions, so searching a new position only takes a few clocks on a modern CPU.

[1] https://github.com/maxwells-daemons/reason


That's pretty cool. In my case this was Dickinson College actually.

Java is generally not too bad for competitive programming. Of course C++ will easily beat it, but generally only by a factor of 2x or so. Unlike Python, which can easily be 5-10x worse…

My experience from these kinds of tournaments as well. The order-of-magnitude difference meant that someone using Java/C++ could search one or two moves deeper than those using Python, winning even with suboptimal implementations/heuristics.

In my experience, the only way to make meaningful progress on performance from here on out is to:

- Squeeze out more entropy (for example, rotating states for symmetric boards)

- Make the heuristic function smarter (for example, by calculating the assignment bottleneck)

I wrote a Carcassonne solver once and found many little optimization opportunities by detecting fail states early for example. Avoiding dead ends saves a massive amount of time.


I think it would be super neat to also compare this to a generic SAT solver based solution.

or CP solver

Or an Integer Programming solver

The transitions can often be sped up by not cloning and modifying state, but instead keeping track of all transitions and rolling back. Not sure if it's doable here, since undoing a LEFT cannot know whether the box was already against the wall. So it might need some extra bookkeeping. But for instance when solving 8-queens, sudoku, or similar for huge grids, just walking back up the tree of transitions and undoing and reapplying things yields an immense speedup.

Edit: I see Radim mentions the same in a response to someone else.


How is this puzzle NP-hard? Genuine question, because if the number of pieces is bounded, it's solvable with a shortest path in a graph with a polynomial number of nodes.

Bounding the maximum number of actors is just an optimization for the cases we want to solve. Of course if you wanted to really solve any case you would need infinite space, and that’s not achievable either. If you desired, you can also just omit that particular optimization. :) I think a better way to phrase it is that we are writing a solver for a reasonable subset of inputs to an NP-hard problem.

Wouldn't the size of the graph grow something like $n^k / k!$ ? This is not polynomial in $k$.

lim (k -> ∞) (n^k / (k!)) is actually 0. k! grows faster than n^k for any constant n.

I should have been a bit more careful I suppose, but in practice you can't increase k indefinitely without also increasing n anyway. In particular n >= k, so at worst it would be something like k^k / k!.

I'm really not sure why the state space would only grow as n^k / k!, or what n and k are in this case. Adding more tiles or more actors would both dramatically increase the size of the state space for that input.

That question should go to the parent comment. I only corrected the assumption.

On your comment though, I don't think there's much "drama" in increasing the state space. Really it is just under 2 bits per cell times width times height. I would say it grows exponentially with the size of the board.


Very nice article. I've spent quite some time optimizing a solver for https://www.pathery.com/, but due to the insanely large search space in the larger puzzles it's not quite as easy. Can recommend it for anyone looking for a challenge.

Really great write-up, including some commentary on why A* needs its heuristic to be the way it is. A* is a generic algorithm, though, and given other problem knowledge (i.e. the flat game map), an alternative algorithm selection and testing process would be a great addition. Overall great though.
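The key property being alluded to is admissibility: A* is only guaranteed to return an optimal solution if the heuristic never overestimates the true remaining cost. A toy sketch on an open grid (everything here is illustrative, not the article's solver), using Manhattan distance, which can never overestimate on a 4-connected grid:

```rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

// Minimal A* on an open 8x8 grid with unit-cost 4-directional moves.
// Returns the length of a shortest path from `start` to `goal`.
fn astar(start: (i32, i32), goal: (i32, i32)) -> u32 {
    // Manhattan distance: admissible (never overestimates) here.
    let h = |(x, y): (i32, i32)| ((x - goal.0).abs() + (y - goal.1).abs()) as u32;
    let mut best: HashMap<(i32, i32), u32> = HashMap::from([(start, 0)]);
    // Min-heap ordered by f = g + h, via Reverse on a (f, g, pos) tuple.
    let mut open = BinaryHeap::from([Reverse((h(start), 0u32, start))]);

    while let Some(Reverse((_f, g, pos))) = open.pop() {
        if pos == goal {
            return g; // first goal expansion is optimal: h is admissible
        }
        for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)] {
            let next = (pos.0 + dx, pos.1 + dy);
            if (0..8).contains(&next.0) && (0..8).contains(&next.1) {
                let ng = g + 1;
                if best.get(&next).map_or(true, |&old| ng < old) {
                    best.insert(next, ng);
                    open.push(Reverse((ng + h(next), ng, next)));
                }
            }
        }
    }
    unreachable!("goal is always reachable on an open grid")
}

fn main() {
    // Optimal corner-to-corner path length on an 8x8 grid is 14.
    assert_eq!(astar((0, 0), (7, 7)), 14);
}
```

If the heuristic overestimated (say, Manhattan distance times 10), the search could pop the goal via a suboptimal path first, which is exactly why the admissibility condition matters.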

This is an interesting puzzle. It's not obviously in NP. My guess would be that it's PSPACE-complete. Have you thought about the complexity at all?

I gave the thought some idle time, but it's been so long since I've constructed a proper hardness proof. If I do, I'll definitely make a post about it!

One small suggestion, I initially skimmed the first part of the article and didn't know the game worked on mobile. You could add an overlay to the game before you tap/click it to display the controls.

I'm going to go back to reading it now :)


  git clone --branch start https://github.com/djkoloski/anima_solver
  Cloning into 'anima_solver'...
  fatal: Remote branch start not found in upstream origin

This is fixed now.

Very nice, enjoyed playing around with the game examples.

Is there a repository where these kinds of puzzle games come from?


I request the moderators of HN to please flag this post. This fraud is claiming to solve an NP-hard problem in the title, and some jokers at HN made it to the front page.

I didn't read the title that way. You can have a (heuristic) solver that finds solutions to some instances of an NP-hard problem in reasonable time. That's not the same as claiming to have an algorithm that, in the strict theoretical CS sense, would solve that problem (i.e. all instances of it) in polynomial time. The latter claim would be highly suspicious by default; the former need not be, as it doesn't imply anything extraordinary.

Quote from OP in another comment:

> I think a better way to phrase it is that we are writing a solver for a reasonable subset of inputs to an NP-hard problem.


Many experts spend their lives on approximations of NP-hard problems. For the love of God, if you don't understand something then please don't trivialize it.

Was the headline changed from "problem" to "puzzle"?

Is this even an NP-hard problem?

I would really enjoy a proof that it is not!

Could you be more specific about the fraud? Optimizing your code to use appropriate data structures, and using state reduction to remove symmetries and duplicate states seems like a perfectly cromulent way of reducing a problem to its true core, which may be small enough to bruteforce.

Claiming to solve an NP-hard problem is fraud. Just put "optimizing code" in the title.

> Claiming to solve an NP-hard problem is fraud.

Claiming to solve an NP-hard problem in polynomial time on all inputs would be either fraud or a breakthrough. This is not such a claim. The algorithm is organized to perform well on many but not all inputs; its worst case is exponential time and it doesn't pretend to be otherwise. If you play chess against an engine like Stockfish, the exact same thing is going on, and in fact the algorithms involved are closely related to the one in the article.


I'd suggest that maybe "puzzle" has a slightly different connotation than "problem".

I think a "puzzle" is a concrete instance of a more formal and abstract "problem".

So I don't think claiming to solve an instance (or many!) of a problem is inappropriate.


People solve specific np-hard problems all the time, every day. They just don't solve THE np-hard problem.


