
The claim here isn't that computers couldn't solve this, just that they currently don't. In general, computers and humans play the beginning and ending of chess games with some strong heuristics -- there are too many possible moves to enumerate, but there's a very limited number of worthwhile moves to consider.

In this puzzle, we have an extremely weird endgame. Humans can come up with a new heuristic on the fly, but current chess programs find that all their existing heuristics fail, leaving them to fall back on brute-force search or very crude heuristics, neither of which helps.

Obviously, it would be a matter of minutes to code up a new heuristic to detect this case. The interesting question is what humans are doing that lets us solve the problem so rapidly.




Scott Aaronson (himself a harsh Penrose critic on these matters!) made this point several years ago, but in the context of 3SAT (an NP-complete problem). His "human-favoring instance" was a 3SAT problem that encodes a violation of the pigeonhole principle.

Then [1], as now, I think that was still too generous to humans -- both Penrose and Aaronson have spent so much time around smart people that they've forgotten what the average person is like. The average person isn't clever enough, especially when even 4-pigeon/3-hole instances blow up to an absurd number of clauses.
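For concreteness, here's a rough sketch of the standard pigeonhole encoding (the function name is mine, and this emits plain CNF rather than strict 3-CNF): variable v(p, h) says pigeon p sits in hole h. With more pigeons than holes the formula is unsatisfiable, yet resolution-based proofs of that fact are famously exponential in size.

```python
from itertools import combinations

def pigeonhole_cnf(pigeons, holes):
    """CNF for 'each pigeon gets a hole, no hole holds two pigeons'."""
    def v(p, h):
        # DIMACS-style variable for "pigeon p sits in hole h", numbered from 1.
        return p * holes + h + 1

    clauses = []
    # Each pigeon occupies at least one hole.
    for p in range(pigeons):
        clauses.append([v(p, h) for h in range(holes)])
    # No two pigeons share a hole.
    for h in range(holes):
        for p1, p2 in combinations(range(pigeons), 2):
            clauses.append([-v(p1, h), -v(p2, h)])
    return clauses

# 4 pigeons, 3 holes: 4 "at-least-one" clauses + 3 * C(4,2) = 22 clauses,
# and the count keeps growing before any conversion to strict 3-CNF.
print(len(pigeonhole_cnf(4, 3)))  # 22
```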

Either way, I think the scaling problem rears its head:

- Only an exponentially small fraction of problems is pathological like this.

- Only an exponentially small fraction of humans can derive these theorems on the fly that simplify the problem.

- You can get everyone to solve it, but only by providing an exponentially precise hint. (The "good heuristic oracle" in the linked thread.)

Plus, I suspect there are general heuristics that avoid much of this "dumb" searching -- for example, representing the problem as a graph and checking its symmetries to avoid loops of "hm, but what if I put pigeons 2, 3, and 4 in the holes... blast, that doesn't work either."

[1] https://philtcs.wordpress.com/2011/10/06/class-4-the-p-vs-np...


Yes, most decent modern SAT solvers have no trouble with the pigeonhole problem.


You touch on something interesting that I don't see often mentioned with NP-hard problems, and that's the actual inputs for where they're hard.

Packing a knapsack with many small items, or with items large relative to the knapsack, makes it relatively trivial.

It's often the case that real-world instances of NP-hard problems have inputs that fall in these easier-to-solve ranges.
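To make that concrete, here's a minimal sketch (function name and sample items are my own, not from the thread) of the textbook dynamic-programming knapsack, which runs in O(n * capacity) time -- pseudo-polynomial, so instances with modest capacities are easy despite the problem's NP-hardness:

```python
def knapsack_max_value(items, capacity):
    """0/1 knapsack by DP over capacities; items are (weight, value) pairs.

    NP-hardness only bites when the capacity is exponentially large in the
    input's bit-length; for small capacities this table is tiny.
    """
    # best[c] = maximum value achievable with total weight <= c
    best = [0] * (capacity + 1)
    for weight, value in items:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

items = [(3, 4), (4, 5), (7, 10), (8, 11)]
print(knapsack_max_value(items, 10))  # 14, from the (3, 4) and (7, 10) items
```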


>Packing a knapsack with many small items, or with items large relative to the knapsack, makes it relatively trivial.

Interestingly, that's the basis of the (now broken) Knapsack cryptosystem -- your private key is an easy knapsack, and you convert it to the public key -- a hard knapsack -- via modular multiplication. You encode your message by your choice of which items from the hard knapsack to add to the sum, which becomes the ciphertext.

https://en.wikipedia.org/wiki/Merkle%E2%80%93Hellman_knapsac...
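A toy sketch of the scheme (the tiny parameters below are hypothetical, purely for illustration -- real keys used hundreds of large weights): the private key is a superincreasing sequence, solvable greedily, and the public key is its modular disguise.

```python
# Private key: a superincreasing sequence (each weight exceeds the sum of
# all previous ones), so the easy knapsack is solvable greedily.
private = [2, 3, 7, 14, 30, 57, 120, 251]
q = 491              # modulus, chosen larger than sum(private) = 484
r = 41               # multiplier, coprime to q

# Public key: the "hard-looking" knapsack made by modular multiplication.
public = [(r * w) % q for w in private]

def encrypt(bits):
    # The message is the choice of which public weights to add up.
    return sum(b * w for b, w in zip(bits, public))

def decrypt(c):
    # Undo the modular multiplication, then solve the easy knapsack greedily.
    s = (c * pow(r, -1, q)) % q
    bits = []
    for w in reversed(private):
        if s >= w:
            bits.append(1)
            s -= w
        else:
            bits.append(0)
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(msg)) == msg
```

The "now broken" part: Shamir showed in 1982 that the modular disguise leaks enough structure for a polynomial-time attack, which is why the greedy trapdoor alone isn't a safe basis for a cryptosystem.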


Have you checked out lattice-based crypto? It's the spiritual successor to Merkle-Hellman, based on the hardness of subset sum. I've got a rough presentation from when I talked about it at a reading group: https://docs.google.com/presentation/d/1_kLJ7M_7HKrzN0auz0-z...


Interesting! I didn't know they were related. Thanks!


Nit: "exponentially" does not mean "really", as in "really small". It means increasing (or decreasing) at a far faster rate than whatever it's exponential in. Often that's time, though any metric can be used.


That ("has exp(n) scaling with respect to input size") is how I was using it, and that usage is a critical part of my point, that the involvement of humans does not change the problem's difficulty class.

If you have a reason to think that it is not literally exponential, I'd love to discuss that! (I have good reasons to believe this is the actual behavior on the first and third effects but not necessarily the second.)


No, it doesn't actually mean anything about "far faster". It means the rate of increase is proportional to the current value.
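In symbols (a standard calculus fact, not a claim from either comment), that proportionality is exactly what defines exponential growth:

```latex
\frac{df}{dx} = k\, f(x) \quad\Longrightarrow\quad f(x) = f(0)\, e^{kx}
```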


> The interesting question is what the humans are doing that lets us solve the problem so rapidly.

The need for strategy is eliminated with tactically perfect play... Since computers are not yet to the point of having "solved" chess to tactical perfection, there will always be scenarios where strategy wins.

IMO this is the maximum win for human-machine interaction... humans define the strategy, and computers aid in the tactical execution of that strategy. I'm not really a big believer that AI is anywhere close to beating humans at being human, especially when you step outside the bounds of a simplistic game with a narrow rule set.

Like with cars and planes, you won't ever see an autonomous vehicle winning a World Rally Championship, and you won't ever see a computer figuring out how to make a Hudson River emergency landing. But computers can greatly assist with braking, shifting, adjusting flaps, engine management, etc...


> you won't ever see an autonomous vehicle winning a World Rally Championship

Autonomous vehicles will surpass human rally teams. I've competed in rally here in the US.

The vision problems involved are difficult, but don't underestimate how much humans struggle with sun and dust as well. In everything else, the computer has the advantage.

WRC as an org may not ever run an autonomous class, but I would expect to see a robo rally event of some sort to show up, perhaps first as a spectacle at Pikes Peak or Isle of Man.


> you won't ever see an autonomous vehicle winning a World Rally Championship

Never say never. These guys seem to be on the right track. https://www.youtube.com/watch?v=1AR2-OHCxsQ


> you won't ever see ...

People have said that about almost everything computers now do that is taken for granted. I won't be having discussions with my computer any time soon (before I die), but winning that rally doesn't seem 30-50 years out.


The example in the article isn't strategic, it's tactical. White cannot reach a losing position if he doesn't move his pawns; I don't have to invoke any wishy-washy reasoning to claim that.


It might be possible (I haven't checked thoroughly) to draw a parallel: "Humans can come up with a new heuristic on the fly" for this chess problem <=> "Identify the incomplete part of this system -- in the sense of Gödel's incompleteness theorem".

So it might just be that humans process the game at a semantic level where we have developed the semantic tokens that can express the incompleteness, and from that produce a hypothesis, a heuristic.

Computers do not, generally, apply semantic reasoning to problem solving. At least, not yet. :)



