This is why I dislike most science reporters.
In this puzzle, we have an extremely weird endgame. Humans can come up with a new heuristic on the fly, but current chess programs find that all their existing heuristics fail, and they have to fall back on brute-force search or really dumb heuristics, neither of which helps.
Obviously, it would be a matter of minutes to code up a new heuristic to detect this case. The interesting question is what humans are doing that lets us solve the problem so rapidly.
Then, as now, I think that was still too generous to humans -- both Penrose and Aaronson have spent so much time around smart people that they've forgotten what the average person is like. The average person isn't clever enough, especially when even 4-pigeon/3-hole instances blow up to an absurd number of clauses.
Either way, I think the scaling problem rears its head:
- Only an exponentially small fraction of problems is pathological like this.
- Only an exponentially small fraction of humans can derive these theorems on the fly that simplify the problem.
- You can get everyone to solve it, but only by providing an exponentially precise hint. (The "good heuristic oracle" in the linked thread.)
Plus, I suspect there are general heuristics that avoid many of these "dumb" searches -- for example, representing the problem as a graph and checking its symmetries to avoid loops of "hm, but what if I put pigeons 2, 3, and 4 in the holes... blast, that doesn't work either."
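To make the symmetry point concrete, here's a toy backtracking search for the pigeonhole problem. Everything in it (the node counter, the "only try the first fresh hole" rule) is my own illustration, not taken from any real solver -- but it shows how exploiting the interchangeability of holes collapses the search:

```python
def search(pigeons, holes, symmetry_break=False):
    """Try to place `pigeons` pigeons into `holes` holes, at most one
    pigeon per hole. Returns (found, nodes_explored)."""
    nodes = 0

    def place(i, assignment, used_max):
        nonlocal nodes
        nodes += 1
        if i == pigeons:
            return True
        # Without symmetry breaking, every free hole is a distinct branch.
        # With it, an untouched hole is tried only once: holes are
        # interchangeable, so the "first fresh hole" stands in for all of them.
        limit = min(holes, used_max + 2) if symmetry_break else holes
        for h in range(limit):
            if h not in assignment:
                if place(i + 1, assignment + [h], max(used_max, h)):
                    return True
        return False

    found = place(0, [], -1)
    return found, nodes

# 5 pigeons, 4 holes: unsatisfiable either way, but symmetry breaking
# explores a single path (5 nodes) instead of all 65 permutations' worth
# of dead ends.
_, naive = search(5, 4)                        # naive == 65
_, pruned = search(5, 4, symmetry_break=True)  # pruned == 5
```

The gap widens rapidly with instance size, which is exactly why a human who "sees" the symmetry never experiences the exponential blow-up.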
Packing a knapsack with many small items, or with items large relative to the knapsack, makes it relatively trivial.
It's often the case that real-world instances of NP problems actually have their inputs fall in the easier-to-solve ranges.
Interestingly, that's the basis of the (now broken) Merkle-Hellman knapsack cryptosystem -- your private key is an easy knapsack, which you convert into the public key -- a hard knapsack -- via modular multiplication. You encode your message by choosing which items from the hard knapsack to add to the sum, which becomes the ciphertext.
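A toy sketch of that scheme, with tiny numbers I made up purely for illustration (a real instance would use hundreds of large items, and the system is broken regardless). The private key is a superincreasing sequence -- each item exceeds the sum of all previous ones -- which makes its subset-sum problem greedy-solvable:

```python
private = [2, 3, 7, 14, 30]   # superincreasing: the "easy" knapsack
q = 71                        # modulus, must exceed sum(private)
r = 43                        # multiplier, coprime to q

# Modular multiplication disguises the easy knapsack as a hard-looking one.
public = [(r * w) % q for w in private]

def encrypt(bits):
    # The message bits choose which public items go into the sum.
    return sum(b * w for b, w in zip(bits, public))

def decrypt(ciphertext):
    # Undo the modular disguise, then solve the easy knapsack greedily,
    # taking each item from largest to smallest if it still fits.
    s = (ciphertext * pow(r, -1, q)) % q
    bits = []
    for w in reversed(private):
        if w <= s:
            bits.append(1)
            s -= w
        else:
            bits.append(0)
    return list(reversed(bits))

msg = [1, 0, 1, 1, 0]
assert decrypt(encrypt(msg)) == msg
```

(`pow(r, -1, q)` for the modular inverse needs Python 3.8+.) The break, famously due to Shamir, recovers the trapdoor structure from the public key alone -- the disguise isn't as opaque as it looks.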
If you have a reason to think that it is not literally exponential, I'd love to discuss that! (I have good reasons to believe this is the actual behavior on the first and third effects but not necessarily the second.)
The need for strategy is eliminated with tactically perfect play... Since computers are not yet to the point of having "solved" chess to tactical perfection, there will always be scenarios where strategy wins.
IMO this is the maximum win for human-machine interaction... humans define the strategy, and computers aid in the tactical execution of that strategy. I'm not really a big believer that AI is anywhere close to beating humans at being human, especially when you step outside the bounds of a simplistic game with a narrowed rule-set.
Like with cars and planes, you won't ever see an autonomous vehicle winning a World Rally Championship, and you won't ever see a computer figuring out how to make a Hudson River emergency landing. But computers can greatly assist with the braking, shifting, adjusting flaps, engine management, etc...
Autonomous vehicles will surpass human rally teams. I've competed in rally here in the US.
The vision problems involved are difficult but don't underestimate how much humans struggle with sun and dust as well. With everything else the computer has the advantage.
WRC as an org may not ever run an autonomous class, but I would expect to see a robo rally event of some sort to show up, perhaps first as a spectacle at Pikes Peak or Isle of Man.
Never say never. These guys seem to be on the right track.
People have said that about most of the things computers now do that we take for granted. I won't be having conversations with my computer any time soon (before I die), but winning that rally doesn't seem 30-50 years out.
So it might just be that humans process the game at a semantic level where we have developed the semantic tokens that can express the incompleteness, and from that produce a hypothesis, a heuristic.
Computers do not, generally, apply semantic reasoning to problem solving. At least, not yet. :)
I do think that a computer will play this correctly for both black and white; for white, a capture will result in a checkmate within a few moves, which the computer will be able to see. So this may result in incorrect computer evaluation, but not in incorrect play.
Anyway, this position reminds me that if there's any kind of situation where computers struggle, it's exactly this kind: where everything is closed apart from a few pieces. IIRC that was exploited by grandmaster Hikaru Nakamura a few years ago in his handicap match against the chess engine Komodo.
But yes, given a chance to get into this position, sure it will - e.g. given https://en.lichess.org/analysis/8/p7/kpP5/qrp1b3/rpP2b2/p5b1..., it will correctly play b3 and force the original position, since everything else results in a mate that it can see. All it needs for correct play is to rank the alternatives in the right order, and mate ranks below playing on with a material disadvantage.
Here's a preceding position which Stockfish completely fails to play correctly.
How I play the position as black is Ne7 followed by Bd6. If white plays Qh4, I can play Nd4+ followed by Nf5 and Nd6, and my king is safe. Then I can slowly, safely extricate myself and win.
To me it looks like a board transformation occurs, resulting in a left-to-right movement as opposed to a up-to-down movement.
The d- and e-pawns captured pieces to get to the a- and b-files.
Two of the three remaining pawns promoted to bishops.
Why couldn't black move the topmost pawn forward?
So the black pawns have no legal moves, while white could move the topmost pawn or capture either of the black rooks -- although capturing would be disastrous for white, since the black pieces could then leave their position and checkmate white in a few moves.
The board is oriented with white starting on the bottom and black on top.
A quick thing to also remember is that the bottom-right square should always be white.
I take issue with that statement. All chess engines I'm aware of will correctly play out the draw, so it doesn't defeat them. The engines will _evaluate_ a material advantage for black.
Frankly, this isn't really a surprise.
Incidentally I once won money on Betfair (a P2P betting site) by evaluating a position in a Kasparov vs computer game that looked like a win to computers for the computer, but that I knew Kasparov would prevail in. My fellow gamblers were trusting the computer evaluations :)
Stockfish loses here, but could easily draw.
Crazily enough, every chess player in the world can see immediately that it's a draw from the very beginning (unless white concedes a helpmate), yet Stockfish doesn't get it...
I was wondering if it would repeat that 47 moves later, but it didn't. Settled for the draw.
Once I'd got this far with the book, I put it down.
I don't see how mate is possible. In order to mate, you need to advance the white pawn to promotion, but it's impossible to advance the white pawn without one of the 3 bishops capturing it.
So the white king just needs to stay on a white square to stay safe.
The so-called "aha!" moment was remembering that bishops always stay on the same color. I can't claim any great insight, it's just something I read somewhere (so it could certainly be programmed into a computer).
I had to check that there was no way for the other pieces to escape. I'm not good at chess, so I just did a very quick check and then assumed it was probably fine -- given the nature of the puzzle it's unlikely there'd be a tricky edge case to cover there.
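The color-invariance fact is certainly programmable -- here's a quick sketch (all my own code, not from any engine) verifying mechanically that every square a bishop can ever reach shares its color, which is the whole safety argument for the white king:

```python
def color(square):
    """0 for dark, 1 for light; square given like 'c5'."""
    file, rank = ord(square[0]) - ord('a'), int(square[1]) - 1
    return (file + rank) % 2

def bishop_moves(square):
    # All diagonal destinations from `square` on an empty board.
    f, r = ord(square[0]) - ord('a'), int(square[1]) - 1
    moves = []
    for df, dr in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
        nf, nr = f + df, r + dr
        while 0 <= nf < 8 and 0 <= nr < 8:
            moves.append(chr(nf + ord('a')) + str(nr + 1))
            nf, nr = nf + df, nr + dr
    return moves

# Every reachable square shares the bishop's color, so a bishop on a
# dark square can never even attack a king that stays on light squares.
for sq in ('a1', 'c5', 'h8'):
    assert all(color(m) == color(sq) for m in bishop_moves(sq))
```

The hard part, as others note, isn't encoding this one theorem -- it's getting an engine to pull the right theorem out of a very large library at the right moment.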
The thesis seems to be that a computer could never develop and use general theorems that let you short-circuit the search process. That's probably true for chess engines (and also mostly irrelevant to real chess matches, rather than puzzles). Definitely an interesting question to explore. My guess would be that real computer creativity is possible, but may require some new techniques.
Yes and no. You can easily encode any specific heuristic into a computer. And brute force (add more hardware) can ensure you hit a lot of the heuristics.
But the hard part is ensuring that your search of heuristics is intelligent, and only needs to do a small number of steps before it hits a good one; in your case, what caused the "same square" heuristic to reach the top of your mind so quickly? That's what we want to reproduce.
(To be sure, the "bishops stay on the same color" heuristic is simple, but once you have a lot of heuristics/theorems to build off of, it's just one fish in a very big sea, so you need a scalable, general approach.)
Is this possible outside the context of purposely making bad moves to humiliate your opponent? I thought you got to choose whatever piece you wanted when promoting, which would mean bishops are strictly dominated by queens.
They are trying to understand what it is about humans that makes them instantly able to figure out how to figure out the answer.
Humans can learn; computers can't. They can train on data, but so far none can teach itself how to train.
Once you're on d7, the pawn can be captured or kept. If black captures the pawn with a bishop, you move the king into the pawn's old spot. Then you capture the black rook with the other pawn for the win.
If there are no more bishops on the diagonal, or if they move their king to the b-file, then you get a queen and finish them off.
Am I missing something (I'm a very bad player)?
If you capture the black rook, black will capture your pawn with the queen (and proceed to mate you now that their queen can move).
The mistake black makes is trying to get its king out of the morass as soon as the pawn moves, instead of just capturing the pawn with its bishops.
The way we solve this problem is not by thinking in terms of pure chess logic. We solve it by thinking "what was the puzzle creator thinking?" So we recall from memory what kinds of patterns are suitable for such a problem and start a tree search. It's similar to what a chess AI does, but the nodes are abstract patterns rather than just chess moves.
This puzzle follows a mental pathway that is common for most humans.
We could also design problems that can easily be solved by specific chess AIs but not by humans or other chess AIs : just make it so that the AI naturally moves in the right direction, for example by making specific heuristics always right.
So while this problem is interesting for getting insight into the way humans solve puzzles, it doesn't make the human brain special.
Reading other comments, I see you'd also need to use the white king to guard the white pawn at c7 - I missed that - but even with that, you'd still need to hope that all three black bishops would go off that diagonal.
Anyway, this seemed mostly about visual pattern recognition and thinking geometrically, visualizing vectors, seeing unblocked pathways. That's not how computers do it.
Practically, this isn't really an issue. In fact, running the position through the GarboChess JS engine (http://analysis.cpuchess.com), playing as white, it just moves the king around — exactly what a human would do to draw the game.
Why are we wasting brains and heuristics on solving problems that would never exist in real chess play? Seems like an inefficiency to me.
An artifact from our lack of ability to play chess as intelligently as machines.
Just like an inferior machine might be able to do something like burnouts because it doesn't have anti-lock braking. (I don't know cars, insert real example here.)
Doesn't seem that useful a demonstration, except for people who don't know much about computers. Is this a bit like Hawking on AI?
black Qa4 (queen is forced because otherwise a4xb5 mate)
Computers basically know nothing--which is good and bad. It's good because they don't require much convincing before performing boring, repetitive tasks. They simply don't know or care that a given task is repetitive or boring, or even useful or not useful, so they will keep chugging along until they can return their output. The downside of this is that if they are to know anything at all, they must be programmed to know it.
They can also learn through evolutionary iteration once suitably programmed. One could conceive of a program that spits out random logical relationships to describe a large data set (what humans might refer to as "life experiences") and prunes the logical statements until they eventually return accurate logic describing the data. But this would require large numbers of generations of pruning, and careful programming of the heuristics for logical evaluation at inception.
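That evolve-and-prune loop can be sketched in a few lines. This is a deliberately tiny illustration of my own: the "rules" are just two linear coefficients, the data is points on y = 3x + 1, and every parameter (population size, mutation scale, generation count) is arbitrary:

```python
import random

random.seed(0)

# The data the candidate "rules" must explain: points on y = 3x + 1.
data = [(x, 3 * x + 1) for x in range(10)]

def error(rule):
    # How badly a rule (a, b) describes the data, as squared error.
    a, b = rule
    return sum((a * x + b - y) ** 2 for x, y in data)

# Start from random hypotheses.
population = [(random.uniform(-10, 10), random.uniform(-10, 10))
              for _ in range(50)]

for generation in range(200):
    # Prune: keep only the candidates that describe the data best.
    population.sort(key=error)
    survivors = population[:10]
    # Vary: survivors reproduce with small random mutations.
    population = survivors + [
        (a + random.gauss(0, 0.3), b + random.gauss(0, 0.3))
        for a, b in random.choices(survivors, k=40)
    ]

best = min(population, key=error)  # converges toward (3, 1)
```

Even this trivial setting needs thousands of evaluations to rediscover a rule a human reads off instantly, which is the commenter's point about the cost of undirected evolution.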
We can observe that our brains are born out of a system of evolution that has already undergone such processes. We have had a powerful survival heuristic in our genetic program for developing a broad set of skills, including logical reasoning, and it has been running for billions of years, pruning countless generations to give rise to our inherent level of intelligence, which can discover logical shortcuts like the one in this problem. The evolutionary process also gives rise to possibly illogical but robust sets of skills, since survival and procreation are heuristically more valuable than being logically perfect in reasoning.
Sure, theoretically, and given sufficient computing resources and time, a man-made computer system could approximately match and even exceed the human capacity for discovering logical shortcuts. But when paired with a competent human programmer, or team of programmers, the programmers can vastly reduce the time and resources required for this process of intelligent evolution, and produce intelligent programs much more quickly than a program left to simply evolve its intelligence on its own.
It took us humans quite a while to get here. Why would we assume that a computer could evolve faster than us, autonomously, without being programmed by us to do so? It is possible, I suppose, but it's even more unlikely than the evolution of human intelligence, which, sure, happened autonomously, but over a course of billions of years.
There are chess-like problems which invoke the halting problem, but they involve infinite boards with infinite numbers of pieces on them.
It's even possible to write non-halting algorithms in non-Turing-complete systems for which the Halting problem doesn't hold.