
Is Chess with Queen Odds a Provable Win? - randomwalker
http://arvindn.livejournal.com/129555.html
======
Dove
The links from the article are fascinating. I had never heard of Arimaa
before, and am quite intrigued.

And from the linked article by Kasparov
([http://www.nybooks.com/articles/archives/2010/feb/11/the-
che...](http://www.nybooks.com/articles/archives/2010/feb/11/the-chess-master-
and-the-computer/)) comes this gem:

    
    
       In what Rasskin-Gutman explains as Moravec’s Paradox, in chess, as in so many 
       things, what computers are good at is where humans are weak, and vice versa. 
       This gave me an idea for an experiment. What if instead of human versus machine
       we played as partners? My brainchild saw the light of day in a match in 1998 
       in León, Spain, and we called it “Advanced Chess.” Each player had a PC at hand 
       running the chess software of his choice during the game. The idea was to 
       create the highest level of chess ever played, a synthesis of the best of man 
       and machine.
    
       ....
    
       In 2005, the online chess-playing site Playchess.com hosted what it called a 
       “freestyle” chess tournament in which anyone could compete in teams with other 
       players or computers. Normally, “anti-cheating” algorithms are employed by 
       online sites to prevent, or at least discourage, players from cheating with 
       computer assistance. (I wonder if these detection algorithms, which employ 
       diagnostic analysis of moves and calculate probabilities, are any less 
       “intelligent” than the playing programs they detect.)
    
       Lured by the substantial prize money, several groups of strong grandmasters 
       working with several computers at the same time entered the competition. At 
       first, the results seemed predictable. The teams of human plus machine 
       dominated even the strongest computers. The chess machine Hydra, which is a 
       chess-specific supercomputer like Deep Blue, was no match for a strong human 
       player using a relatively weak laptop. Human strategic guidance combined with 
       the tactical acuity of a computer was overwhelming.
    
       The surprise came at the conclusion of the event. The winner was revealed to be 
       not a grandmaster with a state-of-the-art PC but a pair of amateur American 
       chess players using three computers at the same time. Their skill at 
       manipulating and “coaching” their computers to look very deeply into positions 
       effectively counteracted the superior chess understanding of their grandmaster 
       opponents and the greater computational power of other participants. Weak human 
       + machine + better process was superior to a strong computer alone and, more 
       remarkably, superior to a strong human + machine + inferior process.

~~~
gjm11
Moravec's "Paradox" isn't really paradoxical at all. What makes us say that
computers are "strong" in some area? Answer: the fact that they are good there
_relative to our prior expectations_. Where do those expectations come from?
From looking at ourselves. Similarly: what determines where we say we're
"weak"? Answer: seeing other things being strong relative to us. Computers,
for instance.

Now, of course there are other points of reference for deciding where we're
"weak". Doesn't that invalidate my argument? Why, no, because actually if we
use those other points of reference the paradox rather goes away. For
instance: Computers do arithmetic very, very fast. Is that an area where we're
weak? Only if we compare ourselves against computers; we do arithmetic much
better than chickens or tigers do.

------
nas
> Indeed, if you start from a closed position where strategy dominates and
> tactics are of relatively little use, top human players can still trounce
> computers

I haven't been following computer chess as much as I used to, but I suspect
this is actually untrue. Computer speed has been increasing as always, but
there have also been massive gains in chess program strength (e.g. compared to
old programs running on the same hardware). For example, Rybka 4 would
absolutely crush programs from 10 years ago.

Those combined gains make computer chess programs so strong that I doubt even
top players can hold them off, even when the position looks quiet. Rybka can
defeat GMs even when they have pawn-plus-first-move odds[1]. In one match
Rybka had a tiny opening book, so the GM had every opportunity to steer the
game in a strategic direction.

1\.
[http://en.wikipedia.org/wiki/Rybka#Odds_matches_versus_grand...](http://en.wikipedia.org/wiki/Rybka#Odds_matches_versus_grandmasters)

~~~
randomwalker
Good point. When I wrote that, what I had in mind was puzzle-like positions
where strategic planning is required. For example see
[http://en.wikipedia.org/wiki/Fortress_(chess)#Defense_perime...](http://en.wikipedia.org/wiki/Fortress_\(chess\)#Defense_perimeter_.28pawn_fortress.29)
(Petrosian vs Hazai).

My real point here is that no chess program incorporates strategy the way
humans do, and that it is unfortunate that this goal isn't being pursued.

~~~
jacquesm
Strategy is just our way of compensating for low clockspeeds.

~~~
Confusion
Which is why we will _always_ win some contests, as long as computers don't
try their hand at strategy themselves. Clock speeds will always be too low for
some problems.

------
xenophanes
If anyone is serious about proving things about chess I would suggest they
start by proving things about simpler chess variants.

The variant of chess "wild 5" where your pawns start on the 7th rank and
pieces on the 8th (so pawns promote in one move) has far fewer choices than in
chess. It's a simpler game. It has a significant advantage for white, much
larger than in normal chess. Humans came somewhat near to proving it's a win
for white just by learning the (relatively few) openings out to 20 or so
moves. At each step there's usually only a couple moves that aren't terrible.
For the first six moves there is a _single_ way of playing which is considered
best for both sides.

Yet even in this much simpler game which humans are near cracking, I think a
pure math type approach would have a very hard time getting anywhere.

If you can't do anything there, you could always try an even simpler game.
There is a game called pawns where you start with only your pawns. If you
promote you win instantly. Math ought to be able to solve that one. If you
crack that, move on to little chess (normal chess but only with pawns and
kings). Little chess should no doubt be a draw (you can waste moves with your
king, unlike in pawns, where it's less clear), but proving that would be a
good accomplishment, I think.
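The "pawns" game on a full board is still fairly big, but its essence fits on a 3x3 board: Martin Gardner's hexapawn, which is known to be a win for the second player. A plain minimax over the complete game tree settles it instantly. Here is a minimal sketch, with hexapawn standing in for the full 8x8 pawns game (promoting wins immediately; a player with no legal moves loses):

```python
from functools import lru_cache

# 3x3 hexapawn: board is a 9-char string, indices 0-2 are White's
# home row, 6-8 Black's. A pawn pushes one square straight forward
# onto an empty square or captures one square diagonally forward.

def legal_moves(board, white_to_move):
    mine, enemy, step = ('W', 'B', 3) if white_to_move else ('B', 'W', -3)
    out = []
    for i, sq in enumerate(board):
        if sq != mine:
            continue
        f = i + step
        if 0 <= f < 9 and board[f] == '.':
            out.append((i, f))                 # forward push
        for d in (-1, 1):                      # diagonal captures
            c = i + step + d
            if 0 <= c < 9 and 0 <= i % 3 + d <= 2 and board[c] == enemy:
                out.append((i, c))
    return out

@lru_cache(maxsize=None)
def to_move_wins(board, white_to_move):
    moves = legal_moves(board, white_to_move)
    if not moves:
        return False                           # stalemated player loses
    goal = range(6, 9) if white_to_move else range(0, 3)
    for src, dst in moves:
        if dst in goal:
            return True                        # promotion wins immediately
        nb = list(board)
        nb[dst], nb[src] = nb[src], '.'
        if not to_move_wins(''.join(nb), not white_to_move):
            return True
    return False

print(to_move_wins('WWW...BBB', True))         # False: a second-player win
```

The same shape of search, plus a transposition table and a tablebase-style backward pass from terminal positions, is how one would presumably attack the larger pawns game.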

------
amalcon
Game researchers don't ignore _all_ Chess-like games that are more difficult
for computers. It's just that they've met with very, very limited success in
Go. Combined with Go's relative unpopularity outside of East Asia, this
means you don't hear about it very much.

For example, according to Wikipedia, a computer actually beat a professional
Go player for the first time in 2008.

~~~
mquander
Sure, perhaps with a huge handicap or on a 9x9 board. There exists no computer
that could ever beat any professional go player in an even 19x19 game, even if
you gave the professional as many beers as he could drink.

~~~
amalcon
Well, that's what it says on Wikipedia -- though as it happens, the citations
are both dead links, so your guess is as good as mine.

edit: Trivial search turned this up:
[http://www.sciencedaily.com/releases/2009/05/090514083931.ht...](http://www.sciencedaily.com/releases/2009/05/090514083931.htm)

Wikipedia was apparently talking about a handicap match. Still, it's very
telling that 2008 was the first time a computer beat a pro _even in a handicap
match_.

~~~
monkeypizza
well, handicaps can be arbitrarily large. so a bot could have beaten a pro
with handi even 40 years ago, as long as you make the handi large enough.

recently bots have gotten a lot better at go - they just passed 3dan rank,
which is above where most amateurs ever reach.
[http://www.lifein19x19.com/forum/viewtopic.php?f=18&t=13...](http://www.lifein19x19.com/forum/viewtopic.php?f=18&t=1344&start=0)

But they still have a ways to go.

Here is a (fake money) future prediction market on whether a bot will beat a
pro in an even game by 2020 - running at 20% right now.
<http://www.ideosphere.com/fx-bin/Claim?claim=GoCh>

~~~
amalcon
In this case, they were fairly reasonable handicaps -- six to eight stones,
though certainly large, is not unheard of. You could see that kind of
difference between a low-level and a high-level amateur, for example.

~~~
mquander
Sure, it's not unheard of, but it's still huge! It's extremely hard to become
six or eight stones better from that level, and by all appearances, it
represents an _enormous_ gap in ability. The most talented and smartest humans
take years of constant study and practice in their teens and 20s (at the
height of their cognitive power) to go from 2d or 3d amateur (the level of the
strongest programs) to high-dan professional. Further perspective: Various
professionals have remarked that they believe the best humans are only about 3
handicap stones weaker than God.

Consider this. Perhaps you play chess; David Levy's famous chess AI bet was
that no computer by 1978 could beat him, a chess international master, in a
match (I don't know his exact strength, but presumably 2400-2500 Elo -- much,
much stronger than an expert!) Now, in 1978 the best computer could not beat
him in a match. But it did win a game, and draw a game. However, it took
nearly another 20 years from there before a computer could beat the world
champion.

But Go computers, relatively speaking, are not even as good at Go as the 1978
computer was at chess. The equivalent of a 2d amateur at Go might be a 2100
Elo human at chess, a strong expert; certainly not an IM! So it's not obvious
to me that Go computers will be at a world-class level in a shorter timeframe
than 20 years. Indeed, I believe more effort has gone into computer Go circa
2010 than had gone into computer chess circa 1978, so it might be reasonable
to expect progress to be even slower.

~~~
amalcon
_So it's not obvious to me that Go computers will be at a world-class level in
a shorter timeframe than 20 years._

Don't get me wrong: I agree with you 100% on this. I'd more likely go further
and wager _against_ that happening. Go has an absolutely silly branching
factor, no solid method of pruning, and no straightforward position evaluation
function. Material doesn't work nearly as well as it does in Chess, and the
sheer amount of information contained in the board makes the "mean value of
this board position in past grandmaster games" approach completely infeasible.

My only point is that researchers are, indeed, making progress. It's just very
limited.

------
ig1
+1 for funding issues.

When I was working on my undergraduate thesis on applying machine learning
techniques to board games, I developed the basis of a new method of analyzing
board games (using graphs of games and graph isomorphism) but I wasn't able to
find anyone who could finance me doing a research masters in it. Although I
think part of my issue was that it wasn't obviously Computer Science, nor was
it obviously Mathematics.

~~~
troutwine
The old saw still rings true: turn your research problem into a military
application and the DoD will likely fund it. Perhaps instead of modeling a
chess board as a graph, you model passable routes in mountainous terrain. Instead
of chess pieces, maybe the forces you're modeling are small groups with
different movement characteristics?

~~~
jrockway
Which chess piece is the guys with guns, and which chess piece is the
airstrike?

~~~
troutwine
"In this paper we assume all forces are ground-based and beyond any hope of
air-support or resupply. Historical analysis of battle-field circumstance will
reveal numerous instances--even in the modern period--of such occurrences. A
more general result is beyond the scope or intention of this paper."

------
jderick
There is another closely related field that has plenty of real applications:
formal verification. Essentially the task is to ensure no reachable state
violates some property. The problem is the same as in chess, the reachable
state space is enormous. There is probably more funding available for this
take on the problem than chess, but it has been studied for decades and there
are no easy answers.

~~~
bane
Out of curiosity, I wonder how long it'll be before we can simply compute
every possible valid game, inserting every valid move into some type of tree
structure, and have a computer simply walk the tree structure always aiming
for a "win" leaf.

Anybody have any ideas on how big the data for that would be? (I suspect there
could be some optimizations as some games that start differently could end up
in the same state as some other games -- i.e. optimize for board state
collisions).
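Tic-tac-toe is small enough to make the "board state collisions" point concrete: the same position is reached by many different move orders, so deduplicating states in a set (a transposition table) shrinks the data enormously. A sketch, using the well-known tic-tac-toe counts:

```python
# Enumerate every tic-tac-toe position reachable from the empty
# board, deduplicating transpositions with a set. Play stops as
# soon as someone completes a line.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != '.' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def explore(board, player, seen):
    if board in seen:
        return                      # transposition: already expanded
    seen.add(board)
    if winner(board) or '.' not in board:
        return                      # game over: win or draw
    nxt = 'O' if player == 'X' else 'X'
    for i in range(9):
        if board[i] == '.':
            explore(board[:i] + player + board[i + 1:], nxt, seen)

seen = set()
explore('.' * 9, 'X', seen)
print(len(seen))                    # 5478 distinct positions
```

The 255,168 distinct move sequences collapse to 5,478 distinct positions (765 after also merging rotations and reflections), so the optimization is real; for chess, though, even the deduplicated state space remains astronomically large.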

~~~
reitzensteinm
It's possible we never will. Although we take Moore's law for granted,
eventually we'll hit the natural limits of computation and storage per m^3,
and solving chess completely may well be beyond that (though then again, maybe not).

From Wikipedia (<http://en.wikipedia.org/wiki/Shannon_number>):

"Allis also estimated the game-tree complexity to be at least 10^123, "based
on an average branching factor of 35 and an average game length of 80". As a
comparison, the number of atoms in the observable universe, to which it is
often compared, is estimated to be between 4×10^79 and 10^81."
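Allis's figure is easy to reproduce from the quoted assumptions by working in logarithms; a quick sanity check:

```python
import math

# Game-tree size under Allis's assumptions: about 35 choices per
# turn, over a game of length 80.
log10_games = 80 * math.log10(35)     # ~123.5, i.e. about 10^123
log10_atoms = 81                      # upper estimate for the universe

print(f"game tree ~ 10^{log10_games:.1f}")   # 10^123.5
print(log10_games > log10_atoms)             # True: games dwarf atoms
```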

~~~
bane
I'm wondering if that number is for every possible game, valid or invalid, or
just valid games.

In case anybody is interested

<http://en.wikipedia.org/wiki/Solved_game>

------
shinkansen
> What if we make things easier for the machine? It is obvious to a rank
> beginner that a perfect game with a rook handicap is a win for the side with
> the material advantage. No, make it a queen! Surely that must be a provable
> win?

Hm, I'm sure it must be. Although I don't know how you go about proving it,
it's a simple matter to force equal trades; black cannot avoid the exchange of
pieces forever, and if white plays a perfect game he will always win, without
doubt.

> Not so fast. Even against a crushing asymmetry in material, it is not too
> hard to avoid mate for a couple of dozen moves, which means that calculating
> all the way to the end of the game is beyond the reach of search-based
> algorithms.

Okay, just calculate the moves it would take to force the equal exchange of
material from a given position. Generally as the game progresses and the board
opens up it becomes inescapable.

After a certain point, when enough material has been removed from the board,
looking for mate becomes trivial. Especially if you operate with such a
commanding advantage as a queen... assuming you can force the equal exchange
of all other material, it is possible to calculate checkmate within a couple
of moves.

~~~
randomwalker
I'm sorry, but that is simply incorrect.

 _it's a simple matter to force equal trades; black cannot avoid the exchange
of pieces forever, and if white plays a perfect game he will always win,
without doubt._

Yes, that's pretty much the human intuition for why no one doubts that White
will win. But it is very, very far from a mathematical proof.

 _Okay, just calculate the moves it would take to force the equal exchange of
material from a given position._

Let me remind you that each _ply_ has a branching factor of about 20, which
means each move has a branching factor of several hundred. In most positions
you'd be lucky to be able to calculate even one or two forced exchanges, let
alone all the way to the end of the game.
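The arithmetic behind that explosion is worth making explicit. Even at the quoted figure of ~20 legal moves per ply, an unpruned tree outgrows physical limits almost immediately; a back-of-envelope sketch:

```python
import math

PER_PLY = 20                 # branching factor per ply, as quoted above
PER_MOVE = PER_PLY ** 2      # ~400 positions per full move (two plies)

# Positions to examine after n full moves, with no pruning allowed:
for n in (2, 5, 10):
    print(n, f"{PER_MOVE ** n:.1e}")

# Full moves until the unpruned tree exceeds ~10^80 atoms in the
# observable universe:
print(math.ceil(80 / math.log10(PER_MOVE)))   # 31
```

So after roughly 31 full moves the tree already has more nodes than the universe has atoms, while "avoiding mate for a couple of dozen moves" is easy even a queen down.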

A program attempting to prove victory operates in a very different context
from a normal chess playing program — it is not allowed to prune any positions
at all. We haven't even found the status of all seven-piece endings yet,
despite intense effort. See
<http://en.wikipedia.org/wiki/Endgame_tablebase>

It is utterly, utterly inconceivable that a search-based approach will ever
prove victory with Queen odds.

~~~
shinkansen
> Yes, that's pretty much the human intuition for why no one doubts that White
> will win. But it is very, very far from a mathematical proof.

White will win if he plays perfectly: this is mathematically assured because
white has an advantage of nine points. The only way for white to lose or to
draw is by extreme error. The article assumes a perfect game, so I too assume
that white will play a perfect game.

> Let me remind you that each ply has a branching factor of about 20, which
> means each move has a branching factor of several hundred. In most positions
> you'd be lucky to be able to calculate even one or two forced exchanges, let
> alone all the way to the end of the game.

As I recall, Deep Blue was calculating at a depth of more than eight moves...

> The Deep Blue chess computer which defeated Kasparov in 1997 would typically
> search to a depth of between six and eight moves to a maximum of twenty or
> even more moves in some situations. --
> <http://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)>

I find it difficult to believe that this wouldn't have improved or could not
be improved upon, since this was based on the technology available in 1997.
It's safe to say we've made progress since then.

While you may not be able to calculate an entire game, you could certainly
calculate all the exchanges required to reach a properly winnable and
calculable endgame. This is pretty damn close to being able to calculate a
whole game...and you're assured victory.

As I said, once you get to a point of two kings and a queen, it's trivial.

~~~
_flag
I think one important point you're missing is that a chess engine looking to
mathematically prove a victory is very different from the ones that play
against grand masters. Deep Blue may have been able to calculate 6 to 8 moves
in advance, but that was after it pruned all the moves that were obviously
incorrect. When you're trying to prove something mathematically, however, you
have to assume that even something as silly as sacrificing a queen for a pawn
with no obvious positional gains is a valid move and calculate all possible
branches taken from that move until checkmate some 30 moves down the road.

For example, checkers is a vastly simpler game than chess. Yet, it took 18
years of constant computation to solve it [1]. Even a game as simple as tic-
tac-toe has 255,168 possible games [2]. The estimated number of chess games is
10^10^50 [3].

1\. [http://www.newscientist.com/article/dn12296-checkers-
solved-...](http://www.newscientist.com/article/dn12296-checkers-solved-after-
years-of-number-crunching.html)

2\. <http://en.wikipedia.org/wiki/Tic-tac-toe>

3\. <http://mathworld.wolfram.com/Chess.html>
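The tic-tac-toe figure in [2] can be checked directly by brute force; a short sketch that enumerates every complete game:

```python
# Count every distinct tic-tac-toe game, where a game ends the
# moment a player completes a line or the board fills up.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != '.' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def count_games(board='.' * 9, player='X'):
    if winner(board) or '.' not in board:
        return 1                              # terminal: one finished game
    nxt = 'O' if player == 'X' else 'X'
    return sum(count_games(board[:i] + player + board[i + 1:], nxt)
               for i in range(9) if board[i] == '.')

print(count_games())                          # 255168
```

The same enumeration for checkers took 18 years of computation even with heavy cleverness; for chess it is exactly what the branching-factor arithmetic rules out.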

