
Why chess programs find good moves, but barely understand chess - soundsop
http://www.zenpawn.com/chessblog/2008/07/chess-programs-are-not-smart/
======
brentr
I am not competent enough to comment on the design of computer chess programs;
however, as a chess player, I can say that computers perform well in very
tactical positions. In more strategic play, such as the closed positions that
arise from many queen-pawn openings (1.d4) and from lines like the Closed Ruy
Lopez, the human's ability to take into account very subtle strategic ideas
usually gives the human the upper hand. That is why Kasparov tried to keep
most of his games against Deep Blue closed.

What was interesting is that when Kasparov played Fritz and Deep Fritz, he
actually chose some very tactical openings and didn't perform too poorly. Then
again, it's Kasparov. Kramnik had similar results.

------
dominik
We don't understand how humans "understand" chess, either.

From what I've read, the latest research seems to indicate that top chess
players train their brain's facial-recognition system to recognize tactical
board positions. But it's not clear how a grandmaster finds the right move,
especially in simultaneous exhibitions, where a GM plays 20 or 30 boards at
once against club players, often winning all of the games while spending only
a few seconds, if that, on each move.

~~~
brentr
Much of chess is based on patterns. Grandmasters spend hours a day going over
line after line in various openings. The real game usually doesn't start until
around move 15. Every once in a while, an opening novelty is found before move
10 that dramatically changes opening theory. When a novelty is finally
published (i.e., finally played in a game), it simply becomes part of the
theory that anyone who plays that line at the top levels must know. They will
take that line, run it through Fritz, Shredder, Junior, and any other chess
program they might have, and work out preliminary strategies. Any promising
lines will then be assessed by a team of weaker GMs helping a much stronger
GM train for some important match.

------
smanek
This reminds me of Searle's Chinese Room paradox.

Imagine an English-speaking person providing responses to Mandarin stimuli in
accordance with a function that maps every input of Mandarin words to a
Mandarin output. Most people would agree that he doesn't really 'understand'
Mandarin, even though he can 'converse' fluently in it.

------
rw
Too speciesist.

------
qqq
Chess programs that look only one move ahead and make all their moves in under
a second are better than most human players. The evaluation function they use
to decide what counts as a good position is actually quite smart and encodes a
lot of chess knowledge.

Most of the article consists of saying that chess programs have _certain
weaknesses_ that humans do not have. That's definitely true, but it doesn't
mean that computers "barely understand chess".

The article's author doesn't seem to be a serious chess player. His 19-move
"forced line" contains no branches to show what happens against other defenses
and, at a glance, it looks like Black used some moves inefficiently; it would
be hard to be confident there was nothing better without seeing a lot more
analysis. I'd only trust it with extensive branches and notes, or if he had
checked it with a strong computer...

~~~
dfox
I think that the main problem here lies in the meaning of "understand chess".

You can say that computers understand chess because they are able to
consistently beat best human players. Or you could say that they understand it
because they are able to play it at all.

Or you can say that they do not understand chess, because they use some
combination of hard-coded heuristics (and the evaluation function is nothing
but that) and brute force, which at first glance is too trivial an approach to
be called understanding.

~~~
qqq
The eval function contains more accurate knowledge of what matters in chess
than most humans have.

