

Chess Intuition and Computer AI - cwan
http://scienceblogs.com/cortex/2010/01/chess_intuition.php

======
dustingetz
"The software allows him to play more chess, which allows him to make more
mistakes, which allows him to accumulate experience at a prodigious pace."

The same can be said about the young generation of online poker pros of the
last 5 years, who regularly decimate the old-school pros who learned live
poker first.

~~~
clevercode
I wonder how much of this is simply due to poker's recent surge in popularity,
thus creating a much larger pool of potential talent to draw from.

------
ludwig
This piece stands in stark contrast with how computers are received in Go.
Past the beginner level, you are discouraged from playing a computer player at
all, lest you pick up bad habits that will hinder your progress later on.

Although, admittedly, this is because Go AI hasn't progressed to the level
where it can defeat professionals at all, much less consistently. Once this
happens, I can only begin to imagine the level that will be attained by some
future kid with plenty of time to practice against such a formidable opponent.

~~~
dutchflyboy
Disclaimer: I don't play a lot of Go, I could be completely wrong.

That's probably because in Go there are fewer intermediary goals. In chess,
capturing a piece, defending a piece, or simply improving your position by
putting a bishop on a diagonal can be seen as a step towards winning. Thus a
chess AI doesn't need to search very deep before knowing which strategies are
bad. For example, chess AIs written in JS often don't look much more than two
moves ahead but still play quite decently.
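
For what it's worth, here is roughly what such a shallow player amounts to: a
plain fixed-depth search over a crude material count. This is a sketch in
Python rather than JS, using the python-chess library for move generation; the
piece values and the two-ply default depth are illustrative assumptions, not
anything a particular engine actually uses.

    import chess

    # Crude material values; the king gets 0 because it can never be captured.
    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def material(board):
        """Material balance from the point of view of the side to move."""
        score = 0
        for piece in board.piece_map().values():
            value = PIECE_VALUES[piece.piece_type]
            score += value if piece.color == board.turn else -value
        return score

    def negamax(board, depth):
        """Plain fixed-depth negamax, no pruning at all."""
        if board.is_checkmate():
            return -1000                      # the side to move has been mated
        if depth == 0 or board.is_game_over():
            return material(board)
        best = -10**9
        for move in board.legal_moves:
            board.push(move)
            best = max(best, -negamax(board, depth - 1))
            board.pop()
        return best

    def best_move(board, depth=2):
        """Pick whichever legal move scores best at the given depth."""
        best, best_score = None, -10**9
        for move in board.legal_moves:
            board.push(move)
            score = -negamax(board, depth - 1)
            board.pop()
            if score > best_score:
                best, best_score = move, score
        return best

    print(best_move(chess.Board()))   # some legal opening move; all score 0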

But with Go you need to build chains made of several (often > 10) stones, and
that sort of calculation is very heavy for a brute-force style AI, which is
about all a computer can resort to at the moment.

~~~
kd5bjo
I don't play a lot of chess, but it seems like there are more intermediate
goals in go, though they're also more nebulous. Gaining center influence,
threatening an opponent's weak group, cutting an opposing group into two
groups, defending your own territory, making eyes, and reducing your
opponent's territory are all important goals.

The best moves are ones that further two or more of these at the same time;
it's very easy to lose if you're only concerned about one at a time.

~~~
lmkg
I don't play a lot of Go either, but I've read some on the topic.

There are a few problems with modeling Go in AI. One of the biggest is that in
Chess, you can more easily evaluate intermediary positions by material
advantage. Positioning still matters, and there are plenty of times when a
sacrifice is worth it, but material advantage is a serviceable enough
quantitative measure. Go does not see captures as commonly, and it emphasizes
positioning more, which makes positions harder to quantify.

The other problem is that (if I remember right) Go has more possibilities, so
it's harder to look ahead as far. All grid intersections on the board are
possible moves, and the board is bigger, while in chess the range of moves is
more constrained. But really, this isn't the important part, because the
forecast horizon in both games is finite, which means that in both games you
have to stop looking at possibilities at some point and evaluate the board
positions at the leaf nodes. And chess has a less-imperfect method for that.

~~~
kd5bjo
_But really, this isn't the important part, because the forecast horizon in
both games is finite, which means that in both games you have to stop looking
at possibilities at some point and evaluate the board positions at the leaf
nodes. And chess has a less-imperfect method for that._

Actually, that makes a big difference. Because you can look ahead farther in
chess, the intermediate scoring can be less accurate, as you have a real
chance of looking far enough ahead to have turned your positional advantage
into a material advantage.

Using the size of the board as a proxy for the number of legal moves, looking
ahead N moves in chess requires evaluating approximately 64^N intermediate
states, while go requires 361^N. That means that with equivalent computing
power, looking ahead N moves in chess will only let you look about N^0.177
moves ahead in go. Thus, a computer that can look ahead 40 moves in chess
wouldn't quite be able to look 2 moves ahead in go, without very aggressive
pruning of the moves that it considers.

~~~
gjm11
Right idea, wrong execution. If 64^N = 361^M then N log 64 = M log 361, so M =
(log 64 / log 361) N ~= 0.71N. So if you could look 12 moves ahead in chess,
this estimate would say you could manage 8 or 9 moves in go.

Unfortunately, it's worse than this. In practice, the effective branching
factor in a chess tree-search is more like 3 than 64, roughly because lots of
moves can quickly be found to be bad. I would be very surprised if the
corresponding figure for go were less than, say, 30. So now your factor is log
3 / log 30, or more like 1/3.
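
(That arithmetic is easy to check; the 3 and the 30 are my rough guesses
above, not measured figures.)

    from math import log

    naive = log(64) / log(361)    # board-size proxy            -> ~0.71
    pruned = log(3) / log(30)     # guessed effective branching -> ~0.32

    print(round(naive, 2), round(pruned, 2))   # 0.71 0.32
    print(round(12 * naive, 1))                # 12 plies of chess ~ 8.5 of go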

On top of that, go games are much longer than chess games, so your search
depth in go is a much smaller fraction of the whole game, so the search is a
worse proxy for the final result. Slightly related to that: there's no nice
simple measurement of how well you're doing that's comparable to measuring
material in chess. (Your ultimate goal is to build territory, but it takes
quite a while before any sort of quantitative estimate of how much territory
each player has is feasible, especially if you want it to be anywhere near as
cheap as a material count in chess.)
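
To make that last point concrete, here is about the cheapest territory
estimate one could write: flood-fill the empty regions and credit a region to
a colour only if stones of that colour are the only ones bordering it. The
board encoding (a list of strings using '.', 'B' and 'W') is just an
assumption for the sketch. Early in a game nearly every empty region touches
both colours or none, so this tells you almost nothing, whereas a material
count in chess is meaningful from the first capture onwards.

    def territory(board):
        """Count empty points walled off by stones of a single colour."""
        size = len(board)
        seen = set()
        scores = {"B": 0, "W": 0}
        for r in range(size):
            for c in range(size):
                if board[r][c] != "." or (r, c) in seen:
                    continue
                region, borders, stack = [], set(), [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if not (0 <= ny < size and 0 <= nx < size):
                            continue
                        if board[ny][nx] == ".":
                            stack.append((ny, nx))
                        else:
                            borders.add(board[ny][nx])
                if len(borders) == 1:          # one colour walls the region off
                    scores[borders.pop()] += len(region)
        return scores

    # Tiny example: Black walls off a 2x2 corner; White has walls, no territory.
    print(territory(["..BW",
                     "..BW",
                     "BBBW",
                     "WWWW"]))                 # {'B': 4, 'W': 0}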

------
tdmackey
Sadly, the article exaggerates the amount of computer chess Magnus plays. Few
grandmasters do. Chess software is good for analysis and study of the board,
but not so much for actually playing against. For direct quotations from
Magnus regarding this issue see
<http://www.nationalpost.com/life/story.html?id=2392166>

~~~
sireat
It is true, there are other aspects to Magnus's chess proficiency. Agdestein's
coaching when Magnus was in his early teens must have been a factor. It is
hard for a teen with other pressures to transition from a promising player
(with much hype, one must add) into a real premier player.

Notice, though, that Magnus mentions that he is not even sure where his real
chess board is. His chess time is spent at the computer.

Personally, I have trouble playing on a regular board after playing tons of
computer chess, but then again I am an old patzer (2400 FIDE).

------
kiba
I have this sort of experience in programming as well.

For the whole 5 years of my programming hobby and career, I wrote primarily in
Ruby (now starting to do JavaScript), struggling with my own problems and bugs
and generally learning how to write good code. I didn't think I was learning
much, since I was doing this whole thing in Ruby and Rubygame.

It took a beginner programmer asking for help for me to see the extent of my
knowledge. From there, I efficiently dispatched each problem one by one,
pointing out the simple errors that the newbie had made. He was so thankful
that he gave me 10 bucks.

Even so, I still felt that I wasn't so smart given the vast domain of
knowledge that exists in programming and computer science. Surely there is
someone my age who is way smarter than me and has programmed all sorts of cool
stuff. Surely there is someone who can code a Tetris clone with unfathomable
beauty.

I reckon that each programmer in the world only solves a fraction of the
problem space in the areas they're interested in.

------
joe_the_user
It would be nice if there were more meat in the discussion of what intuition
really is.

 _This is a truism of expertise. Although we tend to think of experts as being
weighted down by information, their intelligence dependent on a vast set of
facts, experts are actually profoundly intuitive. When experts evaluate a
situation, they don't systematically compare all the available options or
consciously analyze the relevant information._

References? Studies? It sounds very plausible but...

It would help if the one link in the article weren't behind a paywall, too.

~~~
lmkg
Hofstadter's book GEB talked about chess and intelligence a bit, and it
mentioned some studies. There's one I most definitely remember. The study
involved taking two groups of people, normal people and chess pros, showing
them a chess board for like 5 seconds, and having them reconstruct as much of
the board position as they could.

The normal people did exactly as you would expect, placing a few
pieces around, with a lot of off-by-one errors. The chess pros, on the other
hand, got board positions comparatively "more wrong" than the beginners.
Multiple pieces were radically moved across the board. However, when these
board positions were looked at by other pros, the responses were almost
universally along the lines of "well, the positions are different, but
strategically they're actually kinda similar..."

It's qualitative feedback, to be sure, difficult to verify and certainly not
to be trusted in a scientific sense. Nonetheless, it does reveal some of the
"intuition" going on. Pro chess players are good at seeing high-level patterns
in the positioning of the pieces, related to strategies and advantages and
relative positioning. They see these things so well that they stop seeing the
actual pieces, to an extent. It reminds me of that scene in the first Matrix
movie where the one guy (Cypher?) is pointing to the screen with numbers
trailing down, saying "I don't see the numbers anymore, it's just blonde,
brunette, redhead, ..."

This seems to relate more to perception than to intuition as it's
traditionally thought of, but I would argue that the two are more strongly
coupled than most people realize. One type of intuition is perceiving emergent
phenomena directly without noticing the underlying layers.

~~~
jerf
My memory isn't all that great, so I hate to go out on a limb here, but what I
think was discussed in GEB was that if shown a sensible board layout that
could be reached in real play, the pros could nail it in five seconds, whereas
normal people could do only tolerably well in five seconds. Show the pros a
nonsense board that could never come up in real play, and the normal people do
the same as they did before, but the pros crash and burn. This is meant to
show that the mental conception of a board is different between a pro and a
novice.

~~~
dutchflyboy
Hmm, so this would mean that a normal player sees the game as just a bunch of
pieces, whereas a pro looks at the position (strategy, advantages for one
player, etc.). Well, it would explain some things about pros, but what does
this mean when you train with an AI? I mean, yes, you get more experience, but
a typical game for a computer is not a typical game for a human at all. It
would be nice to know which AI Magnus Carlsen used to train with and compare
its playstyle to that of human pros. Maybe his strategy is just different
enough to catch people with something they don't expect/know and give him a
slight advantage.

~~~
mquander
This is not really accurate. The strongest chess programs (mostly Rybka,
Shredder, and Fritz) are commercially available, and all top grandmasters are
using more or less the same ones.

The advantage of training with an AI is that the AI will happily play you for
ten or twelve hours a day, from any position you please, without tiring; you
can ask it for the evaluation of any position, and it will generally provide
an extremely accurate one; it will immediately reveal errors in your
calculations and suggest tactical possibilities that few humans would see; and
it will play perfect (for small sets of pieces) and near-perfect endgames
against you.
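
(As a side note on what "ask it for the evaluation of any position" looks like
in practice: any UCI engine can be scripted. The sketch below uses the
python-chess library and assumes a UCI engine binary is installed and on the
PATH; Stockfish is named here purely as a free stand-in for the commercial
engines mentioned above.)

    import chess
    import chess.engine

    # Assumes a UCI engine binary called "stockfish" is on the PATH.
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")

    # Any position can be set up from a FEN string; here, the starting position.
    board = chess.Board()

    info = engine.analyse(board, chess.engine.Limit(depth=18))
    print(info["score"].white())     # evaluation from White's point of view
    print(info.get("pv", [])[:3])    # first moves of the engine's suggested line

    engine.quit()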

Last, but not least, computer chess databases are like nothing available 20
years ago. Chess professionals (and amateurs) have access to all the games
they have ever played, hundreds of games played by any potential rival, and
hundreds or thousands of games played by IMs and GMs in any potential opening
line.

