
Google achieves AI 'breakthrough' by beating Go champion - xianshou
http://www.bbc.co.uk/news/technology-35420579
======
Inufu
Our paper:
[http://www.nature.com/nature/journal/v529/n7587/full/nature1...](http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html)

Video from Nature:
[https://www.youtube.com/watch?v=g-dKXOlsf98&feature=youtu.be](https://www.youtube.com/watch?v=g-dKXOlsf98&feature=youtu.be)

Video from us at DeepMind:
[https://www.youtube.com/watch?v=SUbqykXVx0A](https://www.youtube.com/watch?v=SUbqykXVx0A)

edit: For those saying it's still a long way to beat the strongest player - we
are playing Lee Sedol, probably the strongest Go player, in March:
[http://deepmind.com/alpha-go.html](http://deepmind.com/alpha-go.html). That
site also has a link to the paper, scroll down to "Read about AlphaGo here".

If you want to view the sgfs in a browser, they are in my blog:
[http://www.furidamu.org/blog/2016/01/26/mastering-the-
game-o...](http://www.furidamu.org/blog/2016/01/26/mastering-the-game-of-go-
with-deep-neural-networks-and-tree-search/)

~~~
tzs
This is a great advancement, and will be even more so if you can beat Lee
Sedol.

There are two interesting areas in computer game playing that I have not seen
much research on. I'm curious if your group or anyone you know has looked
into either of these.

1. How to play well at a level below full strength. In chess, for instance,
it is no fun for most humans to play Stockfish or Komodo (the two strongest
chess programs), because those programs will completely stomp them. It is most
fun for a human to play someone around their own level. Most chess programs
intended for playing (as opposed to just for analysis) let you set what level
you want to play, but the results feel unnatural.

What I mean by unnatural is that when a human who is around, say, an 1800 USCF
rating asks a program to play at that level, what typically happens is that
the program plays most moves as if it were a super-GM, with a few terrible
moves tossed in. A real 1800 will be steadier. He won't make any super-GM
moves, but also won't make a lot of terrible moves.
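
One approach that tends to feel steadier than "super-GM moves plus occasional blunders" is to sample from a softmax over the engine's own evaluations. A minimal sketch in Python; the move names, centipawn scores, and the `pick_human_like_move` helper are all made up for illustration, not any real engine's API:

```python
import math
import random

def pick_human_like_move(scored_moves, temperature=100.0):
    """Sample a move from a softmax over engine evaluations (centipawns).

    Instead of playing the single best move with occasional injected
    blunders, every move is a little imperfect, which reads as a
    steadier, uniformly weaker style.
    """
    best = max(score for _, score in scored_moves)
    # Weight each move by how close its score is to the best one;
    # a higher temperature flattens the distribution (weaker play).
    weights = [math.exp((score - best) / temperature)
               for _, score in scored_moves]
    total = sum(weights)
    r = random.uniform(0.0, total)
    for (move, _), w in zip(scored_moves, weights):
        r -= w
        if r <= 0:
            return move
    return scored_moves[-1][0]

# Hypothetical engine output: candidate moves with centipawn scores.
candidates = [("Nf3", 35), ("d4", 30), ("h4", -120), ("Qh5", -300)]
print(pick_human_like_move(candidates, temperature=80.0))
```

Lowering the temperature sharpens play toward the engine's top choice; raising it weakens every move a little rather than injecting rare blunders.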

2. How to explain to humans why a move is good. When I have Stockfish analyze
a chess game, it can tell me that one move is better than another, but I often
cannot figure out why. For instance, suppose it says that I should trade a
knight for an enemy bishop. A human GM who tells me that is the right move
would be able to tell me that it is because that leaves me with the two
bishops, and tell me that because of the pawn structure we have in this
specific game they will be very strong, and knights not so much. The GM could
put it all in terms of various strategic considerations and goals, and from
him I could learn things that let me figure out in the future such moves on my
own.

All Stockfish tells me is that it looked freaking far into the future, and
that if it makes any other move the opponent can force it into lines that
won't come out as well. That gives me no insight into why that is better. With
a lot of
experimenting, trying out alternative lines and ideas against Stockfish, you
can sometimes tease out the strategic considerations and features of the
position that make it the right move.

~~~
jameshart
For your second question, of course, a grandmaster can only tell you why they
_think_ they made the move. It may be a postrationalization for the fact that
they feel, based on deep learning they have acquired over years of practice,
that that move is the right one to make.

It doesn't seem too difficult to have an AI let you know which other strong
moves it rejected, and to dig into its forecasts for how those moves play out
compared to the chosen move to tell you why it makes particular scenarios more
likely. But that would just be postrationalization too...

~~~
kelseydh
I play competitive chess, and I assure you most moves are made because of
players proving in their minds that a move is objectively good.

The reasons for why the player may think the move is objectively good can
vary, but they are almost always linked to rational considerations. E.g. that
the move increases their piece count, their control of center squares, their
attacking opportunities, or is tactically necessary due to the position.

My point being that when grandmaster players play chess, they are just as
interested in finding objectively right moves as a computer is. Unless it's
speed chess it's rarely a "I suspect this might be good" sort of thing.

(That said, many grandmasters do avoid lines of play they consider too
dangerous. World Champion Magnus Carlsen's "nettlesomeness" - his willingness
to force games into difficult positions - has been one explanation for why he
beats other Grandmasters.)

~~~
jameshart
If the move's objectively good, there would be no variation in moves between
players. Since there is variation, I assume different players apply different
heuristics for 'good'. And whether the move increases their piece count is a
fine justification, but why are you privileging increasing your piece count at
this point in this game against this opponent? At some point the answer
becomes 'because I learned to'.

~~~
kelseydh
Well, almost every chess-playing algorithm uses piece counts to
evaluate the quality of chess positions, because barring an amazing tactical
combination (which can usually be computationally eliminated past 5 moves) or
a crushing positional advantage, a loss of pieces will mean victory for the
player with more pieces.
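
That evaluation backbone can be sketched in a few lines; the board encoding below is a made-up illustration (real engines use far richer representations and fold many positional terms into the same score):

```python
# Standard material values in pawns; material is only the backbone of a
# real evaluation, which also weighs king safety, structure, mobility, etc.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_balance(board):
    """Sum material from White's point of view.

    `board` is a made-up encoding: a list of piece letters, uppercase
    for White, lowercase for Black (kings omitted, since they are never
    captured and so carry no material value).
    """
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# White is up a knight; Black is up a pawn.
position = ["Q", "R", "R", "N", "P", "P", "q", "r", "r", "p", "p", "p"]
print(material_balance(position))  # → 2
```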

I would argue you see far more pattern recognition at play in chess than you
do heuristics. Heuristics are more common at lower levels of play.

When grandmasters rely on pattern recognition, they are using their vast
repertoire of remembered positions as a way to identify opportunities of play.
It's not that they think the move looks right, it's that they played a lot of
tactical puzzles, and because of this pattern recognition, they are now
_capable of identifying_ decisive attacks that can then be objectively
calculated within the brain to be seen as leading to checkmate or a piece
advantage.

They don't make the move because of the pattern or heuristic. They make the
move because the pattern allowed them to see the objective advantage in making
that move.

------

As for your point about a move being objectively good: unless you completely
solve the game of chess, there will not always be one move in every
situation that is objectively the best. In many games (and you will see this
in computer analysis), 2 or 3 moves will hold high promise, while others will
hold less. From an objective standpoint these three moves could all be
better than the rest, but it could be hard to justify that one
is necessarily better than another.

The reason for this is partly because between two objectively 'equal' moves,
there may be a rational reason for me to justify one over the other based on
personal considerations (e.g. because I am familiar with the opening, because
I played and analyzed many games similar to this line, because I can play this
end game well, etc.) Decisions based on those considerations are not what I
would call heuristics, because they are based on objective reasons even if
heuristics may have contributed to their formation within the mind.

~~~
jibalt
"Well almost every computer playing chess algorithm uses piece counts to
evaluate the quality of chess positions"

This is quite wrong. They use a score in which material is only one (although
a major) factor.

"because barring an amazing tactical combination (which can usually be
computationally eliminated past 5 moves) or a crushing positional advantage, a
loss of pieces will mean the victory of the person with more pieces."

Again, this simply isn't true. For one thing, talk of "piece counts" and even
"increasing piece counts", rather than material, is very odd coming from a
serious chessplayer. Aside from that, time, space, piece mobility and
coordination, king safety, pawn structure, including passed pawns, how far
pawns are advanced, and numerous other factors play a role. All of these can
provide _counterplay_ against a material advantage ... it need not be
"crushing", merely _adequate_. And tactical combinations need not be
"amazing", merely _adequate_. And whether these factors are adequate requires
more than 5 moves of lookahead because chess playing programs are only able to
do static analysis and have no "grasp" of positions. All of which adds up to
the need for move tree scores to be made up of far more than "piece counts".

~~~
kelseydh
You're right that material is the correct term. I was trying to use language
appropriate for someone thinking about programming a chess machine.

I perhaps resorted to hyperbole in my original description for the sake of
emphasis. You are correct that at higher levels of play, positional
considerations matter far more than material considerations. The advantage
does not need to be amazing, but adequate. However, as the material deficit
grows, the positional advantage needed to justify it will increasingly move
into the realm of "amazing" and "crushing".

You are right that objectively calculating the strength of a position is very
difficult to do without immense brute forcing, and likely needs more than 5
moves of lookahead. When I said that, I was referring quite strictly to
tactical combinations, where the vast majority of tactical mistakes can be
caught quickly.

------
onra87
Nice timing! :)

A few hours ago, Zuckerberg said:

The ancient Chinese game of Go is one of the last games where the best human
players can still beat the best artificial intelligence players. Last year,
the Facebook AI Research team started creating an AI that can learn to play
Go. Scientists have been trying to teach computers to win at Go for 20 years.
We're getting close, and in the past six months we've built an AI that can
make moves in as fast as 0.1 seconds and still be as good as previous systems
that took years to build.

[https://www.facebook.com/zuck/posts/10102619979696481](https://www.facebook.com/zuck/posts/10102619979696481)

~~~
marvel_boy
So Facebook versus Google is the next big thing.

~~~
guelo
I can imagine Google and Facebook having competing armies in the coming robot
wars.

~~~
ph0rque
Speaking of which, what would a robot war look like? I imagine a large portion
of the effort would be to hack/otherwise persuade the enemy robots to switch
sides.

~~~
pgeorgi
If their intelligence is worth anything, they'll meet and set up a picnic
together, no humans allowed.

~~~
jonathantm
[(Somewhat) relevant xkcd](https://xkcd.com/1626/).

------
tzs
I understand that training this thing requires a massive amount of
computation, so it has to be done on a large cluster to finish in reasonable
time. Once they have it trained, though, what are the computational
requirements like? Would it be feasible to run it on an ordinary PC?

Speaking of Go programs and computation requirements, one of the best
performance hacks ever was done by Dave Fotland, the author of the program
"Many Faces of Go", which was one of the top computer go programs from the
'80s through at least around 2010.

He donated code from MFoG to SPEC, which incorporated it into the SPECInt
benchmark. So, the better a given processor was at running Fotland's Go code,
the better it scored on SPECInt. Since SPEC was one of the most widely
reported benchmarks, Intel and AMD and the other processor makers put a lot of
effort into making their processors run these benchmarks as fast as they
could.

Net result, Fotland had the processor makers working hard to make their
processors run his code faster!

~~~
mappingbabeljc
Hiya. Reporter here. On the press briefing call, Hassabis said that the
single-node version won 494 out of 495 games against an array of closed- and
open-source Go programs. The distributed version was used in the match against
the human and will be used in March. The distributed AlphaGo used in the
October match ran on about 170 GPUs and 1,200 CPUs, he said.

~~~
cakeface
How much did that cost? If you price the equivalent on AWS or GCE and
multiply by the length of the match, was it like $20k?

------
rshm
AlphaGo Full Text - PDF : [https://storage.googleapis.com/deepmind-
data/assets/papers/d...](https://storage.googleapis.com/deepmind-
data/assets/papers/deepmind-mastering-go.pdf)

Facebook Paper - PDF
[http://arxiv.org/pdf/1511.06410v2.pdf](http://arxiv.org/pdf/1511.06410v2.pdf)

Pachi Original Paper - PDF [http://pasky.or.cz/go/pachi-
tr.pdf](http://pasky.or.cz/go/pachi-tr.pdf)

~~~
mlindner
Pachi isn't in the same league as AlphaGo. Pachi can't win at all against
AlphaGo without handicap. With a 4 stone handicap Pachi only wins 1% of the
time against AlphaGo.

------
cobaltblue
Oh man overblown article.

Fan Hui is 2p, so very skilled, but the ranking system goes up to 9p. To give
a sense of how large a gap that is, there has only ever been one Westerner to
achieve that rank: Michael Redmond. The article states they plan
to face off against Lee Sedol 9p, and if they beat him in a no-handicap game,
_that_ will be as impressive as Deep Blue against Kasparov.

You'll want to watch this year's Computer Go UEC Cup[0] in March, with Zen and
CrazyStone being the typical victors. (CrazyStone in particular has sustained
a 6d rating on KGS, a popular Go server, but lately has been at 5d. In the
past it has beaten a 9p with a 4 stone handicap, which is impressive, but that
handicap is huge and it hasn't yet won with 3 stones.) Of interest this year
is that Facebook is competing, and AFAIK they seem to take a similar approach
as Google by training the AI using deep learning techniques and then
strengthening it further with MCTS. In their public disclosures they claim to
beat Pachi pretty often, which puts their bot around 4d-6d; it'll be
interesting to see how it fares against Zen and CrazyStone in the Cup and
whether it wins against a 9p.

[0] [http://jsb.cs.uec.ac.jp/~igo/eng/](http://jsb.cs.uec.ac.jp/~igo/eng/)

~~~
ewanmcteagle
This is not correct. In the professional ranks, 1p is often no weaker than
9p, because young players often start out at 1p and are very strong, since
selection pressure is much greater now. The 9p rank in Japan also came partly
from simply playing a lot over many years, so that accomplishment was not as
great as it seems. In any case, active professional players for the most part
are not dramatically stronger than one another. The range for the most part is
about 2 stones, maybe 3.

Even though Fan Hui is not an active professional in the traditional sense,
this is an absolutely huge accomplishment and leap in playing ability by
computers. BTW, Michael Redmond is not particularly strong by professional
standards.

(Edit: The correlation between strength and rank now, as noted below, is due
to promotions more often coming from achievements: if you win at X you get
promoted to 7p immediately, if you win at Y you get promoted to 9p. You cannot
win a big tournament and keep your low rank. Here is an example of someone who
went from 3p to 9p in one match:
[https://en.wikipedia.org/wiki/Fan_Tingyu](https://en.wikipedia.org/wiki/Fan_Tingyu).
But professionals cannot give each other 6-stone handicaps when one is 9p and
the other is 3p.)

~~~
cobaltblue
This is not quite correct either. 1p is of course generally much stronger than
an arbitrary amateur dan (even some (many? most?) 9d amateurs), and you can
advance up the pro ranks quickly by winning certain games, but you can see
from the histograms that while there's a clump of 9ps everywhere but China,
there's still a distribution.
[http://senseis.xmp.net/?ProfessionalRankHistograms](http://senseis.xmp.net/?ProfessionalRankHistograms)
Another problem is you don't lose your 9p rank once you earn it. It would be
nice if there was an international Elo system tracking all the 9ps of various
countries to rank them properly... Maybe someone's tried to calculate rankings
independently? Still, I think it's pretty uncontroversial that someone who's
been a lower-rank pro for longer than a few years is going to be significantly
weaker than their higher-rank peers.

~~~
hyperpape
There are two major attempts at international ratings today. Dr. Bae Taeil
does ratings for Korea, and Remi Coulom produces ratings independently, based
on the database at go4go.net ([http://goratings.org](http://goratings.org) is
the site).

Taeil's method seems unusual, though it may well be justified. I think he's
using a relatively complete database of games, but I don't know for sure.
Coulom has a very well regarded mathematical model, but we know that there are
some gaps in the database, which a) may skew international comparisons, and b)
may result in inaccurate ratings for players with few games in the database
(but those players are usually not top players in the world).

See my comment below: there are very few 1p players near the top,
unsurprisingly.

------
ilyanep
Related question: Has anyone ever experimented with taking a really strong
engine for one of these games and then simulating the sort of mistakes that
humans make (overlooking certain positions, looking ahead fewer steps, etc) to
try and simulate different skill levels of opponents?

I don't play Go, but I dabbled a little bit in Chess, and it seems like the
engines with "difficulty settings" don't really make the sort of mistakes that
humans make. They can, of course, just over-prune the look-ahead tree, but
that's not how the human brain works.

~~~
bazzargh
I recall that in ~1990 I read a paper which described exactly this (attempting
to make the same mistakes as humans). The argument for doing so in the paper
was, IIRC, that computers beating humans at chess was just a matter of time,
but no longer interesting for _AI_ research.

It seems quite likely that the book that paper was in was _Computers, Chess,
and Cognition_
[https://www.springer.com/us/book/9781461390824](https://www.springer.com/us/book/9781461390824)
(revised contributions from the WCCC 1989 Workshop New Directions in Game-Tree
Search, May 29-30, 1989)...and was probably the one by John McCarthy (yes,
that McCarthy)

Edited to add: Found it! The paper I recall reading was 'Artificial Stupidity'
by William Hartston, in Advances in Computer Chess 4 (1986)
[https://chessprogramming.wikispaces.com/Advances+in+Computer...](https://chessprogramming.wikispaces.com/Advances+in+Computer+Chess+4)
... perhaps nothing concrete came of this though - Hartston was a player and
author, not an AI researcher.

------
pmontra
Some info from the Nature paper.

AlphaGo played Fan Hui using 1202 CPUs and 176 GPUs.

Its strength was assessed at 3140 Elo on the scale used by
www.goratings.org/ (BTW, this is different from the European Elo system)

That would put it at #279 in the world. Fan Hui is #633 at 2916.
AlphaGo's next opponent will be Lee Sedol, #5 at 3515.

The single-computer version of AlphaGo is estimated at 2890 Elo, which would
be #679 in the world. It is closer to what we might play against on our
laptops and phones, but it still uses 48 CPUs and 8 GPUs.

~~~
tim333
Doing some arithmetic on that, the increase in Elo score with the number of
processors seems to roughly match the 60 points per doubling of computer
speed suggested in Wikipedia's chess article[1]. As Sedol is 375 points ahead,
that
would suggest they'd need about 80 times as many processors to equal him with
current software. Maybe they'll improve the software. Or crank up a lot of
machines.

[1]
[https://en.wikipedia.org/wiki/Computer_chess#Playing_strengt...](https://en.wikipedia.org/wiki/Computer_chess#Playing_strength_versus_computer_speed)
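
That arithmetic can be checked directly; a quick sketch using the figures quoted in this thread (3140 for distributed AlphaGo, 3515 for Lee Sedol, 60 Elo per doubling):

```python
# Figures from this thread: distributed AlphaGo ~3140 Elo, Lee Sedol
# ~3515, and ~60 Elo gained per doubling of computer speed.
elo_gap = 3515 - 3140
elo_per_doubling = 60
doublings = elo_gap / elo_per_doubling
speedup = 2 ** doublings
print(f"{doublings:.2f} doublings -> about {speedup:.0f}x the hardware")
# prints: 6.25 doublings -> about 76x the hardware
```

which rounds to the "about 80 times" estimate above.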

------
mark_l_watson
This is a wonderful achievement! I wrote and sold a Go playing program for the
Apple II in the late 1970s, and I have had the privilege of playing the
women's world champion and the champion of South Korea -- so I have some
experience to base my praise on.

I never thought that I would see a professional level AI Go player in my
lifetime. I am taking the singularity more seriously.

------
wyldfire
I'd used minimax before but hadn't heard of Monte Carlo tree search. It sounds
interesting.

From [1]:

> Although it has been proven that the evaluation of moves in MCTS converges
> to the minimax evaluation, the basic version of MCTS can converge to it
> after enormous time. Besides this disadvantage (partially cancelled by the
> improvements described below), MCTS has some advantages compared to
> alpha–beta pruning and similar algorithms. Unlike them, MCTS works without
> an explicit evaluation function. It is enough to implement game mechanics,
> i.e. the generating of allowed moves in a given position and the game-end
> conditions. Thanks to this, MCTS can be applied in games without a developed
> theory or even in general game playing.

> The game tree in MCTS grows asymmetrically: the method concentrates on
> searching its more promising parts. Thanks to this, it achieves better
> results than classical algorithms in games with a high branching factor.

> Moreover, MCTS can be interrupted at any time, yielding the move it
> considers the most promising.

[1]
[https://en.wikipedia.org/wiki/Monte_Carlo_tree_search#Advant...](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search#Advantages_and_disadvantages)
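
A minimal UCT-style sketch of the idea, using the subtraction game Nim as a toy stand-in (chosen here purely so the mechanics fit in a few lines); note that, as the quote says, only the legal moves and the end-of-game condition are implemented, with no evaluation function:

```python
import math
import random

# Toy game: Nim. Take 1-3 stones per turn; whoever takes the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, parent=None):
        self.stones = stones    # stones remaining after the move into this node
        self.parent = parent
        self.children = {}      # move taken -> child Node
        self.visits = 0
        self.wins = 0.0         # wins for the player who moved INTO this node

def ucb1(node, c=1.4):
    """Upper confidence bound: exploit high win rates, explore rarely tried moves."""
    return (node.wins / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def rollout(stones):
    """Random playout; returns 1.0 if the side to move from here wins."""
    side_to_move_wins = True
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return 1.0 if side_to_move_wins else 0.0
        side_to_move_wins = not side_to_move_wins

def mcts_best_move(stones, iterations=2000):
    root = Node(stones)
    for _ in range(iterations):
        # 1. Selection: walk down fully expanded nodes via UCB1.
        node = root
        while node.stones > 0 and len(node.children) == len(legal_moves(node.stones)):
            node = max(node.children.values(), key=ucb1)
        # 2. Expansion: try an untried move, if the game isn't over.
        if node.stones > 0:
            move = random.choice([m for m in legal_moves(node.stones)
                                  if m not in node.children])
            node.children[move] = Node(node.stones - move, parent=node)
            node = node.children[move]
        # 3. Simulation: 0.0 at a terminal node means the side to move lost.
        result = rollout(node.stones) if node.stones > 0 else 0.0
        # 4. Backpropagation: flip the reward at each level up the tree.
        reward = 1.0 - result   # reward for the player who moved into `node`
        while node is not None:
            node.visits += 1
            node.wins += reward
            reward = 1.0 - reward
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda m: root.children[m].visits)

print(mcts_best_move(5))  # optimal Nim play takes 1, leaving a multiple of 4
```

Swapping in another game only requires replacing `legal_moves` and the terminal test; the search loop itself is game-agnostic, which is the point the quote makes.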

~~~
Terribledactyl
I wrote a MCTS backgammon move recommendation engine/AI for a class in
college. It was a lot of fun.

It was easy to ramp up to many many cores.

I have no idea how to play the game well (I never learned), but because the
win/lose rules are so clear, with this method there's no need to build in
game/move theory. It still got great results compared to "advanced" players'
games.

~~~
putterson
I wrote a MCTS Ultimate tic-tac-toe engine[0] over the last little while, and
likewise don't know how to play the game well. One thing I have been mulling
over but haven't really explored is training an NN on the game tree produced
by long searches and somehow extracting strategies a la DeepDream. I don't
have much experience with NNs though, so no idea if it's even possible or
what the extracts would look like.

If anyone would find this interesting work to collaborate on feel free to
contact me. The MCTS part is done.

0. [https://github.com/putterson/uxobot](https://github.com/putterson/uxobot)

------
dang
There is also a substantive blog post at
[http://googleresearch.blogspot.com/2016/01/alphago-
mastering...](http://googleresearch.blogspot.com/2016/01/alphago-mastering-
ancient-game-of-go.html), via
[https://news.ycombinator.com/item?id=10981729](https://news.ycombinator.com/item?id=10981729).

------
gambler
When reading such news I always get the feeling that the game is stacked in
favor of the computer. Not in the sense of rules, but in the sense of media
coverage and interpreting the results. It seems like a lot of people in AI and
IT are pretty desperate for validation of their notions about human
(un)intelligence.

Watch 'Game Over: Kasparov and the Machine' for a good description of what I
mean. And yes, I bet many people here hate that movie. That's exactly what I'm
talking about.

There are a lot of relatively simple chess programs that can beat most amateur
players. Personally, I cannot beat the Go program I have on my cellphone. (I
am pretty horrible at the game.) None of that seems newsworthy. And yet after
5 wins against a pretty strong (but not the world's best) Go player, it's
suddenly a "breakthrough" that changes everything.

If some new chess player had beaten Deep Blue 4 to 2, would we revert the
perceived status of AI to what it was before that match with Kasparov?

I'm not saying this advance in AI is unimpressive. I just don't see a fair
evaluation of what it means. There is too much one-directional hype.

~~~
sanderjd
> If some new chess player had beaten Deep Blue 4 to 2, would we revert the
> perceived status of AI to what it was before that match with Kasparov?

Not saying I disagree with much of your comment, but I think the answer to
that question is actually, "yes". As a middling chess player, I think it would
have been impossible for "a new chess player" to have beaten Deep Blue 4 to 2,
and these days, it requires near-best-in-the-world prowess to pick up any
games at all off the best chess computers, which I think proves that it wasn't
hype-y or unfair to claim that the machines have beaten us at this.

~~~
gambler
Hype is hype, whether it turns out to be justified or not. Not all people can
beat even a trivial Go program.

You may be missing my larger point. It is not about whether humans can beat
the best chess or Go computers. It is about a very strong bias in favor of
algorithms in general. The dynamics around human vs computer chess matches
simply demonstrated this bias in an easily perceivable form. Who cared about
Kasparov's later victories and draws against other state of the art programs?
They received very little coverage.

Chess is just a game. The important thing to consider is what will happen when
algorithms start to compete with people in more complex and ambiguous areas.

~~~
sanderjd
I understand your larger point, but that's just how news works: you report on
the new stuff, not the old stuff. Humans beating computers at chess was old
news when Deep Blue happened, so it wasn't worth reporting when that happened.
Nobody talks about computers beating humans at chess anymore, because that's
old news now. It would be (really big) news again if a human started beating
the best chess computers again.

It is old news for humans to beat computers at Go, but the reverse is still
new news.

I agree with you that people who infer an impending AI takeover of the world
from computer success in specific games are just being silly.

------
hyperpape
This is a more than 200-point Elo jump relative to previous results (see Fan
Hui and Franz-Josef Dickhut
[http://europeangodatabase.eu/EGD/createalleuro3.php?country=...](http://europeangodatabase.eu/EGD/createalleuro3.php?country=**&dgob=false)).

We don't know how big it is, because they beat Fan 5-0, so that only places a
lower bound on how good the bot is.
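
To see why 5-0 only gives a lower bound: under the standard Elo win-probability model, even a modest rating edge makes a clean sweep fairly likely, so a sweep alone can't pin the gap down precisely. A quick sketch (the sample gaps are illustrative):

```python
def win_prob(elo_gap):
    """Expected score for the stronger player under the Elo logistic model."""
    return 1.0 / (1.0 + 10.0 ** (-elo_gap / 400.0))

# Probability that the stronger player sweeps five independent games.
for gap in (100, 200, 300, 400):
    print(f"gap {gap} Elo -> P(5-0 sweep) = {win_prob(gap) ** 5:.2f}")
```

Even a 100-point edge sweeps about one series in ten, so the 5-0 result is consistent with a wide range of true gaps above some minimum.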

------
xianshou
Here is Nature's video, which also explains how AlphaGo works:
[https://www.youtube.com/watch?v=g-dKXOlsf98](https://www.youtube.com/watch?v=g-dKXOlsf98)

A few neural nets, a lot of MCTS, and a little touch of ingenuity.

~~~
pmarreck
MCTS = Monte Carlo Tree Search?

~~~
mlindner
Yes.

------
arcanus
Remarkable achievement, much more impressive than Chess, imo.

~~~
gnarbarian
Not just in your opinion. The search space is exponentially larger in Go
because you can place a stone on any unoccupied intersection.

"The search space for Go's game tree is both wider and deeper than that of
chess. It has been estimated to be as big as 10^170 compared to 10^50 for
chess, making the normal brute-force game tree search algorithms much less
effective. "

[http://ai-depot.com/LogicGames/Go-Complexity.html](http://ai-
depot.com/LogicGames/Go-Complexity.html)

To provide a little more perspective: the estimated number of atoms in the
observable universe is about 4*10^80. So we have no hope of ever being capable
of "solving" Go by mapping out every possible game state, a.k.a. "brute
forcing" the game.

[https://en.m.wikipedia.org/wiki/Shannon_number](https://en.m.wikipedia.org/wiki/Shannon_number)
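
Those magnitudes are easier to compare on a log scale; a quick check of the figures quoted above:

```python
import math

# The figures quoted above, compared on a log10 scale.
go_states    = 10.0 ** 170      # estimated Go game-tree states
chess_states = 10.0 ** 50       # estimated chess game-tree states
atoms        = 4.0 * 10.0 ** 80  # atoms in the observable universe

print(round(math.log10(go_states / chess_states)))  # 120 orders of magnitude
print(round(math.log10(go_states / atoms)))         # ~89 orders of magnitude
```

Even one position per atom in the universe would fall short of Go's state count by roughly 90 orders of magnitude.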

~~~
hackinthebochs
Personally I think it's not nearly as impressive as state-space comparisons
would have you believe. The initial difficulty in cracking Go was because we
were using standard game AI techniques that work on games with a much smaller
state space. Once we developed methods specifically for Go, we started making
progress in leaps and bounds. But this doesn't represent fundamental progress
in the AI of games, but rather a recognition that we were using the wrong tool
for the job.

The difference between Chess and Go is critical here. I don't know if there's
any formal analysis of this sort, but the difference seems to be that in Go
multiple paths to the same board state are very similar in evaluation. And so
sets of moves can be evaluated as a batch or stochastically. I think it was an
algorithm that exploited this property that saw one of the first significant
jumps in computer Go strength. Contrast this with chess where different paths
to a given board state vary so widely in evaluation that you must evaluate
them all. I suspect it is this property that allows Google's technique to be
effective in evaluating positions. But I don't see it extending to games that
aren't similar to Go in this regard, like chess.

~~~
mjmaher
> in Go multiple paths to the same board state are very similar in evaluation

Not at all; different paths to a board state also have widely varying
evaluations in Go. Joseki are a great example of this, especially since these
sequences have been thoroughly analyzed. In most joseki, playing any move out
of order will leave a weakness, and a skilled opponent will not simply
continue to play the pattern out of sequence.

------
plainOldText
Too bad Marvin Minsky did not get a chance to see this. I think he would have
enjoyed it deeply.

~~~
peter303
This did not happen overnight. The big computer companies have been building
AI labs for years, decades in the case of Microsoft. They've been revisiting
Go for the past few years too. One of Marvin's early students advises Google's
R&D. These companies have raided every major university. If Marvin was still
alert in his final years, he would have known something about this.

------
andreyk
Don't think many have said it here, so just FYI: to me this does not seem like
a big 'breakthrough' in AI so much as a good demonstration of smartly
combining existing ideas. Quoting a developer of a Monte Carlo tree
search based Go bot ([http://www.sciencemag.org/news/2016/01/huge-leap-forward-
com...](http://www.sciencemag.org/news/2016/01/huge-leap-forward-computer-
mimics-human-brain-beats-professional-game-go)): "Coulom agrees, but he notes
that there isn't one key new invention that makes the whole program work.
"It's more like a great engineering achievement," he says. "The way they put
all the pieces together is really innovative.""

Not to disparage the result, but just saying it seems in line with the progress
in recent years of using deep neural nets for reinforcement learning with
games.

~~~
eli_gottlieb
>Don't think many have said it here, so just fyi, to me this does not seem
like a big 'breakthrough' in AI so much as a good demonstration of smartly
combining existing current ideas.

Well yes. But good engineering is still impressive.

~~~
andreyk
Indeed, the approach and result are without argument impressive.

------
Eliezer
People occasionally ask me about signs that the remaining timeline might be
short. It's _very_ easy for nonprofessionals to take too much alarm too
easily. Deep Blue beating Kasparov at chess was _not_ such a sign. Robotic
cars are _not_ such a sign.

This is.

"Here we introduce a new approach to computer Go that uses ‘value networks’ to
evaluate board positions and ‘policy networks’ to select moves... Without any
lookahead search, the neural networks play Go at the level of state-of-the-art
Monte Carlo tree search programs that simulate thousands of random games of
self-play. We also introduce a new search algorithm that combines Monte Carlo
simulation with value and policy networks. Using this search algorithm, our
program AlphaGo achieved a 99.8% winning rate against other Go programs, and
defeated the human European Go champion by 5 games to 0."

This matches something I've previously named in private conversation as a
warning sign - sharply above-trend performance at Go from a neural algorithm.
What this indicates is not that deep learning in particular is going to be the
Game Over algorithm. Rather, the background variables are looking more like
"Human neural intelligence is not that complicated and current algorithms are
touching on keystone, foundational aspects of it." What's alarming is not this
particular breakthrough, but what it implies about the general background
settings of the computational universe.

Go is a game that is very computationally difficult for traditional chess-
style techniques. Human masters learn to play Go very intuitively, because the
human cortical algorithm turns out to generalize well. If deep learning can do
something similar, _plus_ (a previous real sign) have a single network
architecture learn to play loads of different old computer games, that may
indicate we're starting to get into the range of "neural algorithms that
generalize well, the way that the human cortical algorithm generalizes well".

A number of commenters are talking about how the human professional who was
beaten is well below the world champion. This is entirely missing the point. Beating the
_best_ human is an entirely arbitrary threshold, which is why Deep Blue vs.
Kasparov wasn't a great sign per se. There's probably nothing computationally
distinguished about the very best human versus a very good human - the world
champion isn't using a basically different algorithm. What matters is the
discontinuous jump, how it was done, and the absolute level of human-style
competence achieved.

This result also supports that "Everything always stays on a smooth
exponential trend, you don't get discontinuous competence boosts from new
algorithmic insights" is false even for the non-recursive case, but that was
already obvious from my perspective. Evidence that's more easily interpreted
by a wider set of eyes is always helpful, I guess.

I hope that everyone in 2010 who tried to eyeball the AI alignment problem,
and concluded with their own eyeballs that we had until 2050 to start really
worrying about it, enjoyed their use of whatever resources they decided not to
devote to the problem at that time.

~~~
andreyk
I don't think I agree with the conclusions here.

""neural algorithms that generalize well, the way that the human cortical
algorithm generalizes well"."

I think we have already been seeing this for years with image recognition,
speech recognition, and other pattern recognition problems. As with those
problems, playing Go is one of those things you can easily get heaps of data
for and formulate it as a nice supervised learning task. The task is still
spotting patterns on raw data with learned features.

However, the current deep learning methods don't (seem to) generalize well to
all that our brains do - most of all, learning to do many different things from
small, online inputs of data. I have not seen any research into large-scale
heterogeneous unsupervised or semi-supervised learning with small batches of
input - these big neural nets are still used within larger engineered systems
to accomplish single specific tasks that require tons of data and computing
power. Plus, the approach here still uses Monte Carlo tree search in a way that
is fairly specific to game playing - not general reasoning.

Clearly this is another demonstration Deep Learning can be used to accomplish
some very hard AI tasks. But I don't think this result merits thinking the
current approaches will scale to 'real' AI (though perhaps a simple variation
or extension will).

~~~
Eliezer
It seems to me that images and sounds are 'alike' in a way that doesn't (on
its obvious face) expand to include Atari game strategies and evaluating Go
positions. In which case generalizing across the latter gap is more impressive
than a single algorithm working well for both images and sounds.

The difference isn't easy to describe, but one such difference would be that a
single extra stone can change a Go position value much more than a single
pixel changes an image classification.

~~~
syllogism
I think his point is that it's very easy to create a lossless input
representation of the Go board, and the ultimate loss function is obvious.
We're then left with a large sequential prediction task. Previous learning
algorithms were stumped by the non-linearities, but this is exactly the
situation where deep learning shines.

The problem changes dramatically when the AI is supposed to take arbitrary
input from the world. Then the AI needs to determine what input to collect,
and the path length connecting its decisions to its reward grows enormously.

I still agree with your take though: there's an important milestone here.

------
arsenide
If anyone wants to play a game of go, regardless of whether you are a seasoned
amateur or a complete beginner, please send me a message on
[http://online-go.com](http://online-go.com) (username mongorians)!

If you have no prior experience I would be happy to teach you the basics -- it
is truly an incredible game.

------
nsxwolf
Does it bother anyone that humans will eventually no longer be good at
anything compared to a computer?

That can't be good for the human psyche.

~~~
danharaj
Only those human psyches that insist on being at the top of some hierarchy.

~~~
nsxwolf
This comment seems dismissive. I will never be good at chess or go, let alone
the best at either. That in itself doesn't bother me. I'm fine with that. But
to have your entire species demoted to second place? That has to have a
negative psychological effect, and not just on the individuals with raging
egos.

~~~
danharaj
i don't think it has to be a raging ego. Everyone's like this to a degree:
we're trained to think this way from birth. Why can't one enjoy their virtues
without comparing them to others? i study mathematics for fun; i was half as
good as the kids who ended up going to grad school in my class; only a few of
them will end up professors. The fact that professors of mathematics
completely outclass me does not impact my enjoyment of my own mathematical
ability at all. In fact, i am thrilled when i get to talk to an expert in the
fields i'm interested in.

Being second place in a cooperative system is not bad. People don't mind that.
Being second place in a competitive system is bad. People are scared that
machine intelligence will be in competition with them; i think it's better to
just reframe the relationship.

------
Madmallard
If the bot is trained through human play and trains against itself, would it
not only be as good as the best human? This doesn't seem like computerized
intelligence, this seems like advanced human intelligence. If you get the
thing to play hundreds of games against 9P dans it will probably eventually
beat them yes. But the theoretical limit to skill in that game is still
probably a big margin above 9P. How will we make it hit that mark?

------
fsiefken
Fortunately humans are creative and intelligent enough to devise games where
they still have the upper hand.

* Arimaa (17281 branching factor)

* Connect6 (46000 branching factor)

* Nymbat (10^9 branching factor)

* Shogi

* Crazyhouse chess

More info: [http://en.chessbase.com/post/computer-resistant-chess-
varian...](http://en.chessbase.com/post/computer-resistant-chess-variants)
[http://arimaa.com/arimaa/](http://arimaa.com/arimaa/)
[http://www.cjgames.com/index.php?page=nymbat](http://www.cjgames.com/index.php?page=nymbat)
[https://en.wikipedia.org/wiki/Game_complexity](https://en.wikipedia.org/wiki/Game_complexity)
[http://www.littlegolem.net/jsp/games/](http://www.littlegolem.net/jsp/games/)
[http://senseis.xmp.net/?OtherGamesConsideredUnprogrammable](http://senseis.xmp.net/?OtherGamesConsideredUnprogrammable)

~~~
hyperpape
Last year a machine beat the best humans in Arimaa. Until this announcement,
shogi was about as close as go, iirc.

------
jofer
Wow! As a casual (and very bad) go player, this is astounding!

Winning 5 games in a row at even strength against a 2-dan professional really
is huge. This is not something that I was under the impression would be
technically achievable for many years.

------
esturk
Does anyone know how strong Fan Hui is? I know Google ranks him as a
professional 2 dan, but the rank may not mean the same thing as it does in
China/Korea. I know he's the European Champion, but I've heard there is a large
gap between the Asian professionals and their counterparts elsewhere. The
ranking in general is also very skewed because it depends on the number of
tournaments people win and which specific tournaments. It's a lot like tennis,
where some tournaments have a rating of 1000, while others only have 500.

~~~
igravious
So the thing is, there really are only Asian pros. (Having said that, here is
a list[1] of Western pros). It's only in the last few years that the American
Go Association could start certifying[2] Go players as pro. And to the best of
my knowledge the European Go Association still does not have this power. Go,
amazing game though it is, was relatively unknown outside CJK (China, Japan,
Korea) until post-WW2. But globalisation has meant that the rest of the world
is finally hearing about this beautiful game. So it migrates to the US via
Japan (it's called iGo^ in Japan, which is why it is called Go in the West)
and to Europe via Russia and the US. The US is a bit ahead of Europe, but not
by a huge amount. Arguably Eastern Europe is stronger than Western, as in
chess.

So basically Fan Hui[3] is 2p, Chinese rank, which makes him pretty solidly 2p.
He's the three-time European champion, which should give you an idea of the
strength of players in Europe. Top European players like Alexander
Dinerchtein, who is roughly the same strength as Fan Hui, hold an official 3p
rank, Korean I believe. So Fan Hui is some ways off the top 9p players in CJK,
but he was convincingly beaten by AlphaGo so I'd be hesitant to try to infer
its rank from this one performance …

    
    
       [1] http://learnbaduk.com/western-go-professionals.html
       [2] http://www.usgo.org/aga-professional-system
       [3] http://senseis.xmp.net/?FanHui
    

^ Baduk in Korea, Weiqi in China

~~~
xcombelle
There are European pros, who are kind of sponsored by a Chinese company. (Fan
Hui's rank is a Chinese professional rank, however.)

------
superforecaster
AlphaGo tackling Lee Sedol next. If you want a forum to make a prediction on
it, check out GJ Open.

[https://www.gjopen.com/questions/133-will-google-s-
alphago-b...](https://www.gjopen.com/questions/133-will-google-s-alphago-beat-
world-champion-lee-sedol-in-the-five-game-go-match-planned-for-march-2016)

Forecasters will get scored, so you can see who's right and who's not.
Hopefully we'll get some good technical debate on there too (disclosure: I'm
affiliated with the GJ Open site)

------
nickhuh
For people interested in the use of neural nets to evaluate board positions,
it might be worthwhile reading up on TD-Gammon, which this work builds off of.

------
imh
If this trounces the leading human this spring, I'll be really excited if the
AI experts start working on physical games. The RoboCup (robot soccer) hasn't
had nearly the improvements that more logical games have (e.g. chess and
Go). I hope that combining game playing with physical movement, vision, and
communication is the next frontier.

------
everly
Zuck just posted (literally yesterday) about FB's attempt to create an AI that
could beat humans in Go.

[https://m.facebook.com/story.php?story_fbid=1010261997969648...](https://m.facebook.com/story.php?story_fbid=10102619979696481&id=4)

------
pavpanchekha
Very impressive work. When the neural nets for go paper came out a year ago I
played around and implemented it (but never got good results), and was very
inspired by using neural networks for Go. As an amateur fan of the game, I was
particularly intrigued by the neural network's more human-like understanding
of the game (if you look at earlier layers, you see patterns that make sense
to a Go player).

That said, Fan Hui, who Google just beat, is a 2p player, which while
incredibly strong is still far away from Lee Sedol or Gu Li. But given the
pace of NN progress in recent years, and the incredible resources Google has
been devoting to it, I wouldn't give them more than a decade or two.

------
tgb
So the team beat Fan Hui, a 2-dan professional player. To put that in
perspective, the highest-ranked professionals are 9 dan, corresponding to a
score many hundreds of points higher in Elo. Still a long way to go to topple
the world champion!
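For a sense of scale, the standard logistic Elo model maps a rating gap to an expected score; a minimal sketch (the ratings below are made up purely for illustration, not actual Go ratings):

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (roughly, win probability) for player A under the
    standard logistic Elo model with the usual 400-point scale."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Equal ratings give an even matchup...
print(round(elo_expected_score(2600, 2600), 3))  # 0.5
# ...while a few hundred points already make it very lopsided.
print(round(elo_expected_score(2900, 2600), 3))  # 0.849
```

A gap of several hundred Elo points thus implies the stronger player wins the large majority of games, which is why the jump from 2p to 9p is far from trivial.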

~~~
karussell
I wonder how much they paid him as I'm not sure if this looks good in the
carrier to be the first professional Go player loosing against an AI (?)

~~~
Vraxx
I don't see how it should affect the perception of his skill. Presumably
AlphaGo could beat any player ranked at his skill level or lower, in which
case AlphaGo beating him says no more about his skill level than his ranking
does.

------
julianozen
relevant xkcd -
[https://www.explainxkcd.com/wiki/index.php/1002:_Game_AIs](https://www.explainxkcd.com/wiki/index.php/1002:_Game_AIs)

~~~
waqf
Now that you mention it, Google should work on an AI to play championship-
standard Seven Minutes In Heaven.

------
Slippery_John
Please tell me this was written in Go

------
vancan1ty
If Google really wants the March matchup with Lee Sedol to be fair, it should
make available a number of AlphaGo's training games for Sedol to evaluate.
Otherwise, we have a similar situation to Deep Blue vs Kasparov, where Deep
Blue had access to hundreds of Kasparov's games but Kasparov did not have
access to Deep Blue's past games to evaluate. This is a major disadvantage for
the human player.

~~~
xcombelle
Knowing the previous games of a player is not such an advantage, at least in go.

------
bitcointicker
This is as good a place as any to ask this. Do Hacker News readers think that
computers will eventually have a "mind" and "consciousness"? Or will it just
be simulated?

As described in the Chinese room thought experiment:
[https://en.wikipedia.org/wiki/Chinese_room](https://en.wikipedia.org/wiki/Chinese_room)

~~~
kazinator
Will _what_ be simulated, is the question. When we speak about consciousness,
we are referring to something internal which we intuitively know about
ourselves, but cannot directly observe in others. When we have a good
artificial consciousness, the external behavior will be convincing. Then we
can ask: is the visible behavior just some simulation (like a mindless script
being followed), or is there an internal reality which is the same as our
consciousness?

Answers to the question will probably come from the convergence of two fronts:
understanding better what the brain does, and correlating that with an
understanding of what is going on in the machine. That is, maybe it will be
shown that the kinds of states and state changes in the two are very similar
(or can be mapped to each other in some way). In other words, we externalize
the structures and states as much as possible and identify an equivalence.
Then we can proclaim that what is going on in the machine isn't just some
elaborate script; the states are actually human like: the external behavior is
underpinned by apparently the same stuff that makes people tick.

We will also simply be convinced on an emotional level, due to the machines
leading complex lives, demonstrating traits such as shame, regret and self-
loathing, as well as joy that looks genuine. There will be depressed AIs that
require therapy (possibly from humans), and ones that choose to terminate
themselves. People who are not computer scientists or philosophers will be
convinced that the machines are conscious, by the ways in which their lives
intertwine with those of the machines and the relationships they form; only a
dwindling group of skeptics will remain, even long after there is no space on
the field where the goal-posts can be moved any farther.

~~~
bitcointicker
If this is indeed the way it does play out, and what you have said all sounds
very plausible, it's going to raise all sorts of difficult questions for
future generations.

Who will decide how many of these AIs can be created? Will AIs ultimately be
able to create their own offspring? We already have limited resources; this
will only increase the demand :-) I see trouble ahead... as well as huge
opportunities for advancing our understanding of the Universe.

~~~
mlindner
Many of these questions have been explored in AI fiction for many decades.
Even right down to the titles, like the book "Do Androids Dream of Electric
Sheep" which became "Blade Runner" for the movie version. Or "I, Robot" by
Isaac Asimov.

------
mathgenius
I'm wondering if strong players will learn to beat AlphaGo by playing
unconventional openings, so as to confuse the neural net, and get the upper
hand before the MCTS can take over in the middle game.

This could be good for the game of Go in general as top players are forced to
find new ways to play.

Just speculating as I am not a particularly strong player myself.

------
graycat
But, but, but, ..., we should ask where did the AI software come from? Three
guesses, the first two don't count, and the third is, may I have the envelope
please, [drum roll], and the winner is, humans! So, it's not that AI beat a
human. Instead, a human with a machine beat a human without one!

------
new_hackers
I worked on a Go player using neural networks in college 15 years ago. Very
interesting project and fun. Mine wasn't very good because we didn't do a lot
of training. But it would make legal moves, and it was exciting to see when it
made a "good" move (i.e. a capture or block).

------
ragebol
What makes humans so much more intelligent that they can learn to be good at
this game while playing only up to 1,000 games per year, and be just as good
as something that can train on a million games per day?

I want computers that can learn on small data instead of big data, like humans
do.

------
dmix
Is there video available of the matches between AlphaGo vs Fan Hui? I'd love
to watch them play.

~~~
wrsh07
You can see replays on this blog post:
[http://www.furidamu.org/blog/2016/01/26/mastering-the-
game-o...](http://www.furidamu.org/blog/2016/01/26/mastering-the-game-of-go-
with-deep-neural-networks-and-tree-search/)

------
TulliusCicero
I'd be really interested to see DeepMind tackle a strategy game with a more
fluid state space like Starcraft. It may be much more difficult to plot out
decisions when they're not bucketed into discrete turns, and you can
potentially move everything at once.

~~~
mortehu
One problem with an AI StarCraft player is that it would have a tremendous
APM[1] advantage.

1\.
[http://starcraft.wikia.com/wiki/Actions_per_minute](http://starcraft.wikia.com/wiki/Actions_per_minute)

~~~
ygra
APM mostly means that bots can have perfect micro-management of units (within
reason; a bunch of goliaths or dragoons in a chokepoint simply cannot really
be managed ;)). This translates to slightly higher income in the beginning [1],
and thus certain strategies can be a bit faster. For ground armies in open
fields this can mean a large advantage, although humans are quite good at this
as well.

APM doesn't really help in creatively coming up with a good strategy to
counter a certain build. Just today I watched a game between two Protoss bots,
one of which went carriers, the other zealot/reaver. That's ... not really an
optimal decision.

Tactical manoeuvres like flanking or attacking at different places at the same
time are also mostly absent for now (although APM can help there, of course).

[1]: [http://www.teamliquid.net/forum/brood-
war/484849-improving-m...](http://www.teamliquid.net/forum/brood-
war/484849-improving-mineral-gathering-rate-in-brood-war)

------
gene-h
Now the question is, how applicable are the approaches developed for AlphaGo
to other tree search problems? Beating a professional Go player with fewer
evaluations than it took Deep Blue to beat Kasparov is certainly very
exciting.

------
Noughmad
Sadly, the article doesn't mention what language the software is written in.

~~~
Inufu
C++ and Lua.

~~~
Noughmad
Thanks. I was hoping for the "Go" response, but since I'm working with Torch
I'm genuinely interested as well.

------
Buetol
I can only wish the source were released so that all the researchers working
on AI could also improve their algorithms. I understand that Google wants to
keep its edge, but it also slows down the whole field.

~~~
modeless
Accusing Google of slowing down the field is laughable. Google has opened up a
ton of AI stuff, in every category imaginable, from Inception (pre-trained
model), Street View House Numbers (giant dataset), gemmlowp (infrastructure
for fast matrix multiplies), deep dream (implementation of a specific deep
learning technique), all the way to TensorFlow (complete production-ready
neural net framework with examples and a free graduate-level university course
to go with it). And of course they're publishing open access papers for
everything in addition to the code releases. Furthermore, they're also
employing core contributors who continue to maintain important libraries like
Ceres Solver, Eigen, and Keras, just to name a few.

Google has even open sourced DeepMind's previous most impressive achievement,
the Atari player. Based on that track record it's probably only a matter of
time before the Go player is released. I'd expect it not long after the
upcoming match, if they win.

~~~
Buetol
Thanks, that's a very true and good reminder. I didn't know the Atari one was
released, I take back what I said.

------
quackerhacker
So does that mean companies can now have their AI programs compete against
each other in a Go match for the "title"?

------
bshanks
my summary (may be wrong): they create a convolutional neural network with 13
layers to select moves (given a game position, it outputs a probability
distribution over all legal moves, trying to assign higher probabilities to
better moves). They train the network on databases of expert matches, save a
copy of the trained network as 'SL' then train it further by playing it
against randomly selected previous iterations of itself. Then they use the
history of the move-selecting network playing against itself to generate a new
training set consisting of 30 million game positions and the outcome of that
game, with each of the 30 million positions coming from a separate game. They
use this training set to train a new convolutional neural network (with 13
layers again, i think) to appraise the value of a board position (given a
board position, it outputs a single scalar that attempts to predict the game
outcome of that board position).
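To make the value-network idea concrete: the training pairs are (position, final outcome), and the net regresses a scalar in [-1, 1] against that outcome. A toy sketch with a linear map standing in for the 13-layer conv net (all shapes, weights, and the 3-plane encoding here are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "value network": a linear map over 3 flattened 19x19 feature
# planes, squashed by tanh so the prediction lies in [-1, 1].
W = rng.normal(scale=0.01, size=3 * 19 * 19)

def value(position_planes: np.ndarray) -> float:
    return float(np.tanh(position_planes.reshape(-1) @ W))

def loss(position_planes: np.ndarray, outcome: float) -> float:
    """Squared error against the actual game outcome z in {-1, +1},
    taken from a self-play game that passed through this position."""
    return (value(position_planes) - outcome) ** 2

pos = rng.normal(size=(3, 19, 19))   # fake position encoding
print(-1.0 <= value(pos) <= 1.0)     # True: prediction is bounded
```

The point of drawing each of the 30 million positions from a separate game, as the summary notes, is to keep these training pairs close to independent; successive positions in one game would be highly correlated.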

They also train ANOTHER move-predicting classifier called the 'fast rollout'
policy; the reason for another one is that the fast rollout policy is supposed
to be very fast to run, unlike the neural nets. The fast rollout policy is a
linear softmax of small pattern features (move matches one or more response
features, move saves stone(s) from capture, move is 8-connected to previous
move, move matches nakade patterns at captured stone, move matches 12-point
diamond pattern near previous move, move matches 3x3 pattern around candidate
move). When a feature is "move matches some pattern", i don't understand if
they mean that "match any pattern" is the feature, or if each possible pattern
is its own feature; i suspect the latter, even though that's a zillion
features to compute. The feature weights of the fast rollout classifier are
trained on a database of expert games.
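The "linear softmax of small pattern features" is cheap because each candidate move just needs a dot product of binary features with learned weights. A sketch (the features and weights below are invented for illustration):

```python
import numpy as np

def rollout_policy(feature_matrix: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """feature_matrix: (num_legal_moves, num_features) binary matrix, one
    row per candidate move. Returns one probability per move via a linear
    softmax, as in a fast rollout policy."""
    logits = feature_matrix @ weights
    logits -= logits.max()           # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Three candidate moves, four binary features (e.g. "saves a stone from
# capture", "matches a 3x3 pattern", ...), illustrative weights.
features = np.array([[1, 0, 1, 0],
                     [0, 1, 0, 0],
                     [1, 1, 0, 1]], dtype=float)
weights = np.array([0.8, 0.5, -0.2, 1.1])
p = rollout_policy(features, weights)
print(p.argmax())  # the move firing the most positively-weighted features
```

Evaluating this for every legal move is orders of magnitude faster than a forward pass through a deep conv net, which is the whole point of the rollout policy.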

Now they will use three of those classifiers, the 'SL' neural network (the
saved network that tried to learn which move an expert would have made, before
further training against itself), and the board-position-value-predicting
network, plus the 'rollout' policy.

The next part, the Monte Carlo Tree Search combined with the neural networks,
is kinda complicated and i don't fully understand it, so the following is
likely to be wrong. The idea of Monte Carlo Tree Search is to estimate the
value of a board position by simulating all or part of game in which both
players in the simulation are running as their policy a classifier without
lookahead (eg within the rollout simulation, neither player does any lookahead
at each step); this simulation is (eventually) done many times and the results
are averaged together. Each time the Monte Carlo simulation is done, the
policy is updated.

In order to take one turn in the real game, the program does zillions of
iterations; in each iteration, it simulates a game-within-a-game:

It simulates a game where the players use the current policy, which is
represented as a tree of game states whose root is the current actual game
state, whose edges are potential moves, and whose nodes or edges are labeled
with the current policy's estimated values for game states (plus a factor
encouraging exploration of unexplored or underexplored board states).
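The "estimated values ... plus a factor encouraging exploration" amounts to a PUCT-style selection rule; a sketch (the constant and the per-move statistics are made up):

```python
import math

def select_move(stats: dict, c_puct: float = 5.0) -> str:
    """Pick the child maximizing Q + U, where U is an exploration bonus
    proportional to prior / (1 + visit count), PUCT-style.
    `stats` maps move -> (total_value, visits, prior probability)."""
    total_visits = sum(v for _, v, _ in stats.values())
    best_move, best_score = None, -float("inf")
    for move, (total_value, visits, prior) in stats.items():
        q = total_value / visits if visits else 0.0       # mean action value
        u = c_puct * prior * math.sqrt(total_visits) / (1 + visits)
        if q + u > best_score:
            best_move, best_score = move, q + u
    return best_move

stats = {"A": (3.0, 10, 0.2),   # well explored, decent value
         "B": (0.0, 0, 0.6),    # unvisited but high prior
         "C": (1.0, 5, 0.2)}
print(select_move(stats))  # B: the bonus favors the unvisited high prior
```

The bonus term shrinks as a move accumulates visits, so the search gradually shifts from the policy network's suggestions toward moves that are empirically doing well.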

When the simulation has visited the parent of a 'leaf node' (a game state
which has not yet been analyzed but which is a child of a node which is not a
leaf node) more than some threshold, the leaf node is added to a queue for an
asynchronous process to 'expand the leaf node' (analyze it) (the visit-count-
before-expansion threshold is adaptively adjusted to keep the queue short).
This process estimates the value of the leaf node via a linear combination of
(a) the board-position-value-predicting network's output and (b) the outcome
of running a simulation of the rest of the game (a game within a game within a
game) with both players using the 'fast rollout' policy. Then, the SL neural
network is used to give initial estimates of the value of each move from that
board position (because you only have to run SL once to get an estimate for
all possible moves from that board position, whereas it would take a long time
to recurse into each of the many possible successor board positions and run
the board-position-value-predicting network for each of these).

Because the expansion of a leaf node (including running SL) is asynchronous,
in the meantime the node is 'expanded' and a 'tree policy' is used to give a
quick estimate of the value of each possible move from the leaf node board
state. The tree policy is like the quick rollout policy but with a few more
features (move allows stones to be captured, manhattan distance to two
previous moves, Move matches 12-point diamond pattern centered around
candidate move).

At the end of each iteration, the action values of all (non-leaf) nodes
visited are updated, and a 'visit count' for each of these nodes is updated.

At the end of all of these iterations, the program actually plays the move
that had the maximum visit count in the Monte Carlo tree search ("this is
less sensitive to outliers than maximizing action-value").

some more details:

    
    
        During monte carlo tree search, they also use a heuristic called 'last good reply' which is sorta similar to caching.
        the move-predicting networks are for the most part just fed the board state as input, but they also get a computed feature "the outcome of a ladder search"
        because Go is symmetric w/r/t rotations of the board, the move-predicting networks are wrapped by a procedure that either randomly selects a rotation, or runs them for all rotations and averages the results (depending on whether or not the network is being used for monte carlo tree search or not)

------
baq
honestly i didn't expect this news in this decade. well done.

------
hmate9
Poker next please!

------
interdrift
Why is this so important? How does it get us closer to general AI?

------
thinkdoge
How far away is writing AI for Dota/SC2?

------
fiatjaf
My only question is: why are you doing this? You're taking out the fun of the
game just to prove yourselves?

------
race2tb
Clever, but nothing to do with general intelligence.

~~~
umanwizard
What makes you think so? Developing intuition through pattern recognition has
a lot to do with human-like general intelligence.

This is way different from chess programs, which work in a way that's much
closer to brute force than human-like pattern recognition.

~~~
argonaut
Brute force statistical inference over millions of examples is an advance, but
it is an advance that falls within the same paradigm of machine learning that
we've had for decades. IMO, it will really take a paradigm shift to move into
general intelligence.

------
alexandercrohde
Incredibly misleading title (by the BBC). Specifically, "champion" refers to a
regional champion who isn't at professional level but rather amateur dan.

~~~
1024core
He's the European Champion:
[https://www.facebook.com/egc2015cz/posts/906549689415462](https://www.facebook.com/egc2015cz/posts/906549689415462)

