AIs Have Mastered Chess. Will Go Be Next? (2014) (ieee.org)
76 points by apsec112 on Mar 15, 2016 | 55 comments

Related, from 2012: "Computers are very good at the game of Go" (from the perspective of an amateur Go player): http://blog.printf.net/articles/2012/02/23/computers-are-ver...

Wow, based on this graph from the article, it's not surprising that Go computers are competitive with top humans now:


Fwiw, the point where the Go curve massively changes slope is when Monte-Carlo Tree Search (MCTS) began to be used in its modern form. I think that's been an underreported part of AlphaGo's success: deep networks get the lion's share of the press, but AlphaGo is a hybrid deep-learning / MCTS system, and MCTS is arguably the most important of the algorithmic breakthroughs that led to computer Go being able to reach expert human level strength.
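For readers unfamiliar with MCTS, here is a minimal UCT-style sketch in Python. It assumes a hypothetical game-state object exposing `legal_moves()`, `play(move)` (returning a new state), `is_terminal()`, and `result()` giving a reward in [0, 1]; those method names are illustrative, not from any real library, and real engines (AlphaGo included) add far more, e.g. policy/value networks to guide selection and replace the random playouts.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children = []
        # Moves not yet expanded into child nodes (empty at terminal states).
        self.untried = [] if state.is_terminal() else list(state.legal_moves())
        self.visits, self.wins = 0, 0.0

    def uct_child(self, c=1.4):
        # Pick the child maximizing UCT: average reward plus an exploration bonus.
        return max(self.children, key=lambda ch:
                   ch.wins / ch.visits + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCT.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one child for a random untried move.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.state.play(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from here to the end of the game.
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(state.legal_moves()))
        reward = state.result()
        # 4. Backpropagation: update visit/win statistics back up to the root.
        while node is not None:
            node.visits += 1
            node.wins += reward
            node = node.parent
    # Play the most-visited move at the root.
    return max(root.children, key=lambda ch: ch.visits).move
```

This sketch scores rewards from a single perspective for brevity; a real two-player implementation alternates the reward's sign during backpropagation.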

I don't think that graph makes much sense.

The labels "Grand Master", "Master", "average club player" apply to Chess, not Baduk; and the vertical dimension in the graph seems arbitrarily chosen.

I think the scaling is roughly reasonable. In chess, National Master is Elo 2200–2399, and above that is Grand Master, with lower Elo ratings covering skilled and club players. In Go, 30–1 kyu is amateur (club player through master), 1–7 dan is advanced amateur (basically master, based on there being few in this category), and 1–9p is grand master. Based on that, the graph accurately portrays the rise of Go AIs up to 2011:

  Year  KGS Rank
  2009  1-dan
  2010  3-dan
  2011  4-dan
  2012  5-dan

The scaling is not exact, but I think it conveys the basic facts correctly: by 2011/2012 there were Go programs that could play better than the average club player.

And actually, it looks like the author of the article more or less realized that:

> Schaeffer now finds it plausible that a computer will beat Go’s grand masters soon

Yeah, I find it a bit annoying when people say experts didn't expect computers to win at go for decades. Anyone who'd looked at that graph and extrapolated would have estimated some time around now. And if they didn't look at that stuff they probably were not very expert.

One aspect that I miss in these conversations is energy use. The brain is certainly an energy-hungry organ, but it is still quite small compared to, say, IBM's Watson. In this particular case I agree with Chomsky: Watson is about as impressive as a huge steam roller. I am positively impressed by Watson, but I want to be impressed on different axes as well.

I am curious whether a program running on an energy-limited device would be able to beat a high-ranked chess grandmaster in a standard competition setting (by that I mean the same rules about time, etc.). My hunch is that the answer is still a resounding no.

> My hunch is that the answer is still a resounding no

Not only is the answer yes, but chess engines have been getting better over time as new algorithmic advancements have been discovered, such that a smartphone running a modern engine can beat a full-fledged multi-core PC running an older engine.

Note that the Elo for the smartphone used in this example is estimated at about 3000 vs the 2851 of Magnus Carlsen.


Indeed, and it's been a pleasant surprise to learn about the state of the art as of now.

First we have to invent a computer - any computer, no matter how big - that can actually replicate the intelligence of a human. Then we can start making it more efficient, smaller, and so on, which is the history of computers.

My guess is it will be a few more decades, probably at least three, before we're able to do that.

I tried to talk about this before in pre-ghost-town LessWrong, but the commenters didn't seem to like it: < http://lesswrong.com/lw/7n1/watts_son/ >.

What happened to lesswrong?

A chess grandmaster can easily be beaten by a smartphone app (Komodo). Is the smartphone using more energy than a human brain?

The problem with Go is that there's little data and the game is more complex. But given the small sizes of the DeepMind NNs, if they generate billions of meaningful games, they could probably compress their model and require less searching, and thus less energy.

An adult male needs around 2500 kcal per day. This works out to 2906 Wh. The battery in a Nexus 5 is around 8 Wh, for comparison. The question then becomes: how many hours would the battery last running the Komodo 9 engine full-out?

That would be an interesting matchup. I bet a grandmaster could play slow enough to kill the battery on the phone. A human could go a day without eating no problem. There's no way the phone could go a day without charging while running 100% CPU.

A normal chess match lasts about 4 hours. Dividing the day's energy proportionally gives roughly 400 Wh. I remember a factoid that the brain consumes about 20% of the body's energy at rest (less while running, more while thinking?), so if we cut the unnecessary stuff [0], that's about 80 Wh, i.e. 10 batteries. The phone is probably more efficient.

[0] Don't try this at home.
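As a sanity check, here is the arithmetic from the two comments above as a quick Python sketch. All inputs are the rough estimates quoted in the thread; without rounding, the figures come out slightly higher than the ~400 Wh / 10 batteries above.

```python
KCAL_TO_WH = 4184 / 3600          # 1 kcal = 4184 J, 1 Wh = 3600 J

daily_intake_wh = 2500 * KCAL_TO_WH          # ~2906 Wh for 2500 kcal/day
match_share_wh = daily_intake_wh * 4 / 24    # 4-hour match, pro-rated: ~484 Wh
brain_match_wh = match_share_wh * 0.20       # brain at ~20% of body energy: ~97 Wh
batteries = brain_match_wh / 8               # 8 Wh Nexus 5 battery: ~12 batteries

print(f"{daily_intake_wh:.0f} Wh/day, {brain_match_wh:.0f} Wh/match, "
      f"{batteries:.1f} batteries")
```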

So 10 batteries... how long do you think the engine could last on one battery? Then we multiply that number by 10 and see if it's less than 4 hours... hmmm

Why on earth would the phone be running the chess engine 24/7? It only has to run the engine when it needs to select its next move.

Traditional chess applications are "thinking" while waiting for you to move (known as pondering). That's not to say they can't sleep while waiting for your move, but that would delay the game tempo and possibly (slightly) limit the overall competency of the chess AI.

Not slightly, greatly.

Speculative move generation by a chess engine on the opponent's time is a huge part of its overall strength, especially when the engine can see a forced move, thereby collapsing the tree.

> chess grandmaster can easily be beat by a smartphone app (komodo), is smartphone using more energy than a human brain?

I would think a smartphone consumes less power than the brain (more so if you can turn off the display appropriately). I could be wrong, though. BTW, does all of the Komodo logic reside on the client side, or does some of it execute on an off-client backend?

The logic is on the phone.

Meh, if you're scraping the bottom of that barrel, you can turn that around and talk about development time. Things always get smaller and more efficient with computers, and they've only been with us for 80-odd years. Humans are a much more mature 'technology', sure, but they've been around for tens of thousands of years.

Now that AlphaGo has mastered Go, this article raises the question: where does AI go from here? Are there other high-profile games to defeat, or has competition-based AI run out of opponents?

This question must have arisen within Google (and Facebook). Certainly IBM will tell you (under duress) that neither chess nor Watson were the game changers they had hoped, at least commercially.

If you're Larry Page, what do you do with DeepMind now?

If I'm Larry Page and I really want to get great press for DeepMind, I use it to combat spam and phishing.

Specifically, you code it to engage the people on the other side of those scams and waste their time with false positives that they believe to be real victims hooked by the system. The scammers are running a business too, and everything depends on ROI. If they have to invest more and more time, the returns stop being worth it.

If you can teach an AI to do that, you can fix one of the web's biggest nuisances.

Gmail does a pretty good job at spam detection. Without going out of my way, I haven't seen a spam email in years.

I rarely see spam as well. I have had quite a few false positives though.

That's brilliant!

I wonder what the effort would be to take an existing Turing test bot and apply it to my spam folder.

Of course, once this technique is used, the spammers will be forced to deploy their own AIs to deal with the increase in response rates. Which, of course, will come to be called our first AI War. :P

This would actually be great for the field.

That's a really nice application of AI! I find it especially interesting because it would work actively against one group of people to improve the lives of others. I think that we can all agree that spam is bad, but still... it has very interesting implications.

Chess and Go were fascinating games for AI because they appear so mathematically precise and calculation dependent you'd think computers would easily win like they would at an arithmetic contest. It's really a testament to human intelligence that they held up as well as they did.

Larry is basically letting Demis do his thing. Some things Demis is working on:

Starcraft-like 3D worlds https://www.youtube.com/watch?v=xC5ZtPazvF0&feature=youtu.be...

Imagination and the hippocampus https://www.youtube.com/watch?v=0X-NdPtFKq0&feature=youtu.be...

And longer term he's said he'd like to use AI for science research like going through CERN's petabytes of data looking for new physics.

> where does AI go from here? Are there other high profile games to defeat, or has competition based AI run out of opponents?

Not sure about "high profile" but there are a lot of other abstract strategy games out there, which might be candidates:


But what a lot of people seem to be talking about is AI vs human in something like Starcraft. My personal hope is that researchers working on this kind of video game playing AI will use an Open Source game though, so it will be easier for others to follow along, participate, etc.

> If you're Larry Page, what do you do with DeepMind now?

Good question, because it's hard to know exactly how well this generalizes to other domain(s). That would probably be one of the first questions I'd start looking into.

Jaap van den Herik (https://chessprogramming.wikispaces.com/Jaap+van+den+Herik) predicts that, within 20 years, a computer will beat humans at playing Diplomacy.

I think that is a quite interesting challenge, but high profile? Too few people know that game.

Also, measuring that the computer is better will be problematic, as humans (or, for that matter, computers) could conspire to let one of their own ilk win.

Poker? Daily sports betting sites? Stock markets?

AlphaGo has not "mastered" go, given that human opponents are still sometimes able to beat it.

Depends on your definition. I think under the common meaning of the word, to be a master doesn't mean you're undefeatable, just that you're quite good.

I hope starcraft is next!


Interesting for no mention at all of machine learning, beyond "To date, there have been no truly successful approaches to machine learning in this sphere" ["general game playing" rather than specific games like checkers]. Deep learning was already red hot, and on my radar, yet I wouldn't have thought this a strange omission at the time.

It depends on what is meant by machine learning. If it's referring to learning games from first principles, then it's certainly true for the time. As far as I know the first AI to be able to learn multiple games from first principles was demonstrated in 2015:


AlphaGo wouldn't fit in this mould, as it has been taught how to win at Go rather than working it out from the basic rules, so AI hasn't 'beaten' Go fully, but AlphaGo is a great achievement nonetheless.

general video game AI competition: http://www.gvgai.net/

It must be exciting for pros now that computers have caught up. It is always fun to play somebody better than you at whatever game, as it pushes you to improve, and if you're a top-level pro, opportunities for that seem few and far between. Having a computer opponent always available to try ideas on and help you train would be really good.

I just hope that somebody will try their hand at writing software around the engine that dumbs down its play in a "human way", so that beginners and intermediate players benefit too. I wonder whether, due to the way it works, it is possible to do this better than in chess, where engines that play like a convincing strong human amateur don't really seem to exist yet.

Michael Redmond was talking on the broadcast last night about his childhood and attempting to learn to play Go. He lived in Santa Barbara but nobody near him played Go whatsoever, so he would have to drive hours to L.A. to Koreatown / Japantown when they held tournaments. He also had to regularly fly to NYC to play.

All of the difficulty in finding regular matches is what prompted him to move to Japan and become an 'apprentice' (apologies if this isn't the correct terminology). He eventually became extremely proficient and is a 9-dan player now.

One wonders how good he might have become if he had access to the internet to play against other strong players from a young age. One further wonders how good he might have become if he could play against an AI of Alphago's capability.

That's a great point. I think all the professionals are eager to play against AlphaGo, and they're even willing to pay to do so.

With the game of Go, two players with drastically different abilities can still play together thanks to the handicap system: the weaker player gets stones already placed on the board at the beginning. This means that if AlphaGo is 9-dan professional level, it can give several handicap stones to advanced amateur players and still make for a fun game.

Also, there are lots of weaker Go AI that beginners and intermediate players can play against already.

This kind of article is interesting, revisiting predictions.

Where are all the "Betteridge's law of headlines" fanboys now?

They could claim that some other game mastered after this 2014 article was "next".

For instance:



Betteridge's law of headlines is intended as a humorous adage rather than always being literally true.

Well, I think at that time they thought the answer was 'no'.

Is war fighting next?

I have a hard time being impressed by the win. What's the major difference for writing a go AI versus checkers? Doesn't it really come down to the amount of "if/else" that can be processed in time? To me this is a bigger feat for enclosed, non-networked processing than for AI.

Some things are not as simple as playing out every possible solution and picking the optimal one. Art and humor, for example. It takes intelligence to operate and judge subjectively, not compute power.

The difference with Go is the sheer size of the game tree. There are too many possible moves to calculate every action in "if/else" statements.

The AI Google created uses a combination of different learning algorithms (supervising it by teaching known Go patterns and playing it against itself millions of times to learn new strategies).

Checkers is a solved game. There is a best move for each play and we just needed to build/process that giant decision tree. Go was a completely different beast, so it's pretty significant we found an algorithm that can beat a world class player.

> Some things are not as simple as playing out every possible solution and picking the optimal one.

That's the point. The branching factor of go is so high that with the current trend in hardware improvements it is impossible to play out every single solution.
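To put rough numbers on that, the usual back-of-the-envelope comparison multiplies the average branching factor b over the typical game length d. The figures below are the commonly cited rough estimates, not exact values:

```python
import math

def tree_size_log10(branching, plies):
    """Approximate log10 of branching**plies, the size of the full game tree."""
    return plies * math.log10(branching)

# Chess: ~35 legal moves per position, games of ~80 plies.
# Go:   ~250 legal moves per position, games of ~150 plies.
print(f"chess: ~10^{int(tree_size_log10(35, 80))} games")    # ~10^123
print(f"go:    ~10^{int(tree_size_log10(250, 150))} games")  # ~10^359
```

For comparison, there are only around 10^80 atoms in the observable universe, which is why exhaustively enumerating Go positions with "if/else" logic is hopeless.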
