AlphaGo beats Lee Sedol again in match 2 of 5 (gogameguru.com)
942 points by pzs on March 10, 2016 | 555 comments

As someone who studied AI in college and is a reasonably good amateur player, I have been following the matches between Lee and AlphaGo.

AlphaGo plays some unusual moves that clearly go against the instincts of any classically trained Go player. Moves that simply don't quite fit into the current theories of Go, and the world's top players are struggling to explain the purpose/strategy behind them.

I've been giving it some thought. When I was learning to play Go as a teenager in China, I followed a fairly standard, classical learning path. First I learned the rules, then progressively the more abstract theories and tactics. Many of these theories, as I see them now, draw analogies from the physical world and are used as tools to hide the underlying complexity (chunking), enabling players to think at a higher level.

For example, we're taught to consider connected stones as one unit, and to give this unit attributes like dead, alive, strong, weak, or projecting influence into the surrounding area. In other words, much like a standalone army unit.

These abstractions all make a lot of sense, feel natural, and certainly help game play -- no player can consider the dozens (sometimes over 100) of stones all as individuals and come up with a coherent game plan. Chunking is such a natural and useful way of thinking.

But watching AlphaGo, I am not sure that's how it thinks of the game. Maybe it simply doesn't do chunking at all, or maybe it does chunking its own way, not influenced by the physical world as we humans invariably do. AlphaGo's moves are sometimes strange, and couldn't be explained by the way humans chunk the game.

It's both exciting and eerie. It's like another intelligent species opening up a new way of looking at the world (at least for this very specific domain), and much to our surprise, it's a new way that's more powerful than ours.

> It's both exciting and eerie. It's like another intelligent species opening up a new way of looking at the world (at least for this very specific domain), and much to our surprise, it's a new way that's more powerful than ours.

I have been watching Myungwan Kim's commentary for the games - and it seems notable that a few moves he finds very peculiar when they are made, he will later point to as achieving very good results some 20 moves later. So it seems quite possible that AlphaGo is actually reading that far ahead, finding that those peculiar moves achieve better results than the more standard approaches.

Whether these constitute a 'new way' or not depends, I think, highly on whether these kinds of moves can fit into some general heuristics useful for considering positions, or whether the ability to make them is limited to intelligences with extremely high computational power for reading ahead.

> he will later point to as achieving very good results some 20 moves later

This. It's a fairly common feature of any AI that uses some form of tree search/minimax, and the effect is very pronounced in chess. Even the best human players can only think 6-8 plies into the future versus ~18 for a computer. What we can (could?) do is apply smarter evaluation functions to the board states resulting from candidate plays and stop considering moves that look problematic earlier in the search (game tree pruning). AI tends to use very simple evaluation functions that can be computed quickly. They do so given that 1) it allows for deeper search, and a weak heuristic evaluated far in the future often beats a strong one evaluated a few plies prior, and 2) for some games (like Go) it's really hard to codify the "intuitions" that human players speak of.

Because search based AI considers board states __very__ far in the future, the results are often completely counterintuitive in a game with an established theory of play. Those theories are born of humans, for humans.
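To make the depth-vs-evaluation tradeoff concrete, here's a minimal sketch of depth-limited minimax (negamax form) on a toy take-away game: 21 counters, remove 1-3 per turn, and whoever takes the last counter wins. Everything here, including the deliberately weak `evaluate` heuristic, is invented for illustration:

```python
LOSS = -1.0  # value of a position where the player to move has already lost

def evaluate(counters):
    """Cheap heuristic guess, from the side to move's point of view.
    Under perfect play the mover wins iff counters % 4 != 0; this guess
    is deliberately weaker, the way a real eval only approximates chess."""
    return 0.5 if counters % 2 else -0.5

def minimax(counters, depth):
    """Negamax: score of the position for the player to move."""
    if counters == 0:
        return LOSS                 # opponent took the last counter
    if depth == 0:
        return evaluate(counters)   # horizon reached: fall back to heuristic
    # My score is the negation of the opponent's best reply.
    return max(-minimax(counters - take, depth - 1)
               for take in (1, 2, 3) if take <= counters)

def best_move(counters, depth):
    """Pick the move leading to the best score for us."""
    return max((t for t in (1, 2, 3) if t <= counters),
               key=lambda t: -minimax(counters - t, depth - 1))
```

With enough depth the search is exhaustive and plays perfectly (from 21 it takes 1, leaving a multiple of 4); with a shallow depth it leans on the weak heuristic at the horizon, which is exactly where the counterintuitive-looking moves come from.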

The introduction of MCTS some years back was the first leap towards a human level Go AI (incidentally, MCTS is more human-like than exhaustive tree search in that it prunes aggressively by making early judgement calls as to what merits further consideration). AlphaGo's use of deep policy and evaluation networks to score the board is very cool, and the next step in that journey. What's interesting to me is that, unlike chess AI, AlphaGo might actually advance the human theory of Go. It's possible that these "strange moves" will lead to some very interesting insights if DeepMind traces them through the eval and policy networks and manages to back out a more general theory of play.

Wasn't the breakthrough with AlphaGo that it doesn't consider every future board combination? Because there are too many combinations?

Yes, but pruning (not considering everything) is as old as game tree search. Previous Go AIs used MCTS as well. What's new in AlphaGo is a more sophisticated approach to scoring game boards - policy networks that help the AI prune even more aggressively, and a value network that's used to "guess" the winner in lieu of searching to endgame. Note that guessing the winner is just a special case of an evaluation function. For any game, if you could consistently search to the end, your evaluation function is always a -1/1 corresponding to lose/win. AlphaGo is still using MCTS - just a more sophisticated form.
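For anyone curious about the structure being described, here's a toy UCT (the standard form of MCTS) sketch on a trivial take-away game (remove 1-3 counters, taking the last one wins). The random `rollout` is the part AlphaGo replaces with its value network, and the uniform choice in the expansion step is where a policy network would bias the search; all names here are illustrative, not AlphaGo's actual code:

```python
import math
import random

class Node:
    def __init__(self, counters, parent=None, move=None):
        self.counters, self.parent, self.move = counters, parent, move
        self.children = []
        self.wins = 0.0   # rollout wins for the player who moved INTO this node
        self.visits = 0

    def untried(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2, 3) if m <= self.counters and m not in tried]

def rollout(counters):
    """Play uniformly random moves to the end. Returns +1 if the side to
    move at the start of the rollout wins, else -1. This is the part a
    value network replaces with a learned guess."""
    moves = 0
    while counters:
        counters -= random.choice([m for m in (1, 2, 3) if m <= counters])
        moves += 1
    return 1 if moves % 2 else -1   # odd number of moves: the starter took last

def uct_search(counters, iters=5000, c=1.4):
    root = Node(counters)
    for _ in range(iters):
        node = root
        # 1. Selection: descend by UCB1 until a node still has untried moves.
        while not node.untried() and node.children:
            node = max(node.children, key=lambda ch:
                       ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion (a policy network would bias this choice).
        if node.untried():
            m = random.choice(node.untried())
            child = Node(node.counters - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation.
        result = rollout(node.counters)  # +1 = side to move at `node` wins
        # 4. Backpropagation: flip perspective at every ply.
        while node is not None:
            node.visits += 1
            node.wins += (1 - result) / 2  # win for the player who moved in
            result = -result
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

Note how the UCB1 formula in the selection step is what "prunes" softly: promising branches get visited far more often than bad ones, without any branch being discarded outright.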

On the contrary.

I think that Chess machines play perfectly for the next 8 moves, but don't necessarily sense the importance of a Knight Outpost (which may have relevance 20 moves ahead. A proper Knight Outpost will remain a fork threat for the rest of the game).

It is far easier for a Human to beat a Chess Machine at positional play (ex: a backwards pawn shape will probably be a problem at endgame, 30+ moves from now) than to beat a Chess Machine at tactical play (3 moves from now, I can force a fork between two minor pieces)

This was true 10-15 years ago. It is no longer true. Chess engines have positional evaluation algorithms that have been trained using many millions of games, and the weighting parameters for different kinds of positional features have been adjusted accordingly.

Do some reading on Stockfish for example if you doubt the veracity of my statement.

Yes, I do realize that.

But it's just as you say: it's weighting parameters and heuristics. When Stockfish recognizes a backward pawn, it deducts a point value. When Stockfish recognizes "pawn on 6th rank", it adds a point value to that pawn.

But that's a heuristic. A heuristic trained on games, but it still comes down to what I understand to be a +/- point value (like +35 centipawns).
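A toy version of the kind of linear, feature-counting evaluation being described; the features and centipawn weights are invented for illustration (real engines use far more features, with machine-tuned weights):

```python
# Classic piece values in centipawns (a pawn = 100).
CENTIPAWNS = {"pawn": 100, "knight": 320, "bishop": 330,
              "rook": 500, "queen": 900}

# Made-up positional features and weights, for illustration only.
POSITIONAL = {
    "backward_pawn":    -25,  # structural weakness felt much later
    "pawn_on_6th_rank": +35,  # far advanced, close to promotion
    "knight_outpost":   +30,  # can't be chased off by enemy pawns
}

def centipawn_eval(material, features):
    """Score in centipawns from White's point of view. Both arguments map
    a name to a (white_count, black_count) pair."""
    score = 0
    for piece, (w, b) in material.items():
        score += CENTIPAWNS[piece] * (w - b)
    for feat, (w, b) in features.items():
        score += POSITIONAL[feat] * (w - b)
    return score
```

So being up a rook but saddled with a backward pawn might come out to 500 - 25 = +475 centipawns: a single number standing in for everything the engine "understands" positionally.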

In contrast, a chess engine truly knows that if you make move X, it will force a rook/minor piece exchange in 8 moves.

When you play positionally vs Stockfish, you're arguing with a heuristic (a heuristic which has been refined over many cycles of machine learning, but a heuristic nonetheless that comes down to "+/- centipawns"). When you play tactically vs Stockfish, it is evaluating positions more than a dozen moves ahead of what is humanly possible.

When you play against Stockfish in endgame tablebase mode, it plays utterly, and provably, perfectly.

Take your pick of which game you want to play against it. IMO, I'd bet on its positional "weakness" (yes, it is still very strong at positional play, but it is the most "heuristical" part of the engine).

If this is true, why do computers regularly and consistently defeat even the best humans in full games of chess that last dozens of moves?

Because they beat humans at tactical play, like he said. Just because something has a weakness, doesn't mean it isn't better.

It seems that you are trying to coin a new word to describe this new way of looking at the world. If humans are able to decode the information contained in those unexpected moves, perhaps by creating a new heuristic, that could be viewed as a way of understanding the features the machine uses internally -- that is, reading the machine's brain. If humans can decode that information by creating new heuristics, we could say that we are at a new stage in AI, in which learning between different intelligent species should be studied.

It's also possible that the positional evaluation is strong enough that AlphaGo can see the value in a position before the human because of the complexity involved in determining the "value" of a given position.

My experience is with Chess and Chess AI, but in my experience, the more positional knowledge built into the evaluation function, the better the search performs, even if you have to sacrifice some speed for more thorough evaluation. A significant positional weakness may never be discovered within the search horizon of a chess engine because it may take 50 moves for the weakness to create a material loss, so while it's certainly possible that a deep, but carefully pruned search is being utilized, I suspect that some of the Value Network's evaluation is helping to create some of these seemingly odd moves.

For AlphaGo to recognize a position that doesn't achieve a good result for 20 moves, it would often have to search much deeper than those 20 moves (I'm not sure if you're using the term moves to mean ply or both players moving, but if it takes 20 AlphaGo moves for the advantage to materialize, that would be a minimum 40 ply search) to quiesce the search to the point that material exchanges have stopped (again, this is how chess typically does it, I don't know about Go), so the evaluation at the end of the 20 move sequence is arguably more important than a deep search. The sooner you can recognize that a position is good or bad for you, the more time you have to improve the position.

I would imagine it's absolutely thinking that far ahead. That said, it can't possibly search every possible solution; it just needs to find an adequate one.

There's also the fact that some of the unexpected moves were apparently more about solidifying against a loss than increasing the magnitude of a win. Which has its own kind of eerie implication: since AIs (like all computer programs) do what you say, not what you mean, the "intelligent species" can sometimes work really intelligently towards a goal that wasn't quite what you had in mind. (Gets especially interesting for any AlphaHuman/AlphaCEO/AlphaPresident successors that are given goals more complicated & nuanced than "maximize Go win probability regardless of ending score". BTW, if you haven't already read the Wait But Why series on the future of AI, I recommend it: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...)

About a year ago I wrote an AI to play the board game "Hive" (shares some similarities with chess). Because I scored all wins equally, it behaved almost exactly like this. It would simply try to minimize my advantage while always keeping open the possibility for it to win, almost like a cat toying with prey. It never actually would make the winning move – however obvious – until it had no other options!

I fixed this behavior by scoring earlier wins higher than later wins. Now it will actually finish games (and win), but almost invariably its edge is very small, no matter how well or poorly I play. Because of the new win scoring, it willingly sacrifices its own advantage if it means securing a win even one turn earlier. (And since scoring is symmetrical, this has the added advantage of working to delay any win it sees for me, thus increasing the possibility of me making a mistake!)
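The fix described above can be sketched as code: score a win as a large constant minus the distance from the root, so the search prefers the quickest win and, symmetrically, the slowest loss. The tiny hand-built game tree below is invented purely to show the tie-break:

```python
MATE = 1000  # any score larger than the longest possible game

# A hand-built game tree: each node maps move name -> child node;
# "W" is a terminal position won by the player who just moved.
TREE = {
    "root":  {"quick": "W",        # win in 1 ply
              "slow":  "slow1"},   # also a forced win, but in 3 plies
    "slow1": {"only": "slow2"},
    "slow2": {"a": "W", "b": "W"},
}

def search(node, ply):
    """Negamax score for the side to move at `node`."""
    if node == "W":
        # The previous player just won; for the mover this is a loss.
        # Subtracting `ply` makes near wins outrank distant ones.
        return -(MATE - ply)
    return max(-search(child, ply + 1) for child in TREE[node].values())

def best_move(node):
    return max(TREE[node], key=lambda m: -search(TREE[node][m], 1))
```

Here `best_move("root")` picks "quick" (score 999) over "slow" (997); with a flat +/-MATE score both lines would be indistinguishable and the choice arbitrary, which is exactly the cat-toying-with-prey behavior described above.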

I suppose I could try modifying the scoring rules again, to weight them by positional advantage. A "show off" mode if you like :) And again, with the flip side of working to create the least humiliating losses for itself.

In go, the purpose is to have more territory than the opponent. There is no point in humiliating the opponent with a big advantage. I think the aim of the strange moves was to increase the program's confidence in its advantage, not to increase the advantage itself.

Sorry, I didn't mean the intent would be to humiliate, just the appearance.

Humans, I think, have the natural instinct to "hedge" themselves in games like go and chess, by creating positional/material advantages now to offset unknowns later. Of course, that advantage becomes useless in the end game, when all that matters is the binary win/lose.

An AI, which may have a deeper/broader view of the game tree than its human opponent (despite evaluating individual position strength in roughly the same manner), may see less of a need to "hedge" now, and instead spend moves creating more of a guaranteed advantage later (as you suggest). And indeed, my experience with my AI is that during the endgame (in which an AI generally knows with certainty the eventual outcome of each of its moves), it tends to retain the smallest advantage possible to win, preferring instead to spend moves to win sooner.

> Humans, I think, have the natural instinct to "hedge" themselves in games like go and chess, by creating positional/material advantages now to offset unknowns later. Of course, that advantage becomes useless in the end game, when all that matters is the binary win/lose.

That's actually an excellent way to win chess games. Keep your eye on the mate while the other person is focusing on position and material.

> I think the aim of the strange moves was to increase the program's confidence in its advantage, not to increase the advantage itself.

Absolutely. Also worth noting that it may be simply unable to distinguish between good and bad moves if both outcomes lead to a win, since it has no conception of the margin of victory being important.

So it might not be that it increased win probability, but that both paths led to 100% win probability and it started playing "stupidly" due to lacking a score-maximizing bias.

But you could indeed humiliate the opponent by capturing ALL of his stones. That won't happen, though, if the opponent knows at least the basic concepts ... Still, if you play well, you cover a lot of ground while suppressing the enemy's area and even crushing them. But classic go is nice in that it gives a weaker opponent a head start of some handicap stones, so the game is balanced and domination usually won't happen ...

My brother once played the (then) British Youth Go Champion on a 13x13 board, and lost by around 180 points - literally scoring worse than if he hadn't played at all.

> It never actually would make the winning move – however obvious – until it had no other options!

I'm confused. Why would 'make the winning move' not be the way to maximise probability of winning?

The AI is based on the minimax algorithm [1]. Because of the way Minimax works, the only way for a possible next move to be designated a "win" is if it is a guaranteed win. (The tree is (effectively) fully explored, and the opponent is given the benefit of the doubt in the face of incomplete information.) So, if there are multiple such winning moves, and care is not taken to distinguish the "magnitude" of the win, the AI will choose one arbitrarily.

I suppose that, in Hive, it is more likely that a path to a win is longer rather than shorter. Hence, when my AI was arbitrarily choosing "winning" moves, it statistically chose those that drew the game out.

[1] https://en.wikipedia.org/wiki/Minimax

But once you have guaranteed winning moves, why not pick the shortest one available (in terms of turns)?

Yes, that's what I did after I found the design flaw which effectively threw that information away.

Usually it's because as it searches the move tree, it finds ways for the opponent to maximize their own winning probability and so has to hedge against that. In minimax games sometimes the evaluator finds a long chain of moves that leads to a win, and once it finds that, doesn't necessarily bother trying to find a shorter one. It can be frustrating to tune that out.

That happens if the winning probability of the other move is 1 as well.

Maybe he's defining a "winning move" as something with a >50% chance of winning.

Thank you for this.

Your post should be required reading in this discussion.

People forget how literal computers are.

> There's also the fact that some of the unexpected moves were apparently more about solidifying against a loss than increasing the magnitude of a win.

Humans play that way too. Everyone wants to maximize the chance of leading by >=1 stone. The difference is that AlphaGo is better at calculating a precise value of a position, so that when uncertainty plays in, AlphaGo can play for, say, a "1-3 stone lead", while a human can only get confidence in a "1-7 stone lead", and thus needs to play excessively aggressively to overcome the uncertainty.

> the "intelligent species" can sometimes work really intelligently towards a goal that wasn't quite what you had in mind.

That's called programming

Right. Skynet and Terminator are science fiction, but the slippery, unpredictable reality of how computers actually behave is right in front of your eyes as a programmer every day. Sometimes I wonder if science fiction writers do more harm than good: once they make a movie about some possible future, people feel free to dismiss it as "just science fiction", even if they have easily available empirical evidence that something vaguely like the scenarios described actually kinda has the potential to occur.

Not unlike the Simpsons episode where the military school graduation speech tells them the wars of the future will be fought with robots and that their jobs will be to maintain those robots.

That's unlikely. This could only happen if two wealthy and highly developed nations wanted to make a spectacle out of a war.

If you have fully autonomous robots which can fight your war, you'd be able to launch a massive offensive within hours. Properly mobilizing defenses and responding to that invasion would take too long, as any command centers would already have been wiped out by the first attack.

I wasn't saying it will literally happen exactly as a Simpsons episode predicted, just that it's interesting how relevant a 20-year-old joke turned out to be.

I think AlphaGo is playing very natural go! The 5th move shoulder hit that is the subject of so much commentary would fit into the theory of go that players like Takemiya espouse. It has chosen to emphasize influence and speed and has not been afraid to give solid territory early in the games so far. It's very exciting play but not inhuman play, and if professionals are allowed to train with AlphaGo it will surely usher in the next decade's style of play. Don't forget that the game has changed every 10 years for the past 100 years, it should not be surprising that it is continuing to change now!

It didn't look like a Takemiya-style move to me. Takemiya tends to play for a huge moyo in the center. AlphaGo had no such moyo. It wasn't only a strange move; it was also a strange time to play it, and it definitely went against conventional wisdom.

The result of the shoulder hit coordinated with black's bottom formation, and the extension on the 4th line that threatened to cut white's stones off was flexible and could have easily formed an impressive moyo on the bottom. It did not play out that way, but I think that black's strategy was as cosmic as anything Takemiya might have played. His games did not always end with a giant moyo, he was also very flexible. I hope to see written reactions from professional players, and maybe Takemiya will give AlphaGo's style his endorsement :)

Some examples of 5th line early shoulder hits in recent professional play - these situations are not the same as the one seen in today's game, but something like a 5th line shoulder hit is always going to be highly contextual and creative.

http://ps.waltheri.net/database/game/26929/ (move 23) http://ps.waltheri.net/database/game/69545/ (move 22) http://ps.waltheri.net/database/game/71408/ (move 22) http://ps.waltheri.net/database/game/4663/ (move 9)

Those games are really interesting. In the first two, they are both ladder-breakers played by stronger players; my guess is the weaker players set up the ladders assuming that the stronger players wouldn't play a fifth line shoulder hit to break them, and the stronger player didn't back down. In the third game, the fifth line shoulder hits aren't that surprising; they're reductions against frameworks that were allowed to get big in exchange for growing an opposing framework; they're locally bad moves but the global benefits are clear; you'll note that both players play a fifth line shoulder hit.

The only one I can't parse is the last one. There are a lot of variations where I want to know what black's plan is.

Thanks for linking to the examples! That is interesting indeed.

There's an interesting angle to this phrase "intelligent species opening up a new way of looking at the world", which is that we (humans) designed go as a game - a subset of the real world we interact with. Go is "reality" to alphago. The superset of all possible sense data it could have, in principle. Whatever "chunks" AlphaGo uses, if it does use them, all of its policies are built only from subsets of the sense data that is the interactions (self-plays) and inferences from past games. There's nothing outside the game to bring into its decision process. With humans, however, our policies are noisy and are rife with what, for lack of a better term, I would call leaky abstractions.

I think it's more metaphor than leaky abstraction in this case, except to the extent that metaphor is mapping an abstraction of a domain we are trying to understand to an abstraction of one we are better able to understand.

That's an absolutely fascinating way to think about it.

Sometimes optimal solutions don't make sense to the human mind because they're not intuitive.

For instance, I developed a system that used machine learning and linear solver models to spit out a series of actions to take in response to some events. The actions were to be acted on by humans who were experts in the field. In fact, they were the ones from whom we inferred the relevant initial heuristics.

Everyday, I would get a support call from one of the users. They'd be like, 'this output is completely wrong. You have a bug in your code.'

I'd then have to spend several hours walking through each of the actions with them and recording the results. In every case, the machine would produce recommended actions that were optimal. However, they were rarely intuitive.

In the end, it took months of this back and forth until the experts began to trust the machine outputs.

This is the frightening thing about AI - not only can an AI outperform experts, but it often makes decisions that are incomprehensible.

What you said about the expert calling something a bug reminded me of how the commentator in the first game would see a move by AlphaGo and say that it was wrong. He did this multiple times for AlphaGo but never once questioned the human's moves. Yet even with all those "wrong" moves, AlphaGo won. I didn't watch the second game, so I'm not sure if he kept doing that.

The English-speaking human 9-dan only did this once for AlphaGo yesterday (when AlphaGo made an "overextension" which eventually won the AI the game), but did it approximately 3 or 4 times for Lee ("Hmm, that position looks a bit weak. I think AlphaGo will push his advantage here and... oh, look at that. AlphaGo moved here").

Later, he did admit that the "overextension" on the north side of the board was more solid than he originally thought, and called it a good move.

He never explicitly said that a move was "good" or "bad", and always emphasized that, as he was talking, his analysis of the game was relatively shallow compared to the players'. But in hindsight, whenever he pointed out a "bad-juju feel" about one of Lee's moves, AlphaGo managed to find a way to attack the position.

Overall, you knew when either player made a good move, because the commentator would stop talking and just stare at the board for minutes, at least until the other commentator (an amateur player) would force a conversation, so that the feed wouldn't be quiet.

The vast, vast majority of the time, the English-speaking 9-dan was predicting the moves of both players, in positions more complicated than I could read ("Oh, but it was obvious both players would move there"). There were clearly times when the commentator would veer off into a deep, distant conversation with the predicted moves still on the demonstration board, because he KNEW both players were going to play out a sequence of maybe 6 or 7 moves.

They really got a world-class commentator on the English live feed. If you've got 4 hours to spare, I suggest watching the game.

Elsewhere in this thread, IvyMike pointed out [1]:

> I sense a change in the announcer's attitude towards AlphaGo. Yesterday there were a few strange moves from AlphaGo that were called mistakes; today, similar moves were called "interesting".

[1] https://news.ycombinator.com/item?id=11257997

The only frightening part of your story is the insecurity of the human experts.

Or, maybe, there could have been bugs in the code.

If I'm an expert in some domain and a computer is telling me to do something completely different ("Trust me--just drive over the river!") I'm certainly going to question the result.

Not really. The alternative is like driving your car into a lake because the GPS told you to.

As a competitive speedcuber (Rubik's Cubes) this makes sense. If I watch a fellow cuber solve a cube, I understand their process even if it's a different method than the one I'd use. But a robot solving it? To my brain it looks like random turns until...oh shit it's finished.

Have you ever managed to learn the human Thistlethwaite algorithm? It basically lets you solve the cube like a robot would. I'm pretty rusty at cubing now, but I always wanted to learn it.

I have not. It's just not something I'm very interested in.

> AlphaGo plays some unusual moves that clearly go against the instincts of any classically trained Go player. Moves that simply don't quite fit into the current theories of Go, and the world's top players are struggling to explain the purpose/strategy behind them.

Could AlphaGo be winning in a way similar to left-handed fencers having an advantage over right-handers by wrong-footing them, rather than by simply being better? Would giving Lee more chances to see this style give him a chance to catch up?

I'm not a Go player but I play other competitive sports. Humans have a herd mentality... as OP mentioned, there are certain styles of playing, each with its own strengths and weaknesses. Sometimes people will not examine other styles that may have better strengths and just focus on the existing one. Then along comes someone who 'thinks outside the box' with a new style and revolutionizes the playing field.

Think Bruce Lee and the creation of Jeet Kune Do. Before him everyone concentrated on improving one style by following it classically, rather than just thinking of 'how do I defeat someone'.

IMHO Lee is the best at the current style of Go. AlphaGo is the best at playing Go. Maybe humans can devise a better style and defeat AlphaGo, but I'm sure AlphaGo can adapt easily if another style exists.

Lee isn't even the best human player at the moment; he has a 2-8 record against Ke Jie, who's currently ranked number 1.

Ke Jie is an arrogant 18-year-old, and he's been saying on social media for the past couple of days that he will defeat AlphaGo.

He seems to have backed off that claim after the second game.

Exponential progress is going to bear down on Ke Jie like a ton of bricks soon.

I've seen this happen with "modern tennis" versus how I was taught to play.

This is interesting. Could you (or someone else who's had this experience) elaborate?

Here are three examples for you.

Swimming. It used to be that swimmers were supposed to be streamlined and avoid bulky muscles. Then a weightlifter decided he wanted to swim. Swimmers today all lift weights.

Programming. It used to be that people built programs in a very top down, heavily planned way. Think waterfall. We now understand that a highly iterative process is more appropriate in most areas of programming.

Expert systems. It used to be that we would develop expert systems (machine translation, competitive games, etc) through building large sets of explicit rules based on what human experts thought would work. Today we start with simple systems, large data sets, and use a variety of machine learning algorithms to let the program figure out its own rules. (One of the giant turning points there was when Google Translate completely demolished all existing translation software.)

Serve-and-volley is pretty much non-existent in modern professional singles tennis. We were always taught to attack the net, and every action was basically laying the groundwork to move forwards and attack.

Nowadays, top players slug it out baseline-to-baseline.

In terms of stance, we were taught to hit from a rotated position where your shoulder faces the net, and a normal vector from your chest points to either the left or right side of the court.

Nowadays, it's much more common to hit from an "open" position, where your body is facing the net, not turned. This would have been considered "unprepared" or poor footwork in my day, but it actually allows for greater reach. It does make it more difficult to hit a hard shot, but that's made up for by racquet technology and generally stronger players.

If you're in the mood for some long-form literary tennis journalism on this subject, check out David Foster Wallace's "Federer as Religious Experience" from 2006.


Although it takes a few paragraphs until it gets into the details of "today's power-baseline game."

> AlphaGo is the best at playing Go. Maybe humans can devise a better style and defeat AlphaGo, but I'm sure AlphaGo can adapt easily if another style exists.

Which is a curious point. The gripe about early brute-force search algorithms (e.g. Deep Blue) was that they felt unnatural.

However, as the searches get more nuanced and finer-grained, is there a point at which a fast machine, doing stupid machine things quickly enough, begins to feel smart?

Are there any chess/Go analogs of the Turing test? Or is a computer player always still recognizable at a high level?

It has been said that a game of Go is a conversation, with moves in different areas showing disagreement. The game is also known as 'shou tan' (hand talk). Judging from the commentary, AlphaGo is currently passing the Go Turing test in almost all cases. There are some moves which some say are uncharacteristic, but which later play out well; or so-called mistakes that don't affect the outcome of the match. One explanation given was that AlphaGo optimizes for a win, not a win by the greatest margin, which is valid for human or machine alike.

Computer players will be recognizable as long as they are designed to win, and not to play the way a human plays.

A Turing test for game players is an interesting idea; it would be useful for designing game players that are good sparring partners rather than brutes that can wipe the floor with you.

Bruce Lee played it very smart and attained a guru status in the West, but there's no evidence he was a world-class fighter, only unsubstantiated claims by his entourage.

As for JKD, people are drawn in by its oriental esotericism, but there's no evidence it is an especially effective fighting style, or that it has something that (kick)boxing does not.

Absolutely! And it doesn't matter in the end...

Remember that AlphaGo has spent months developing its own style and theory of the game in a way that no human has ever seen. Its style is sure to have weaknesses, but humans will have a hard time figuring them out on first sight.

Similarly, chess computers do better in some positions than others (they love open tactics!), and Kasparov won one of his games against Deep Blue by playing an extreme anti-silicon style that took advantage of computer weaknesses. However, Kasparov didn't have to figure out what that style was, because there was already a lot of knowledge floating around about how to do it.

Therefore I'd expect that the Lee Sedol of a year from now could beat the AlphaGo of today. And human Go will improve in general from trying to figure out what AlphaGo has discovered.

However that won't help humans going forward. AlphaGo is not done figuring out the game. At its current rate of improvement, AlphaGo a year from now, running on a single PC, should be able to beat the full distributed version of AlphaGo that is playing today. Now the march of progress is not whether computers can beat professionals. It is going to be how small a computing device can be and still beat the best player in the world.

But when exploiting its weaknesses requires looking 20 ply into the game, can anyone actually do so? And if the computer itself can see 20 ply in, it can spot its own weaknesses, so you need to look even further, raising the question of whether it's really a weakness at all.

Weaknesses are only relative to capabilities of the opponent to exploit them. If a tank has a weak spot that rockets can hit, but it's being opposed by humans on horseback, is it really a weakness in that context?

The weaknesses that it has will be of the form that it has wrong opinions about certain kinds of positions. In the case of chess, those weaknesses showed up in closed positions where the program owned the center and large amounts of space. In the case of AlphaGo, the weaknesses will be much more subtle, but will be discoverable and exploitable in time.

Additionally AlphaGo has the advantage that it started with a database of human play, so it has some ideas what kinds of positions humans miscalculate.

As for your tank-vs-horseback analogy, it's flawed at the moment: AlphaGo is probably reasonably close in strength to the human facing it. Improved human knowledge could tip the balance.

However in the future it will become an apt analogy. Computers are going to become so good that knowing the relative weaknesses in their style of play may reduce the handicap you need against them, but won't give you a chance of becoming even with them. That happened close to 20 years ago in chess, and is now only a question of time in Go.

I wonder if AlphaGo has some specialized knowledge to handle ladders, where stones can have an effect at a distance that might only come into play after 20 moves.

>I wonder if AlphaGo has some specialized knowledge to handle ladders

Yes. A representation of ladders is among the input features of its neural networks.


  Feature               Planes  Description
  Stone colour          3       Player stone / opponent stone / empty
  Ones                  1       A constant plane filled with 1
  Turns since           8       How many turns since a move was played
  Liberties             8       Number of liberties (empty adjacent points)
  Capture size          8       How many opponent stones would be captured
  Self-atari size       8       How many of own stones would be captured
  Liberties after move  8       Number of liberties after this move is played
  Ladder capture        1       Whether a move at this point is a successful ladder capture
  Ladder escape         1       Whether a move at this point is a successful ladder escape
  Sensibleness          1       Whether a move is legal and does not fill its own eyes
  Zeros                 1       A constant plane filled with 0
  Player color          1       Whether current player is black

(The number is how many 19x19 planes the feature consists of.)
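As a rough sketch, the feature planes above could be stacked into a single input tensor like this (hypothetical numpy code; the plane counts follow the table, but the names and layout are mine, not DeepMind's):

```python
import numpy as np

# Illustrative only: stacking AlphaGo-style features into a
# (planes, 19, 19) input tensor. Names are made up for readability.
FEATURES = [
    ("stone_colour", 3),
    ("ones", 1),
    ("turns_since", 8),
    ("liberties", 8),
    ("capture_size", 8),
    ("self_atari_size", 8),
    ("liberties_after_move", 8),
    ("ladder_capture", 1),
    ("ladder_escape", 1),
    ("sensibleness", 1),
    ("zeros", 1),
    ("player_colour", 1),  # the extra plane used by the value network
]

total_planes = sum(n for _, n in FEATURES)
board = np.zeros((total_planes, 19, 19), dtype=np.float32)

# e.g. the constant "ones" plane sits right after the 3 colour planes
board[3, :, :] = 1.0

print(total_planes)  # -> 49 (48 without the player-colour plane)
```

Summing the table gives 49 planes; per the paper, the policy network omits the final player-colour plane, leaving 48.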

20 moves may sound like a huge number of variations, but when you prune things early it can be quite manageable. The alpha-beta algorithm in chess does a lot of pruning.
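The pruning mentioned here can be sketched in a few lines. This is generic textbook alpha-beta over a toy tree (ints as leaf evaluations, lists as subtrees), not AlphaGo's search, which is MCTS-based:

```python
# Minimal alpha-beta sketch. Real chess engines add move ordering,
# transposition tables, iterative deepening, etc.
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, int):              # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # cutoff: opponent will avoid this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:                  # cutoff
            break
    return value

tree = [[3, 5], [6, [9, 8]], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 6
```

The cutoffs mean entire subtrees (here, the leaf 8 and the 2 in the last branch) are never evaluated at all.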

I would also posit that lefties' advantage basically disappears once you get to a certain level in fencing. Past some point it's basically all just footwork anyway, and your orientation doesn't change the distance to your target (in foil and saber at least; I can't comment on epee, as they seem to just kind of bounce in place a lot even at the Olympic level).

Can confirm. My brother fenced at a club that had a lot of lefties. All the righties got used to it quickly, and had no real disadvantage when playing against lefties.

I could easily see the difference in tournaments with other clubs that were not used to left handed players.

Seems unlikely. Training was partly from human games and partly from self-play; if there are some new, off-book heuristics at play, there's no way to know that humans would respond poorly to them. Though I suppose it's possible it noticed that humans do poorly on off-book moves generally.

Why does this seem unlikely? Humans do poorly with "off book" moves in general in sports and other games; it's why new styles of play or management work really well until others get used to them. Why would it be unlikely in Go?

I think an important point was brought up by the Google engineer in the beginning of the game: Humans usually consider moves that put them ahead by a greater margin and base their strategies on that, while computers don't have that bias.

Building on that, I suspect that if AlphaGo thinks it has a 100% chance of winning with any of several moves, it has no way of distinguishing between them and chooses effectively at random. The longer that goes on - and once it hits 100% chance of winning, it will be that way for the rest of the game - the more chances it has to pick bad moves. As long as the move isn't bad enough to ruin its 100% chance of winning, it can't tell the difference between that and a good move.

(This also applies without a 100% chance of winning, as long as its chances of winning hover near the highest percent it's able to distinguish.)

I doubt the value network ever outputs a literal 100% chance of winning, it would at most be a lot of nines.

Even if it did output an actual 100% chance, AlphaGo would still end up picking moves favored by the policy network, so it would probably just revert to playing like it predicts a human pro would.

Once it gets to enough nines, its Monte Carlo trees will run out of sample resolution. If it can resolve to three nines, then a 99.93% win branch has a 70% chance of being reported as 99.9% and a 30% chance of being reported as 100%. When all the branches here get rolled up, they report some average around 99.93%, but not necessarily exactly it. This propagates upwards in the tree, adding more meaningless digits. Adding the evaluation network increases the number of decimals, but doesn't really change the effect.

It's similar to how ray tracing renderers start to return weird speckle patterns when the room is dark enough.

And the policy network chooses branches to investigate, not which one to choose. It adds sample resolution to places pros might play, but doesn't add to the estimated probability of winning.

Edit: Actually, since places pros might play have higher sample resolution, they're less random. So worse moves get worse evaluation, and a higher chance of leading the pack. This might actually bias AlphaGo to play some pretty bad moves - but, again, this is all assuming it's going to win anyway.
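The rounding effect described above is easy to demonstrate with a toy simulation (plain Python, nothing to do with AlphaGo's actual code): a branch whose true win rate is 99.93%, estimated from 1000 rollouts, can only be reported as a multiple of 0.001, so it shows up as 99.9% or 100%, never as 99.93%.

```python
import random

random.seed(42)

true_p = 0.9993     # true win rate of the branch
samples = 1000      # rollouts available to estimate it

estimates = []
for _ in range(200):
    wins = sum(random.random() < true_p for _ in range(samples))
    estimates.append(wins / samples)

# Estimates land on k/1000 values near 0.999 or exactly 1.0.
print(min(estimates), max(estimates))
print(sum(e == 1.0 for e in estimates) / len(estimates))  # share reported as "100%"
```

Roughly half the estimates come back as a flat 100%, even though the true value never is.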

Was that in the official livestream, or is there an interview somewhere, where things like these are discussed?

> I've been giving it some thought. When I was learning to play Go as a teenager in China, I followed a fairly standard, classical learning path. First I learned the rules, then progressively I learn the more abstract theories and tactics. Many of these theories, as I see them now, draw analogies from the physical world, and are used as tools to hide the underlying complexity (chunking), and enable the players to think at a higher level.

The excellent point you're making applies in general to nearly every type of human thinking.

The way we think about other people, our intuitions about probabilities, our predictions about politics, and so on -- all are based on our peculiarly effective, yet woefully approximate, analogy based reasoning.

It shouldn't be surprising in the least when commonly accepted "expert" heuristics are proved wrong by AIs that actually search the space of possibilities with orders of magnitude more depth than we can. What's surprising -- and I think still a mystery -- is how human heuristics are able to perform so well to begin with.

I'm not a Go player, but I saw this same phenomenon as poker bots have surpassed humans in ability. As with AlphaGo, they make plays that fly in the face of years of "expert" wisdom. Of course, as with any revolutionary thinking, some of the new strategies are "obvious" in hindsight, and experts now use them. Others seem to require the computational precision of a computer to be effective in practice, and so can't be co-opted. That is, we can't extract a new human-compatible "heuristic" from them -- the complexity is just irreducible.

> all are based on our peculiarly effective

They are peculiarly effective only for lack of comparison. Humans have been the most intelligent species on this planet for millennia, and no other species comes even close. We don't know how ineffective those strategies would look to a more advanced species. Well, until now.

This is a good point. I was coming from the point of view that we've had powerful computers for a while, and yet humans were still dominating them, at least until recently, in games like Go, poker, and many visual and language tasks.

Of course, the counterpoint could be that it's only the case because humans, with their laughable reasoning abilities, are the ones programming those computers.

It was not a good point.

AlphaGo can’t decide that it’s bored and go skydiving. Humans aren’t merely capable of playing Go. And when they do it, they can also pace around the table, and drink something, all at the same time, on a ridiculously low energy budget. Or they can decide never to learn Go in the first place but to master an equally difficult other discipline. They continuously decide what out of all of this to do at any given moment.

AlphaGo was built by humans, for a purpose selected by humans, out of algorithms designed by humans. It is not a more advanced species. It’s not even a general intelligence.

Your own original point was much better than the one made in response.

I want to thank you for this comment. It's this kind of subtle, low-key, informed speculation that generates good, hard sci-fi concepts, which are absolutely relevant to my WIP novel.

"oh what if the machine suddenly came alive!?" has been done 1000 times. But such concepts like: a computer can detect and act patterns which we cannot, in ways that are almost, if not possibly intelligence, are magnitudes more believable, and therefore, compelling.

Thanks! :-)

Is it about a tyrannical super-AI that maintains power over the human race by strategically releasing butterflies into the wild at specific times and locations?

Actually, it's a Soviet knock-off of a PDP-10. Constructed in 1970s India, the machine has a 12 MHz clock rate, 4M of RAM, and a directive to bring about "World Peace".

Of course, those fools underestimated it. They should have known better...

Why would it bother if it can just convince people to do the thing it wants done by talking to them?

It is now.

AlphaGo is essentially built on the work that IBM did on TD-Gammon (a reinforcement learning backgammon player) in the 90s.

Pretty much the same thing happened with TD-Gammon, which also played unconventional moves; in the longer term, humans ended up adopting some of TD-Gammon's tactics once they understood how they played out. It wouldn't be surprising to see the same happen with Go.

From my understanding, computers have had this effect on chess as well. The play styles of younger champions have evolved to the point where unpredictability is actually part of the strategy. I'm not a chess expert by any means, but this quote by Viswanathan Anand (former World Chess Champion) describes it.

  “Top competitors who once relied on particular styles of play are now forced to mix up their strategies, for fear that powerful analysis engines will be used to reveal fatal weaknesses in favoured openings....Anything unusual that you can produce has quadruple, quintuple the value, precisely because your opponent is likely to do the predictable stuff, which is on a computer” [1]
[1] http://www.businessinsider.com/anand-on-how-computers-have-c...

>powerful analysis engines will be used to reveal fatal weaknesses in favoured openings...

Anand isn't really talking about strategy here, he's just talking about choice of opening. Players with narrow opening repertoires, like Fischer, have always been easier to prepare for than players who play a wide variety of openings.

As far as actual changes to strategy, the most obvious one is that computers tend to value material more highly than humans. So a computer will take a risky pawn if it looks sound, while a human will see that taking the pawn is very complicated and prefer a simpler move.

Computers and the internet have changed chess in several ways:

(1) Online game databases have made it easier for players to track developments in opening theory and prepare to play specific opponents

(2) Chess engines add to this by being used to search for antidotes to complicated opening systems

(3) Young players have greater access to high-quality sparring partners - either engines or fellow humans on online servers.

This has led to the best players becoming younger, and to players playing more varied and less 'sharp' openings.

Reading the paper, it doesn't at all sound like AlphaGo uses anything that TD-Gammon used.

It uses MCTS, which is unlike minimax. It doesn't use temporal difference learning, although they say that the policy somewhat resembles TD.

That doesn't sound like 'essentially built on'; it sounds more like 'slightly influenced by'.
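The MCTS mentioned here typically picks which branch to explore next with a UCB-style rule. A minimal sketch of the standard UCT score (the AlphaGo paper describes a PUCT variant that additionally weights exploration by policy-network priors):

```python
import math

# UCT: balance a child's observed win rate (exploitation) against how
# rarely it has been visited (exploration). Numbers below are made up.
def uct_score(wins, visits, parent_visits, c=1.4):
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

children = {"A": (60, 100), "B": (3, 4), "C": (45, 90)}   # move -> (wins, visits)
parent_visits = sum(v for _, v in children.values())

best = max(children, key=lambda m: uct_score(*children[m], parent_visits))
print(best)  # -> B: rarely visited, so the exploration term dominates
```

The search repeatedly expands the highest-scoring branch, so visit counts concentrate on promising moves without ever running full minimax.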

You're missing the forest for the trees.

Tesauro's work on TD-Gammon was pioneering at the high level, i.e. combining reinforcement learning + self-play + neural networks.

> AlphaGo is essentially built on the work that IBM did on TD-Gammon (a reinforcement learning backgammon player) in the 90s.

Citation needed.

And you'll find it in the AlphaGo paper. It's not a contentious claim.

Citation still needed.

He just gave you a citation. "The AlphaGO paper".

This one, I assume? http://www.nature.com/nature/journal/v529/n7587/full/nature1...

Looks like citation 46 is the relevant one here.

I wonder if this is similar to how musket battles were fought in the American Civil War era, with soldiers lining up across from each other on a battlefield and taking turns shooting. I hear they did this because the muskets were very inaccurate, so it made sense to use a bunch of them at the same time as an area-effect weapon, in effect a gigantic shotgun.

Until someone got better weapons and suddenly the "rules" of the battlefield that dictated standing in lines across each other made no sense to follow anymore because the original principles that dictated those rules to be good were not valid anymore.

I like your statement: "It's both exciting and eerie. It's like another intelligent species opening up a new way of looking at the world (at least for this very specific domain). and much to our surprise, it's a new way that's more powerful than ours."

I think this will be the theme of our future interactions with AIs. We simply can't imagine in advance how they will see and interact with the world. There will be many surprises.

That quote reminds me of "The Two Faces of Tomorrow" by James P. Hogan. One of the subplots is that humans can communicate because of shared experience. We all must eat, sleep, breathe, seek shelter, etc. Communication with an alien or artificial intelligence may be difficult or even impossible without this shared framework.

>It's like another intelligent species opening up a new way of looking at the world (at least for this very specific domain). and much to our surprise, it's a new way that's more powerful than ours.

It's not like this at all; let's not do this sort of thing. Humans are inveterate myth makers (viz. your description of how people conceive the Go board as army units), and our impositions on the world are easily confused for reality.

In this case, there's no "intelligent species" at work other than humans. We made this, and it is not an intelligence, it is a series of mathematical optimization functions. We have been doing this for decades, and these systems, while sophisticated, are mathematical toys that we have applied. We built and trained this thing to do exactly this.

As a student of AI you know that convolutional neural networks are black boxes and are hard to interpret. A different choice of machine would have yielded more insight about how it is operating (for example, decision trees are easier to interpret). The inscrutability of the system is not a product of its complexity; even a simple neural network is hard to understand.

This, actually, is my primary objection to using CNNs as the basic unit of machine learning - they don't help US learn, they require us to put our faith in machines that are trained to operate in ways that are resistant to inspection. In the future I hope that this research will move more towards models that provide interpretable results, so they ARE actually a tool for improved understanding.

> We made this, and it is not an intelligence, it is a series of mathematical optimization functions

You can say the same about your mind too which is a bunch of optimization nodes. If something is intelligent, does it matter if it's evolved in nature or created by a species who is evolved in nature?

> In the future I hope that this research will move more towards models that provide interpretable results

I think it's not really possible to understand in detail how these networks operate on the level of nodes, because emergent behavior is necessarily more complex than the sum of its parts.

It's a bit precious I think to say that a human is a "bunch of optimization nodes". I can write code to create a CNN, and I can draw a graph of how it operates on a piece of paper. We can't even decode a few million rat neurons the same way.

A CNN is a pure mathematical function - if you want, you could write it down that way. Given a set of inputs, it will always produce the same output. We don't call a linear regression model an "intelligence", a CNN is no different.

Of course I agree that humans are built up of billions of tiny machines like this, but let's appreciate the vast difference in scale.

My exaggeration was intentional to point out that if you scale up NN based systems, we are not that different :) I do appreciate it, but let's not forget that we have finite nodes, so at one point a machine can surpass us with "just mathematical functions".

> A CNN is a pure mathematical function

That's their basic property, but who are we to say that our cell-based neural network is superior? Cells are just compositions of atoms, and they are defined by quantum mechanics, which is... "just" math and information.

I also think that Go might be a great communication tool between AI and humans. If you look at the commentary from this angle, it's fun to think about.

As a follow-up to your idea, we should explore two paths: first, create the most powerful AI; second, create subsystems devised to be interpretable. The powerful method could be used to train the interpretable method. That is, we need an interpreter to translate from machine AI to human AI, and interpretable systems provide a middle ground.

I think training one function to approximate another function wouldn't help much; we'd inevitably lose the subtleties of the higher-order function and any insights that come with it. If we could train a decision tree to do what a CNN does and then interpret the outcome, why not use decision trees in the first place?

I think the answer must be in figuring out how to decompose the black box of a CNN - it is, after all, just a set of simple algebraic operations at work, and we should be able to get something out of inspection.

I have to imagine Hinton et al. have done work in this regard, but this is far afield for me, so if it exists I don't know it.

Having a machine that gives you feedback in the middle of the game could perhaps be used to characterize where a decision tree is weak and in which situations it is good. It could detect the situations in which decision trees do well, use the tree to understand what is happening, and with that new understanding devise a new method mid-game. We could also train a decision tree using very powerful information about the value of the game in the middle of the game; that is new and powerful.

> they don't help US learn, they require us to put our faith in machines that are trained to operate in ways that are resistant to inspection.

Human intuition and to certain extent, creativity are like this as well.

The same thing happened in chess. Computers play in a very "computerish" way that was initially mocked, but became hugely influential on how humans play chess. Computer analysis opened up new approaches to the game.


> It's like another intelligent species opening up a new way of looking at the world.

And this is just the beginning with AlphaGo. As we keep on training Deep Learning systems for other domains, we'll realise how differently they approach problems and solve them. It'll, in turn, help us in adapting these different perspectives and applying them to solve other problems as well.

> It's like another intelligent species opening up a new way of looking at the world

.. that we'll be probably unable to comprehend ourselves.

I believe that when Google talked last year about DeepMind playing those 70's Atari games, it also surprised the team with some of the tricks that it learned to be more effective in the game. So this is quite interesting stuff.

The analogy I can come up with, based on your post, is of something like addition. We don't know how we add numbers in our heads; but we somehow do it. Some people can do it very, very quickly[1], but won't be able to explain how they did it. On the other hand: a computer doesn't look at digits and numbers; it just looks at bits and shifts them around as appropriate.

[1] https://en.wikipedia.org/wiki/Shakuntala_Devi

Abstraction is the domain we need to research before we can understand intelligence in general: the ways our abstractions are determined by nature and, more importantly, the ways that will become possible once we surpass them.

Can you give an example of an "unusual" move? I'm a (very) novice Go player, and I think it'd be really interesting to see some specific commentary on how the machine is playing the game.

Your metaphor about army units has got me thinking: When are we going to see the next generation of AlphaGo, but applied to a real world army?

So, is it not possible to get a log of its thinking and take a look later at why it took a certain step?

It might look something like the attention mechanism detailed in Show, Attend and Tell: http://arxiv.org/abs/1502.03044

Which attempts to visualize machine areas of attention that look like: http://www.wildml.com/wp-content/uploads/2015/12/Screen-Shot...

A great breakthrough would be to decode the information contained in the feature space of the NN or RNN: a topological language in which shapes and chains are explained by analogies with real-world situations and actions. Being able to share our vision and communicate our intentions (the weight given to the distinct features and the links among the several layers of the NN, i.e. the overall plan) should transform the concept of AI into one of CAI: communication between intelligent agents to create a synergistic approach.

Someone somewhere asked why a lot of people in the Go community are taking this somewhat hard. Here is my hypothesis:

Go, unlike chess, has a deep mythos attached to it. Throughout the history of many Asian countries it has been seen as the ultimate abstract strategy game, one that deeply relies on players' intuition, personality, and worldview. The best players are not described as "smart"; they are described as "wise". I think there is even an ancient story about an entire diplomatic exchange being brokered over a single Go game.

Throughout history, Go has become more than just a board game; it has become a medium that the sagacious use to reflect their world views, discuss their philosophy, and communicate their beliefs.

So instead of a logic game, it's almost seen and treated as an art form. And now an AI without emotion, philosophy or personality just comes in and brushes all of that aside and turns Go into a simple game of mathematics. It's a little hard to accept for some people.

Now imagine the winning author of the next Hugo Award turns out to be an AI, how unsettling would that be.

Actually, this AI kind of confirms those myths, since its basis is not in mathematics but in neural networks. While it could be argued that neural networks are just math, so is the brain, but that's beside the point. The point is, even the programmers have no idea what the AI is thinking.

The way it picks moves is very similar to how top professionals do.

Intuition is reduced to memories stored vaguely as neural connections.

I fear this is an overly mystical misinterpretation of neural nets. (Standard feed-forward) neural nets are layers of non-linear feature transformations. They take inputs and at each step transform those inputs into a more compact representation, distilling the inputs into their most important factors (and throwing away unimportant factors).

So the most likely explanation is that policy/value nets in AlphaGo have learned to extract - with cold logic - the key factors that make up what humans believe to be "good" board positions.

It has little to do with voodoo about neural connections and magic emerging from the weights. AlphaGo has most likely managed to identify the important factors of good board positions (by seeing tons of examples of good and bad moves/positions). It only appears to be magical because these factors are most likely very complex and inter-dependent.

This is supported by the AlphaGo paper: they report that AlphaGo without tree search is about as good as the best tree-search programs (amateur pro level). So AlphaGo has taken amateur-pro-level board analysis ability and combined it with tree search to achieve top-player performance.
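The "layers of non-linear feature transformations" view can be made concrete with a toy forward pass (illustrative numpy only; the actual AlphaGo networks are deep convolutional nets, and these sizes and weights are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Flattened 19x19 "board" -> two hidden transforms -> scalar evaluation.
layer_sizes = [361, 128, 32, 1]
weights = [rng.standard_normal((m, n)) * 0.05
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

x = rng.standard_normal(361)          # a fake board encoding
for w in weights[:-1]:
    x = relu(x @ w)                   # each layer: affine map + non-linearity
value = np.tanh(x @ weights[-1])      # squash the final output into (-1, 1)

print(value.shape)  # -> (1,)
```

Each layer maps its input into a smaller representation; training (not shown) is what makes those intermediate representations capture "important factors" of good positions.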

> So AlphaGo has taken amateur-pro-level board analysis ability, and combined it with tree search

I don't think that's an entirely accurate way of putting it, because a player who reaches that level is also doing a little tree searching. Maybe if you found some human who managed to reach "amateur pro" level by playing purely on snap, instinctual decisions without any logic or exploration of variants at all, yes, then you could say their ability to evaluate positions is as good as AlphaGo's.

But I would guess that you are right anyway that we can deduce that its ability to evaluate a board position really is below that of better professionals, and its huge strength is due to the tree search (which of course involves a second "policy" net to pick moves to explore).

In Go, the tree searching is called "reading". Even beginners need to read quite deep trees (more than 7 moves) to anticipate the result of a semeai (capturing race).

> distilling the inputs into their most important factors (and throwing away unimportant factors).

Isn't that pretty much the definition of intuition? Combining a bunch of things in some unknown and nonlinear way to result in a 'feeling' about the situation?

Pretty much. The nature of evaluating a Go position is that it has some notions of similarity, like islands, "aliveness", and local features that can be looked at liberty-wise. But in order to fully understand how it all pieces together, one must have seen other positions to intuit the kind of future playstyle that will result. Anyhow, it's perfect for a stochastic system that just recognizes patterns. As long as you can provide the tree search (i.e. the partial position evaluation) you can basically just let it rip.

Yes. But the person I was responding to was definitely up in the clouds describing things as "magic" (and implicitly equating intuition to magic).

Is that really different to what humans do, though?

I fear this is an overly mystical misinterpretation of neural nets.

Isn't most human experience an overly mystical interpretation of physics/chemistry?

Well obviously it's reduced to actual algorithms.

The point is, the programmers don't quite understand what exactly are those features that the neural net is seeing.

You can still solve Go using "simpler" math: given enough compute power, you can always just minimax the whole game tree. Neural networks aren't un-mathematical; they're just a slightly more complicated technique for discovering an approximation to a function that does what you want (even if you're not sure what that function looks like internally).
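For a sense of what "minimax the whole game tree" means: exhaustive minimax is trivial on a tiny game and utterly infeasible for Go's roughly 10^170 legal positions. A sketch on one-pile Nim (take 1 or 2 stones; taking the last stone wins):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win (full minimax)."""
    if stones == 0:
        return False                  # previous player took the last stone
    # Win if any move leaves the opponent in a losing position.
    return any(not wins(stones - take) for take in (1, 2) if take <= stones)

print([n for n in range(1, 10) if wins(n)])  # -> [1, 2, 4, 5, 7, 8]
```

Multiples of 3 are losing positions, a pattern the brute-force search "discovers" without ever being told it.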

So AlphaGo's reign will last only until QuantumGo arrives on the scene? It would be sort of ironic to have spent decades developing practical AIs with classical computing to have them swept aside only when they started to really deliver results...

Stored discretely and permanently you mean..

I mean it's not clear in anyway "where" is the intuition, it just sort of magically emerges from the connections and the weights in a way that no one can really grasp. If nothing else, just due to the sheer number of neurones.

Is this just something you're imagining, or did Google's developers explicitly say, "AlphaGo is beyond our understanding. We have absolutely no idea how it makes its decisions"?

I'm pretty sure I heard them say this.

They can see certain stats and high level overview.

But they have no idea what it's thinking.

They know the general pattern of the algorithm. They even explain it.

But the algorithm involves two deep neural networks, and they don't really know what's going on inside them.

One of the developers showed up during the commentary on the second game and talked about this stuff.


AlphaGo developers don't understand how it works in the same way you wouldn't understand how the program you've written to find prime numbers actually found a big prime number. The sequence of operations is known, but numbers are too big to be comprehended.

I think it's more like real parents not understanding why their children do the bizarre things that they do.

I don't think so.

Which kind of makes me think. Would machine learning succeed even mildly at recognizing primes? Would we be able to decode the final weights after weeks of learning, and find a sieve program encoded as data?

Well, apparently my question leads to some deep mathematical theory about languages encoded as data. http://cstheory.stackexchange.com/questions/15039/why-can-ma...

How soon we forget. Twenty or thirty years ago Chess was spoken of in exactly those reverent tones.

I suppose in the era of Bobby Fischer, chess was a proxy for superpower one-upmanship. That's long gone, but chess as a game is still doing fine, and I expect it will be the same for Go.

We still have chess tournaments, super-star grandmasters and circus freaks (people who can play blindfolded against multiple opponents). And, yes, computers can easily smoke all but elite players.

Why should Go be different?

And, yes, computers can easily smoke all players.


In chess the top engines are rated hundreds of Elo points above Magnus Carlsen (the top human). No top-ranked human-vs-computer match has been publicised in over 5 years because humans are thoroughly trounced. There are cyborg matches, which are interesting: Human + Computer vs Human + Computer, because the gameplay techniques are considered different. Humans still depend more on higher-level goal strategy and less on ruthless positional efficiency (which is probably why they get beaten in the midgame).

What is mind-boggling is that 6 months ago no go engine was scratching the surface of professional-level go. It took the engines getting a 4-5 stone handicap to be competitive at even the lowest professional level.

It looks like this one algorithm has blown through the professional ranks in about 3 months. And a 5-0 victory here would be like 2006 vs 1996 (or even 1993) chess in 3 months.

I guess it's because Go isn't that well known in the West, but I find it a bit surprising (though not really) that this isn't getting more press. When Kasparov lost, it was news but not really surprising. If it wasn't Kasparov that a computer beat, it would have been the next champion. The writing was on the wall for a long time. It was just a question of when exactly.

As for Go, I guess I would never have made a long bet against computers. But as recently as just over a decade ago, computers lost to merely competent players, and people working on Go programs were pretty much saying that they didn't even know what the path forward looked like. Things improved a lot with Monte Carlo, but even that stalled out. Admittedly, I don't follow this area closely, but these wins pretty much came out of nowhere.

It's probably due to the relatively recent fact that neural networks stopped being "a 1990s fad" and became a thing again.

Go may not be well known in the west, but it was extremely well known in AI circles. Even before DeepBlue, Go was considered the holy grail of competitive game AI.

Oh, I'm well-aware of that and obviously this is big news on sites like this one. I was mostly remarking that this is pretty low on general news radar screens.

I see. What one means by "computer" is always a moving target. Deep Blue was a specially prepared supercomputer from 20 years ago. Apparently, today's top chess software running on a half-decent off-the-shelf machine could crush Deep Blue.

A smartphone can crush any human alive at chess.

I played Chess as a kid. I watched the local tournament shrink from the big town hall, to the side room in the same building, to a local school hall, to a classroom in that school. I really do think the game is dying - perhaps something that was happening already, but Kasparov losing to Deep Blue seemed to really catalyse it.

(Not saying this is a bad thing. Evolution in games is natural, and I think it's amazing how much innovation is going on right now (particularly enabled by Kickstarter) - you'd think that board game design would have been worked out decades or centuries ago, but in the same way that incandescent bulb development accelerated massively when competition arrived, it feels like game design has got so much better when forced to compete with computer games. If there are other activities that people find more fun than Chess, that's all to the good)

I don't believe there is a relation between the improvement of computer chess and the reduction in the availability of in-person chess.

I also played chess as a kid and the allure of both local and national tournaments was that you could play with a multitude of different players, as opposed to the same 4 or 5 habitual chess players in your family/school/circle of friends.

But now, with the internet, at any second you can play with different people from all over the world, different strengths, styles and whatnot.

Hence now, instead of looking for the local chess club on weekends, we can play any time of the day, any day of the week, anywhere.

Sites like the excellent lichess [1] are even free (in this case, free both as in beer and as in speech) and, at any moment, there are nine or ten thousand players there enjoying this magnificent game.

[1] http://en.lichess.org/

Because it is far more complex. But apparently self-learning AI has advanced enough.

And yes, chess has lost some reputation, and I guess it will be similar with go. I mean, there is a university just for go. But learning something where you know you can become the best is different from learning something knowing computers will always be better than you ... so I guess they are having a hard time right now.

But the new door is opened with the ability to play stronger than human opponents and discover new strategies.

Yeah, can confirm. I am not a Go player and didn't know much about it. But top Go players are really well respected as some of the highest talents in society, almost to the point of being feared. It is, after all, a smart people's game.

And the fact that, prior to the deep learning revolution, Go was the only board game at which humans could not be beaten added even more myth and charm to the game and its players.

Now it has come to the point that Go can be modeled by computers, and hundreds of years of human study have been topped by a computer in less than a year. All those myths around it will be gone. That is the biggest bummer, I guess.

How well would the computer fare if it didn't have access to a library of human-played games, and only got self-study?

How well would the computer fare with a slightly different game -- something like go, but with differing rules? Would a smart human learn faster (in real time, or alternatively with comparable energy use) than an artificial reinforced deep learning system?

And who can make the most interesting new go-like game?

Perhaps this could be tested with chess or checkers, even.

Same argument applies to humans

On a more realistic side note... Professional Go players devote decades to training from their youth, giving up normal educations and lots of other more lucrative opportunities in their lives. It's very easy to imagine their frustration now that their lifetime devotion actually means nothing in front of the AI.

It's an outright denial of the way of life they chose and devoted themselves to.

IMHO Google should donate the prize towards Go education and Go organizations instead of some random charities.

Isn't this a good thing? Why are high IQ people devoting their entire lives to a game? Maybe this will make them shift their priorities to solving problems that only really smart humans (like them) can solve.

At the root of it, they earn a living by being entertainment. This can be applied to any of the arts or sports. Why are smart people making movies, writing fiction, making music? I think these are the sorts of things that make life worth living.

Abstract strategy games require highly domain-specific skills. These skills do not transfer to other endeavors. Had he not played Go, the world champion Go player might just have ended up a mid-level lawyer or manager. Who knows. Source: https://books.google.com/books?id=nCMWxjkTAvEC&pg=PA130&lpg=...

Didn't happen in Chess, won't happen in Go. Entertaining a few million people is too lucrative. Everyone wants to cheer for their country in an international competition, so it's always going to have large prize pools.

Uh, talent for Go doesn't translate automatically into talent for math, physics, finance or other branches of science. Even if it did, being the top Go player is probably more attractive than being a meh quant or programmer.

This could equally apply to the bankers -- and the software engineers who enable them -- who crashed the economy in 2008. Go and chess players have contributed much more to the world than these psychopaths.

Perhaps Go playing is on its way to being one of the first white-collar jobs to be lost to AI.

I don't think people will pay to watch Go Bots square off, but I think this example of "obsolete education" is a great reminder that it's not just the assembly line jobs on the chopping block.

Google, should they win, is donating their money to Go charities, STEM education and UNICEF.

So they're doing what you want them to (I can't find a summary of how they're allocating the money across each category). Personally I think the work UNICEF is doing to help women in developing countries is more important than Go charities, but I guess their choices should satisfy everyone.

I wouldn't worry about a quick shift like that. You can look to the Chess world, there are still plenty of masters and grandmasters earning their bread. There's still lots of interest in the human vs. human aspect of the game. In lectures, some GMs make good use of those widely available AIs for analysis, too.

Why is it a denial of anything? Machines can go faster than we can and we still have the 100m dash.

I follow neither Go nor martial arts, but there seem to be some interesting parallels here with some emotional reactions to what appears to be the relative weakness of karate or kung fu versus grappling in UFC. The mystical aura of these martial arts as traditionally practised for hundreds of years suddenly falls away in the face of what often seems like brute force.

In fairness, UFC fairly severely limits what can be done. To begin with, they use gloves, and things like eye gouges and finger locks are illegal.

Because strategies like "ripping out someone's intestines" are illegal, boxing and wrestling have an unfair advantage because they don't have to worry about that stuff to begin with. For more realistic fighting situations, see: https://en.wikipedia.org/wiki/Lei_tai

The eyes are small targets. The only time you can reliably gouge them is when you're already winning a grapple, at which point it's superfluous. Finger locks are similar, with the additional disadvantage of being less likely to end the fight even if successfully applied. Admittedly, UFC rules now also take entertainment into consideration, but early UFC rules mostly just banned things that risked permanent injury for very little tactical benefit.

Small correction: they didn't use gloves in early tournaments.

it was high time that these ancient martial arts were shaken from their comfortable reverie. what were once based on actual fights centuries ago had devolved into pedantic adherence to ritual and form. it actually started way before the UFC: bruce lee shook the kung fu establishment with jeet kune do in the 1960s. helio and carlos gracie shook up jiujitsu by incorporating real world situations. you are actually now seeing wing chun make a comeback. look at conor mcgregor's unorthodox style and you will see phenomenal angles that come from kung fu and tae kwon do (karate), spinning leg kicks that are actually landing, switch stances, etc.

i believe all these ancient arts are making a big comeback. the roots are still there, they just got concealed over the centuries.

Now go read "The Player of Games", by Iain M. Banks, to see that idea taken to extremes.

That was the first Culture novel I ever read, and still probably my favourite. Time to dig it out and read it again!

Of course, the title of that book is delightfully ambiguous as to which game and which player it is referring to.

Just read the book and of course was thinking of it when reading his description of Go

To me the book was always about Go taken to the extreme. The Game is the Empire and vice versa. Imagine if Japan had been more dominant in previous wars, then conquered space - then you'd have the Player of Games.

Superb book.

Art is, mostly, a domain of knowledge not yet claimed by science. When a technique turns art into science, it means that humans are ready to tackle even harder problems. Think about medicine before and after microscopes. A holy art turned into boring science: a huge win for the human race.

If an AI won the next Hugo award, I would rejoice. It wouldn't mean the end of literature at all; it would mean that humans are ready to produce an even higher form of literature.

We already have a successful AI classical music writer


We have all kinds of visual art made by computers and AI - from painting from photos to abstract art to 3D renders.

We have computers writing poems and haiku.

The only thing that's missing is the conceptual creation, which, let's be honest, most human artists struggle with as well. So writing an interesting story is not yet in the AI's domain.

Art is more personal though. There is no single path to "winning" in art, and "good art" tends to mean different things to different people.

I'm sure soon (if not now) AI can easily create art that regurgitates popular trends in the past, and perhaps some artists may find a way to use AI / other algorithmic techniques in a way that complements their personal vision. But AI is a long way off from replicating the quirks of human nature, the unique personalities and personal visions of humans. Until that happens, I can't see terribly interesting art emerging from AI alone.

None of this stuff is that good yet. I see no fundamental limit, but let's not pretend that machine-generated music or poetry is as good as the best human stuff, yet.

Actually, my very first example, the classical music one, IS that good. The guy created and sold 13 different albums:


And they were well accepted by the music community.

Ok, I'll have to listen. I have his computer musical creativity book. Guess I should finish reading that too.

I disagree. Art is mostly not a "problem" to be "solved" by science. Art is not graded in a scale of difficulty, from "easy" to "harder" art that mankind has to gradually reach.

Literature is not a lower form of art that we must strive to automate so that we can dedicate ourselves to more "complex" forms.

You are confusing the unknown with art.

You confused the problem statement. What is being solved is "how do we create an AI that can produce art", not "art".

Maybe. That's definitely not how I read it. Example:

> If an AI won the next Hugo award, I would be rejoiced. It wouldn't mean the end of literature at all; it would mean that humans are ready to produce an even higher form of literature.

To me this seems to be claiming that what we have now is a form of "lower" literature, to be tackled by AI so that humans can produce "an even higher form of literature". But, of course, literature isn't graded in a scale of "low" to "high". (Well, there is lowbrow and highbrow, but that's something else).

The mention of medicine as "holy art turned into boring science" (already somewhat dubious) also seems to point to the idea that it is art that's being "solved". But I admit I might have misread it.

By the way, I don't rule out that art can be produced by an AI (whatever that means). I subscribe to the notion that art is in the eye of the beholder, so if humans can find meaning in something produced by a non-human, that's probably valid art!

> To me this seems to be claiming that what we have now is a form of "lower" literature, to be tackled by AI so that humans can produce "an even higher form of literature". But, of course, literature isn't graded in a scale of "low" to "high". (Well, there is lowbrow and highbrow, but that's something else).

Being "low" or "high" is all dynamic. We already have a good example: the advertisement industry. When a way of advertising your product first comes out, it is fresh and catches people's eyes. As more and more advertisers follow suit, it becomes a bad ad, and advertisers are forced to find new ways to attract people. Basically the criteria for good ads change all the time, but that doesn't kill the ads industry.

Now imagine if AIs can write sci-fis that are "good" according to today's criteria. That would mean there will be loads of "good" sci-fis in the market, and people soon get tired of it. Now sci-fi authors have to come up with more creative ways of writing good sci-fis.

So AIs being able to produce literature means more variations and faster iteration in literature style, much like the ads industry today. I don't know whether this is a good or bad thing, but it is certainly far away from the death of literature.

In general, I don't have a problem with your opinion for all human endeavors. I readily accept that many of them can be optimized and automated, indeed freeing humankind to pursue worthier goals.

I'm specifically objecting to your notion of art.

The advertisement industry is not a good analogy. It can indeed be improved, possibly by automated means. In contrast, the progression from "good" to "better" art doesn't work like that -- if it even exists at all! What is your measure of quality, anyway? Complexity? But sometimes minimalism is preferred in art. Maybe how many people like it? It doesn't work either; a lot of people like stuff that is not enjoyed by the majority.

When is art "better"? How can it be "improved"?

PS: the Sci-Fi market is already flooded by below-average human writers, so we don't need an AI to picture this nightmare scenario of good SF writers struggling to sell their books :P

> AI without emotion, philosophy or personality just comes in and brushes all of that aside and turns Go into a simple game of mathematics.

Just wait until machines start producing top-notch research in experimental fields such as chemistry and physics...

I could see an AI with huge access to data and a database of existing publications finding potential correlations in disparate datasets, generating hypotheses and the experiments required to test them, and forming conclusions based on the results of those experiments. At least Go/Chess/Sports/Art have value in both the action and the observation of the act, but when human-driven research can't keep up with AI-driven research, scientists might need to rethink how to explore new forms of science.

> Now imagine the winning author of the next Hugo Award turns out to be an AI, how unsettling would that be.

I've been thinking precisely about that. I think a book written by a machine will make the NYT bestseller list within our lifetimes (I would give it a 75% chance within 10 years, but that's just a gut feeling).

But aren't the winners of literary awards sort of subjective? I mean, chess and Go are based on beating a direct opponent based on a set of known rules. But getting an award for the best book that is picked by a judge means that you have to hit all the right notes in all the right places for that specific judge or panel. I'm not saying it won't happen, just that it's more subjective

Literary awards are definitely subjective, but I don't think that matters. I'm just saying I think an AI will write a book that a significant portion of the population thinks is very good. The first "good" AI book will likely be from some formulaic genre (think Fifty Shades of Grey) that most people think is trash, but enough people will like it to legitimately propel it to the top of the sales charts.

I also think my kids will live long enough to see an animated movie that is conceived of, written, scored, and animated by an AI.

Music is a much easier problem (tight structure, lots of existing data to quickly analyze), and the animation bit is already being pretty thoroughly explored by procedural generation in games. You'd still need a "director" to pick shots, but most of the other pieces are nearly in place. We have algorithms that can create new environments, and design and animate new characters.

But generating a coherent narrative, and good writing to "implement" that narrative. These are huge problems which - as far as I'm aware - would require major breakthroughs to achieve. Machine translation is still utter garbage, and that's fairly straightforward work. We're nowhere near an AI which actually understands language.

> These are huge problems

Sure. I think they will be solved in the next 100 years though.

the first AI to replicate the success of: an oscar-winning movie; a pulitzer prize novel; a tony award winning musical-- will all at once devalue the IP value of all creative content worldwide by a measurable degree. it will be like watching a global stock market crash in terms of valuation. but, as in chess, and as in go, humans will probably learn from this AI and emulate it as well.

I'm not sure what will happen. Creative works aren't generally fungible. If I order tickets to go see Jack White play, I'm not going to be tempted by Nickelback tickets that are half the price.

you're confusing performance with the underlying intellectual property (e.g., the published song)

It's the same fungibility argument though. The market for sheet music for Willie Nelson's catalog isn't affected by the availability of other music, is it? If I want to play On the Road Again, there's only one IP owner for that.

Depends if you want _that_ music that you remember, or are looking for _some_ music that you'll like. Once you're going in with no prior attachments, price, popularity and expected interest will dominate your choices, at which point computer-generated options will be totally viable.

oh got it, good point. yeah there is a low substitution effect

The first time it happens, it'll likely be released to the public under a human pseudonym, as a turing test for the people.

> Now imagine the winning author of the next Hugo Award turns out to be an AI, how unsettling would that be.

I think that would be pretty awesome and amazing, to be honest.

I think it'd be terrifying.

Imagine the best book you've ever read. Entrancing, enlightening, cathartic. You reach the end, and it's ... perfect. Oh hey, a sequel. Wow, the sequel is just as good as the first book. It expands upon it without diminishing the original -- you feel better, more complete for having read it. Wait, is that a third book in the series? Wow, it's even better than the first two! A fourth -- well, maybe you should go to work now, it's Monday, but the book is so good. Calling in sick once won't hurt anything.

Imagine a perfect series of books, published without end, each better than the last, a new one coming out weekly ... daily ... hourly ...

I've lived that scenario, and the book was called heroin, so imagining it isn't really hard for me.

In the end, all life is is one choice after another, and making good choices over bad ones mostly leads to a happier life.

What you describe is a push situation: the books come out and you have to try to keep up to speed with their release. It could however, also be a pull: whenever you feel like reading an amazing book, you just ask the AI to generate one for you, optionally continuing the last story you read.

Not one, but the best book that you need to read in this particular moment. Full with all the advice that you were seeking, with the right amount of new things that you learn and familiar knowledge that you reinforce. The protagonist casually comments things very related to the open issues in your work, and helps you see the particular issue you're having with your boss from another perspective. With just the right amount of common content so you can comment with your peers at work (perhaps your office pal is reading the story of a side-character in your book - the watercooler conversation is great, he gives you new insights for the reading of this evening - and now you both agree on the discussion thread of last week). Hey, what's that? It seems that the new upgrade is now able to create scenes in Unity with the scenarios that are covered by your next novel. Great! Also there's this interactive package where your work items can be not only an input but also an output and turns your work into a game. By the way your girlfriend has entered your book, let's switch to some of the shared scenes... let's put on our VR glasses... good. Now I only need someone to feed and clean me.

Scary :)

You could also potentially specify constraints on a book and have it generated for you, e.g. generate me a book about a gay Dutch vampire in the 1800s. It could open up a whole new concept of hyper-specialised books tailored to individuals' particular desires and preferences.

The practical problem with this is that, as I understand it, a deep learning system needs a pretty large data set to infer rules from. You can do this with Go because there is a constraint on legal moves and a deterministic win condition, but given how vast the number of potential novels is (if we count the space of all ten-thousand-word collections of grammatically acceptable sentences), the existing number of novels may not be enough to infer a pattern. (Though possibly you could split the problem up by separately doing the natural language processing and abstracting out the plot.)

Ignoring the training problem, apply it to movies: I'd like to see this movie, but starring these actors, directed by this director, with a soundtrack by this composer/band.

This is only a bad thing if you feel compelled or addicted to reading these books. There is nothing wrong with always having a better book ready until you obsess about it.

So, the internet.

> And now an AI without emotion, philosophy or personality just comes in and brushes all of that aside and turns Go into a simple game of mathematics

Well, I am of the opinion that mathematics is the language that subsumes all other kinds of languages and lines of thought. In the end we shall be able to describe every idea or thought in purely mathematical form.

I believe that mathematics is just a tall model of implications built from atoms and relations. To say that math underlies everything is to say that we can model things. To say that math is not in something is to say that no set of atoms and relations can account for that something's behavior at a rate better than chance.

> In the end we shall be able to describe every idea or thought in purely mathematical form.

That is a very ironically imprecise sentiment.

> turns Go into a simple game of mathematics

This can be claimed to be true when we understand how deep neural networks work mathematically.

Even though we do not have a complete understanding of exactly how the networks work, or of what function they are minimizing, we do know that it is a mathematical function, i.e. it has been mathematically modeled. So I think it is safe to say that it does turn it into a game of mathematics.

Yes, the functions themselves are pure mathematical functions, but the way to derive them relies on human intuition (not a mathematical formula). It's like saying: I know how to do 1 times 3, I just do 1 + 1 + 1, but I can't tell you why; then I can't say that 1 times 3 is just mathematics.

Speaking of Hugo Award, all these reminds me too much of Iain Banks's The Player of Games.

An outsider, new to the game, manages to pick it up and successfully challenge top players at a venerated game.

The community's reaction to this is uncannily similar.

AlphaGo did paradigm shifting moves.

How so?

The top comment on Reddit says:

"As a casual player of Go myself, some of the moves that AlphaGo made were crazy. Even one of the 9th Dan analysts said something along the lines of 'That move has never been made in the history of Go, and it's brilliant.' and 'Professional Go players will be learning and copying that move as a part of the Go canon now'."

There was one move that literally caused the 9th dan commentator to do a triple-take. It apparently turned out to be super effective.

Specifically, move 37 at O10: https://youtu.be/l-GsfyVCBu0?t=1h17m45s

Not only did the commentator do a triple-take, but the next white move took Sedol about 15 minutes.

One interesting thing that happened during the time for Sedol's next move was that the 9th dan commentator started referring to AlphaGo as "he".

Yeah, I've been noticing the pronouns thing. In chess challenges I always got the impression that the AI's play style was like a chain chomp. Limited, but ruthless within its limits, and definitely 'mechanical'. In these games the commentators are treating AlphaGo like a person.

I might be imagining it, but I think this has been increasing with each game.

That did not happen. He played a few unexpected moves, but these games will have little to no impact on modern Go theory.

Other than the kake (the shoulder hit on the right), the game might have been a regular top-pro game.

If people didn't know AlphaGo was a machine, and simply played anonymously online against masters, I wonder how they would interpret AlphaGo's personality?

"Go, unlike Chess, has deep mythos attached to it."

While I'm not able to comment on the length/depth of history of chess vs. go, the above statement seems foolish. Chess also has a lot of mythology and mystique attached to it. Champion chess players (perhaps more so a decade or two or three ago) are also treated with a respect that is not casual.

I liken Go to a real-time strategy game. Essentially a game of StarCraft 2 can have an infinite set of 'moves'. When a player wins at StarCraft 2, you can argue that he is wise too.


Just as much as you can argue that for Go. Interestingly enough, the top chess player Magnus Carlsen insists that he's not particularly bright.

StarCraft 2 is less than a decade old. Go goes back three millennia.

Yuioup said that the winning player can be called wise, not the game itself. I doubt there are any three-millennia-old Go players around, but I could be wrong.

I hope you're wrong.

Depends ... I wouldn't mind Laotse being still around somewhere and laughing hard about things going on ...

> And now an AI without emotion, philosophy or personality

Why do you think AI has no emotion, philosophy, or personality? We too are mere machines. Magnificent machines, no doubt, but machines nonetheless.

> Now imagine the winning author of the next Hugo Award turns out to be an AI, how unsettling would that be.

Considering some of the recent Hugo winners, it turns out the story doesn't actually have to be good, so yeah, a computer-written story winning is probably closer than we think.

Let's compare Go and Chess. We all know that Go is more complex than Chess, but how much more?

There's 10^50 atoms in the planet Earth. That's a lot.

Let's put a chess board in each of them. We'll count each possible permutation of each of the chess boards as a separate position. That's a lot, right? There are 10^50 atoms and 10^40 positions per chess board, so that gives us 10^90 total positions.

That's a lot of positions, but we're not quite there yet.

What we do now is we shrink this planet Earth full of chess board atoms down to the size of an atom itself, and make a whole universe out of these atoms.

So each atom in the universe is a planet Earth, and each atom in this planet Earth is a separate chess board. There's 10^80 atoms in the universe, and 10^90 positions in each of these atoms.

That makes 10^170 positions in total, which is the same as a single Go board.

Chess positions: 10^40 (https://en.wikipedia.org/wiki/Shannon_number)
Go positions: 10^170 (https://en.wikipedia.org/wiki/Go_and_mathematics)
Atoms in the universe: 10^80 (https://en.wikipedia.org/wiki/Observable_universe#Matter_con...)
Atoms in the world: 10^50 (http://education.jlab.org/qa/mathatom_05.html)
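As a sanity check, the arithmetic in the analogy works out exactly with the cited orders of magnitude:

```python
chess_positions = 10 ** 40    # Shannon's estimate for chess
atoms_in_earth = 10 ** 50
atoms_in_universe = 10 ** 80
go_positions = 10 ** 170      # legal 19x19 Go positions, order of magnitude

# one chess board per atom of Earth
earth_of_boards = atoms_in_earth * chess_positions         # 10^90
# one such Earth per atom of the observable universe
universe_of_earths = atoms_in_universe * earth_of_boards   # 10^170

print(universe_of_earths == go_positions)  # True
```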

I am not sure that calculating the raw number of positions is a good indication of complexity at a given point. What if most positions are obviously junk in Go while they are more difficult to assess in chess? Not saying this is the case in this particular example, but that's a possibility in theory.

To illustrate your point: you can just add rows to a game of Nim (https://en.wikipedia.org/wiki/Nim) to get a truly enormous state space, without changing the simple winning strategy.
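The Nim point can be made concrete: the whole winning rule is a single XOR over the pile sizes (the "nim-sum"), so adding piles blows up the state space without making the strategy any harder. A minimal sketch:

```python
from functools import reduce
from operator import xor

def nim_sum(piles):
    """A Nim position is a loss for the player to move iff the
    XOR (nim-sum) of all pile sizes is 0."""
    return reduce(xor, piles, 0)

small = [1, 4, 5]                             # nim-sum 0: a loss to move
huge = small + [7, 7, 1000003, 1000003]       # equal pairs XOR to 0

# The state space is vastly larger, but the verdict is unchanged:
print(nim_sum(small), nim_sum(huge))  # 0 0
```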

> What if most positions are obviously junk in go while they are more difficult to assess in chess?

I wouldn't go with most (because I don't know about that), but many of these boards would also be either impossible to achieve (in a normal game) or illegal.

The 10^170 figure is legal positions. It's about 1% of possible board positions. How many of those are sensible is another matter.

This doesn't seem to be the main reason why Go is harder than chess for computers. It was noted that even in 9x9 Go, with a comparable branching factor to Chess, traditional Go programs are still no stronger than on big boards. The main difficulty for Go is that it is much harder to evaluate board positions. So in Chess the depth of the search can be significantly reduced by using a reasonable evaluation function, whereas in Go no such function seems to exist.

>It was noted that even in 9x9 Go, with a comparable branching factor to Chess, traditional Go programs are still no stronger than on big boards.

Are they not? MoGo beat pros of 9 Dan on 9x9 in 2011: https://www.lri.fr/~teytaud/mogo.html

Well, I guess that was more true before the advent of Monte Carlo Tree Search. Even so, note that in the case of MoGoTW in 2011, it played blind Go (which helps the computer), and out of 4 games it won two against a 9p player and lost one to a 5p player. Though that's perhaps better than MoGo's performance on 19x19, it still isn't very good: it doesn't seem much better than MoGo on 13x13, and it performs far worse than computer chess despite a similar branching factor.

The branching factor is much larger: around 75 legal moves after the opening, while chess has at most around 30.

Fuego beat a pro in 2008 using MCTS actually.

The branching factor of 9x9 Go isn't 75. 75 could be the factor in the early game, but the average is somewhere between 40 and 50, versus about 35 in chess. State-space complexity is also considerably higher in chess than in 9x9 Go.

Not sure what you meant regarding MCTS, I never said anything about MCTS not being able to beat pros.

This evaluation function does exist, and it's better than the super-simple chess evaluation function.

See, a chess program needs to generate a lot of valid moves (see Deep Blue, which won because it had dumb but extremely fast hardware move generators), evaluate them, and search very deep, up to 14 plies, over the very few alternatives. Russian chess programmers were better in those days; they came up with AVL trees, for example. But hardware won.

In Go it's completely different. A move generator makes no sense at all, and neither does a depth-14 search. There aren't a few alternatives, there are too many. What you need is good overall pattern matching over areas of interest and an evaluation of those areas. And we saw that this capability outplayed Lee Sedol: Sedol couldn't quite keep up with the re-evaluation of those areas.

Just as in chess, AlphaGo learned the easy lesson that the center is more important than the corners, something Lee forgot during the game. But it's not a deep search; it's a very broad search with a very complicated evaluation function. A neural net is perfect for that function.

> whereas in Go no such function seems to exist.

It does exist: it's the neural net, a pattern recognizer that keeps learning more and more over time.

AlphaGo has a learned evaluation function for each move.

An evaluation function exists, but it is not as simple as it can be for chess.
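As a toy illustration of "the evaluation function is a learned pattern recognizer": below is a hypothetical one-neuron value function trained by gradient descent on hand-picked board features. This is plain logistic regression, nothing like AlphaGo's deep convolutional value network, but it shows the idea of learning an evaluation instead of hand-coding one.

```python
import math
import random

def train_value_function(examples, features, steps=2000, lr=0.1):
    """Learn weights w so sigmoid(w . features(board)) ~ win probability.

    'examples' is a list of (board, outcome) pairs with outcome in {0, 1};
    'features' maps a board to a list of numbers. One-neuron toy model,
    not AlphaGo's deep network.
    """
    w = [0.0] * len(features(examples[0][0]))
    for _ in range(steps):
        board, outcome = random.choice(examples)
        f = features(board)
        p = 1 / (1 + math.exp(-sum(wi * fi for wi, fi in zip(w, f))))
        for i in range(len(w)):
            w[i] += lr * (outcome - p) * f[i]  # stochastic gradient step
    return w

# Toy usage: "boards" summarized as (my_stones, opponent_stones), with a
# win whenever I have more stones on the board.
random.seed(0)
data = [((a, b), 1 if a > b else 0)
        for a in range(10) for b in range(10) if a != b]
w = train_value_function(data, lambda brd: [brd[0] - brd[1], 1.0])
print(w[0] > 0)  # True: learned that a stone lead raises the win estimate
```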

On the other hand, 2^565 is already slightly larger than 10^170. In other words, 565 atoms used as quantum bits can perfectly well encode any possible position.
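Quick check of the exponent arithmetic (2^565 is about 10^170.08):

```python
# 565 bits are just enough to index 10^170 distinct Go positions.
print(2**565 > 10**170)   # True
print(2**564 > 10**170)   # False: 564 bits fall just short
```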

StarCraft has way more possible board states.

There's a Starcraft AI League (http://sscaitournament.com/) you might be interested in.

Why am I feeling a bit scared of all this?

The oldest and strongest emotion of mankind is fear, and the oldest and strongest kind of fear is fear of the unknown. -- H.P. Lovecraft
