edit: For those saying it's still a long way to beat the strongest player - we are playing Lee Sedol, probably the strongest Go player, in March: http://deepmind.com/alpha-go.html.
That site also has a link to the paper, scroll down to "Read about AlphaGo here".
This is a great advancement, and will be even more so if you can beat Lee Sedol.
There are two interesting areas in computer game playing that I have not seen much research on. I'm curious whether your group, or anyone you know of, has looked into either of these.
1. How to play well at a level below full strength. In chess, for instance, it is no fun for most humans to play Stockfish or Komodo (the two strongest chess programs), because those programs will completely stomp them. It is most fun for a human to play someone around their own level. Most chess programs intended for playing (as opposed to just for analysis) let you set what level you want to play, but the results feel unnatural.
What I mean by unnatural is that when a human who is around, say, an 1800 USCF rating asks a program to play at that level, what typically happens is that the program plays most moves as if it were a super-GM, with a few terrible moves tossed in. A real 1800 will be steadier. He won't make any super-GM moves, but he also won't make a lot of terrible moves.
2. How to explain to humans why a move is good. When I have Stockfish analyze a chess game, it can tell me that one move is better than another, but I often cannot figure out why. For instance, suppose it says that I should trade a knight for an enemy bishop. A human GM who tells me that is the right move would be able to tell me that it is because that leaves me with the two bishops, and tell me that because of the pawn structure we have in this specific game they will be very strong, and knights not so much. The GM could put it all in terms of various strategic considerations and goals, and from him I could learn things that let me figure out in the future such moves on my own.
All Stockfish tells me is that it looked freaking far into the future and that if it makes any other move, the opponent can force it into lines that won't come out as well. That gives me no insight into why that is better. With a lot of experimenting, trying out alternative lines and ideas against Stockfish, you can sometimes tease out the strategic considerations and features of the position that make it the right move.
For your second question, of course, a grandmaster can only tell you why they think they made the move. It may be a postrationalization for the fact that they feel, based on deep learning they have acquired over years of practice, that that move is the right one to make.
It doesn't seem too difficult to have an AI let you know which other strong moves it rejected, and to dig into its forecasts for how those moves play out compared to the chosen move to tell you why it makes particular scenarios more likely. But that would just be postrationalization too...
I think this is wrong. This idea comes from when humans first tried to program computers to do what we wanted them to do. We failed, and it turned out to be really hard. A grandmaster couldn't explain exactly the algorithm he used. But that doesn't mean he couldn't give any useful information at all.
Think of it like describing a tiger. I have no idea how to describe complicated edge detection algorithms and exact shapes. But I can say something like "an animal with orange stripes", and that would be useful information to another human.
Likewise a grandmaster can explain that pawns are valuable in situations like this, or can point to a certain position and say "don't do that", etc. To a computer that information is useless, but to a human it's extremely useful. We already have some pattern recognition and intelligence, we just need some pointers. Not an exact description of the algorithm.
"Likewise a grandmaster can explain that pawns are valuable in situations like this, or that point to a certain position and say don't do that, etc. To a computer that information is useless, but to a human that's extremely useful. "
I think the only reason this is true is that humans share the same low-level (or, in this case, intermediate-level) features in their models of the world, and a common language to share them.
Artificial neural networks' understanding has evolved along a different path, and they probably have a different organization across the levels, but that doesn't make the fundamental mechanism different.
I play competitive chess, and I assure you most moves are made because of players proving in their minds that a move is objectively good.
The reasons for why the player may think the move is objectively good can vary, but they are almost always linked to rational considerations. E.g. that the move increases their piece count, their control of center squares, their attacking opportunities, or is tactically necessary due to the position.
My point being that when grandmaster players play chess, they are just as interested in finding objectively right moves as a computer is. Unless it's speed chess it's rarely a "I suspect this might be good" sort of thing.
(That said, many grandmasters do avoid lines of play they consider too dangerous. World Champion Magnus Carlsen's "nettlesomeness" - his willingness to force games into difficult positions - has been one explanation for why he beats other Grandmasters.)
If the move's objectively good, there would be no variation in moves between players. Since there is variation, I assume different players apply different heuristics for 'good'. And whether the move increases their piece count is a fine justification, but why are you privileging increasing your piece count at this point in this game against this opponent? At some point the answer becomes 'because I learned to'.
Well almost every computer playing chess algorithm uses piece counts to evaluate the quality of chess positions, because barring an amazing tactical combination (which can usually be computationally eliminated past 5 moves) or a crushing positional advantage, a loss of pieces will mean the victory of the person with more pieces.
I would argue you see far more pattern recognition at play in chess than heuristics. Heuristics are more common at lower levels of play.
When grandmasters rely on pattern recognition, they are using their vast repertoire of remembered positions as a way to identify opportunities of play. It's not that they think the move looks right; it's that they have played a lot of tactical puzzles, and because of this pattern recognition they are now capable of identifying decisive attacks that can then be objectively calculated within the brain to be seen as leading to checkmate or a piece advantage.
They don't make the move because of the pattern or heuristic. They make the move because the pattern allowed them to see the objective advantage in making that move.
------
As for your point about a move being objectively good: unless you completely solve the game of chess, there will not always be a single move in every situation that is objectively the best. In many games (and you will see this in computer analysis), 2 or 3 moves will hold high promise, while others will hold less. From an objective standpoint all three of those moves could be objectively better than all others, but it could be hard to justify that one is necessarily better than another.
The reason for this is partly that between two objectively 'equal' moves, there may be a rational reason for me to justify one over the other based on personal considerations (e.g. because I am familiar with the opening, because I played and analyzed many games similar to this line, because I can play this end game well, etc.). Decisions based on those considerations are not what I would call heuristics, because they are based on objective reasons even if heuristics may have contributed to their formation within the mind.
"Well almost every computer playing chess algorithm uses piece counts to evaluate the quality of chess positions"
This is quite wrong. They use a score of which material is only one (although a major) factor.
"because barring an amazing tactical combination (which can usually be computationally eliminated past 5 moves) or a crushing positional advantage, a loss of pieces will mean the victory of the person with more pieces."
Again, this simply isn't true. For one thing, talk of "piece counts" and even "increasing piece counts", rather than material, is very odd coming from a serious chessplayer. Aside from that, time, space, piece mobility and coordination, king safety, pawn structure, including passed pawns, how far pawns are advanced, and numerous other factors play a role. All of these can provide counterplay against a material advantage ... it need not be "crushing", merely adequate. And tactical combinations need not be "amazing", merely adequate. And whether these factors are adequate requires more than 5 moves of lookahead because chess playing programs are only able to do static analysis and have no "grasp" of positions. All of which adds up to the need for move tree scores to be made up of far more than "piece counts".
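To make that concrete, a toy static evaluation is just a weighted sum of several such features, of which material is only one term. The feature names and weights below are invented purely for illustration (real engines use far more terms and carefully tuned values):

    WEIGHTS = {"material": 1.0, "mobility": 0.05, "king_safety": 0.5,
               "pawn_structure": 0.3, "passed_pawns": 0.4, "space": 0.1}

    def evaluate(features):
        # features: dict of feature name -> signed value (positive favours White)
        return sum(WEIGHTS[name] * value for name, value in features.items())

    # Down a pawn of material, but with better mobility and a passed pawn:
    print(evaluate({"material": -1.0, "mobility": 6.0, "king_safety": 0.0,
                    "pawn_structure": 0.5, "passed_pawns": 1.0, "space": 2.0}))

Here the side down a pawn still comes out slightly ahead, which is exactly the "adequate counterplay" point.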
You're right that material is the correct term. I was trying to use language appropriate for someone thinking about programming a chess machine.
I perhaps resorted to hyperbole in my original description for the sake of emphasis. You are correct that at higher levels of play, positional considerations matter far more than material considerations. The advantage does not need to be amazing, merely adequate. However, as material losses accumulate, the positional advantage needed to justify them increasingly moves into the realm of "amazing" and "crushing".
You are right that objectively calculating the positional strength of a position is very difficult to do without immense brute forcing, and likely needs more than 5 moves of lookahead. When I said that, I was referring quite strictly to tactical combinations, where the vast majority of tactical mistakes can be caught quickly.
> If the move's objectively good, there would be no variation in moves between players.
If we could solve chess, this most likely would be true, just as it's true for tic-tac-toe, which anyone can solve in mere minutes once they realize that symmetry allows for only 3 distinct opening moves (corner, middle and edge) and games should always end in a draw unless someone makes a silly mistake.
Granted, there are lots of paths to draw that one might take, but the objectively strongest move is to take a corner, then the opposite corner, which requires the opponent to either try to force a draw or lose, whereas it's not hard to use weak moves to hand either player a victory, even though the game of tic-tac-toe can always be forced into a draw with skilled play.
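For what it's worth, the claim that tic-tac-toe is a draw with perfect play is small enough to check directly with a few lines of negamax (a quick Python sketch, brute-forcing every reachable position):

    from functools import lru_cache

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    @lru_cache(maxsize=None)
    def value(board, player):
        # +1 if the side to move can force a win, 0 if a draw, -1 if a loss
        if winner(board):
            return -1          # the previous player just completed a line
        if "." not in board:
            return 0           # board full: draw
        other = "O" if player == "X" else "X"
        return max(-value(board[:i] + player + board[i+1:], other)
                   for i, cell in enumerate(board) if cell == ".")

    print(value("." * 9, "X"))   # prints 0: neither side can force a win

It prints 0, i.e. with best play from both sides the game is always a draw.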
If I had to speculate at a high level why the Sicilian opening is so popular for black in professional play, it would be ultimately because the Sicilian allows black to obfuscate white's board symmetry, which creates opportunity for counterplay against white's fundamental advantage of having the initiative.
I will say, though, as someone who devoted some serious time to trying to become a master, that opening theory completely changes as you get to the master level and beyond.
In tournaments I would play a solid but relatively obscure opening as black that worked very well as a safe opening to guard against highly tactical book play, but when I really analyzed the entire line going out past 12-15 moves with a grandmaster, I learned that with careful play there actually was a way to gain a slight edge for white with it -- enough to make the opening uninteresting to most grandmasters. It would play well against masters, but not against a top GM who would know how to play out the line correctly.
Very true. And even in long, professional play it's not uncommon to see GMs play highly tactical but unsound openings if they think the other player doesn't know how to beat them. E.g. I saw Nakamura play the King's Gambit against somebody sub-2000 in a professional tournament once (not blitz, a full regular timed game).
It's clear that you don't play chess. Anyone who does understands from experience why "increasing your piece count" (which is a backwards and inaccurate way to put it) is the most important and reliable path to victory ... of course it's not always the right thing, but other things being equal, winning material is advantageous. Asking why gaining material advantage is "privileged" is like asking why a weightlifter "privileges" gaining strength, or why a general "privileges" winning battles or destroying supply lines. It's not "because they learned to", it's because "duh, that's obvious".
And the claim that there would be no variation in moves between players if moves were objectively good is absurd nonsense. Just because not everyone plays the best move, that doesn't mean it's not the best move. Of course different players apply different heuristics -- some players are better than others. But in the vast majority of positions, all grandmasters will, given enough time for analysis, agree on the best move or a small number of equally good best moves. When there are multiple best moves, different grandmasters will choose different ones depending on their style, familiarity, opponents, and objectives (tournament players play differently when all they need is a draw than when they need to win).
Your previous comments, about "postrationalization", are also nonsense. Certainly GMs play intuitively in blitz games, but when taking their time they can always say why a move is better -- and they do just that in postgame analyses, many of which can be seen online. The explanations are given in terms of the major strategic factors of time, space, and material, or other factors such as pawn structure and piece coordination, or in terms of tactical maneuvers that achieve advantages in those factors ... or that result in checkmate (which can be viewed as infinite material gain, and many chess playing programs model it as such).
But chessplaying programs aren't goal driven. They evaluate such factors when they statically analyze a position, but they evaluate millions of positions and compare the evaluations and bubble these evaluations up the game tree, resulting in a single score. That score does not and cannot indicate why the final choice is better than others. Thus
"It doesn't seem too difficult to have an AI let you know which other strong moves it rejected, and to dig into its forecasts for how those moves play out compared to the chosen move to tell you why it makes particular scenarios more likely."
is just facile nonsense grounded in ignorance ... of course it can let you know which other strong moves were rejected, but it cannot even begin to tell you why.
"But that would just be postrationalization too... "
You keep using that word, in completely wrong fashion. The computer's analysis is entirely done before it makes the move, so there's nothing "post" about it. And it makes moves for reasons, not "rationalizations". Perhaps some day there will be AIs that have some need to justify their behavior, but the notion does not apply to transparent, mechanistic decision making algorithms.
It's interesting that Lee's style is similar to Carlsen: Lee Sedol's dominating international performances were famous for shockingly aggressive risk-taking and fighting, contrary to the relatively risk-averse styles of most modern Go professionals.
I'm not sure they're that similar. The whole professional Go world had to change to a more aggressive fighting style in order to dethrone Lee Changho, whereas Carlsen in a sense had to do the opposite to consistently take on the best GMs -- he was very aggressive when he was younger, now he is "nettlesome" which isn't quite the same thing.
Knowing nothing about this particular pocket of the world: how does one live as a "Go professional"? Who pays for this? I don't imagine this is very attractive for sponsors, or do I underestimate how popular this is in Asia?
They are viewed the same as sports professionals in China/Japan/Korea. Go news shares the sports front page, and players are paid well. In China the national Go association is managed under the sports council, with dorms and national squads under the association.
As it's a game more popular amongst older demographics, there tend to be a lot of wealthy patrons and supporters (individuals and companies) who sponsor tournaments and teams. One of the highest-paying competitions is the Ing Cup with a prize of $400,000. Japan has nearly 10 major year-long tournaments every year, totaling over $2mil in prizes, many of which are sponsored by major newspapers.[1] China has domestic year-long leagues, where city teams each have their own sponsors. All the games I mentioned here pay a match fee whether players win or lose.
So yes, it is a popular game in Asia, however less so for the younger demographic and is unfortunately in decline. Most people just don't have the attention span, interest or time these days. :(
But not as popular as Chinese chess (xiangqi) as a game people actually play, though. Go might be more popular than xiangqi as a spectator sport; I don't know.
I was responding to a comment specifically about China.
Actually Janggi, the Korean variant of Chinese chess (yes, there are some rule differences, but it's recognizably the same game - like Chess without castling and en passant) is very popular, though according to Wikipedia currently less so than Go.
People have learned to communicate heuristics, which is very useful for beginning players. A grandmaster may not be able to communicate the nuances of the game to low-level players, but grandmasters do benefit from working in groups, which suggests they can share reasons for a given move, not just propose moves and have others independently evaluate them.
So, while people can't map out the networks of neurons that actually made a given choice, we can effectively communicate reasons for a given move.
Perhaps postrationalization is still useful? Could empathy and mirror neurons help transfer some of that deep learning? Two weeks later the student faces a similar situation, and they get the same gut feeling their teacher did, and that helps them play better than if their teacher hadn't postrationalized?
Absolutely! My point is that human intelligence doesn't actually have any deep insight into how it makes decisions, so we shouldn't be that disappointed that an AI doesn't, either. Humans can postrationalize - explore how they think they make decisions - but they can't tell you how they actually decided. Doing the same for an AI is interesting, but I don't think it's a necessary component of making good decisions in the first place to be able to explain why you made it.
On the other hand, not all actions are decisions. There are plenty of actions we would classify as decisions which are really applications of rules instead. There is a clear rational path in the application of rules. To clarify terminology: for me, decisions are actions in response to computationally intractable challenges, e.g. will that couch fit in my living room (when I am in a store, where size calculations are hard in an unfamiliar context), etc. This could mean that your action is my decision if we are not equally capable of calculating based on rules.
Although, when looking at the Deep Dream images that come as a by product (more or less) of image recognition AI, I get the impression that there ARE ways of communicating things about what a computer is "thinking" when trying to solve problems in areas where humans are traditionally better.
Both points are excellent. I think the second one is more important immediately. If you look at the "expert systems" literature, specifically older expert systems based on logic programming, there usually was an "explanation component" in the general architecture.
However, I think this area has been under-researched despite being obviously important. It would enable very strong learning environments and human/computer hybrid systems. I think there's very direct relevance to safety/security-critical systems, and there's some literature on operators not understanding what is going on in complex systems and how that can be fatal (think power plants and the like).
For #2, is Stockfish implicitly discovering things that a human might explicitly recognize and articulate (e.g. that the pawn structure has an outsized impact on the value of certain pieces)?
If so, could it be just as easily programmed to answer those questions as it evaluates moves? That is, it seems the information is there to form those answers, but it's not the question the AI has been asked.
Historically there has been a back-and-forth with chess engines....
Early engines tried to be really "smart", but consequently couldn't analyze very deeply, as they spent a lot of time on each position. Newer engines mostly churn through really deep analysis, going many layers deep, but make comparatively simplistic evaluations.
"For instance, it wasn't asked to evaluate the pawn structure and provide that analysis as an output, but it certainly could be programmed to do so."
This quite misses the point. These programs do that as a matter of course, for individual positions. But choosing a move is the result of evaluating many millions of positions and comparing the scores through tree pruning. The program cannot tell you that it chose one move over another because the chosen move is generally better for the pawn structure than the move not chosen, because it doesn't have that information and cannot obtain it.
It should be possible. E.g. I've seen people train a neural network or similar to classify images and then "run it backwards" to get e.g. the image that the network thinks is most "panda".
"If so, could it be just as easily programmed to answer those questions as it evaluates moves?"
No. A chess playing program's move score is a value obtained from treewise comparisons of the static evaluations of millions and millions of positions.
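To illustrate the point, here is the skeleton of that search (a minimal sketch; static_eval, legal_moves and make_move are assumed placeholders, and real engines add far more machinery). The only thing that survives the tree comparison is a single number, so nothing about why a line scored well ever reaches the root:

    def alphabeta(pos, depth, alpha, beta):
        # static_eval scores a position from the side-to-move's perspective
        if depth == 0 or not legal_moves(pos):
            return static_eval(pos)
        best = float("-inf")
        for move in legal_moves(pos):
            score = -alphabeta(make_move(pos, move), depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break   # prune: the rest of this branch cannot change the result
        return best     # a single number bubbles up; the reasons behind it are gone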
This reminds me of something I read a long time ago about the Heroes of Might and Magic game. At some point the AI was so good it wasn't fun to play, and it had to be dumbed down.
This is a frequent pitfall in video game AI - people go into it thinking the AI's goal is to win, then learn the hard way that the AI's goal should be to lose in an interesting way.
Nobody says that the red koopa in Super Mario Bros. has bad AI.
I can't remember, but I do think it was an interview with Jon Van Caneghem... Either in a book of game design, or magazine. I have to find it.
Similarly, a long time ago I read about how AoE (Age of Empires) used a lot of computers to play against each other, then compiled statistics on which units actually got used. The idea was to rebalance the units so that all are used almost equally (well, in terms of computer-AI play).
I think these two articles were both in the same book, so I'll have to dig.
HOMM3 is also my favourite game, along with Disciples. I'm a big turn-based strategy fan :)
It just means analyzing more than one move at a time. Any engine supporting the UCI protocol should be able to do it. Like Stockfish, which is free.[1] So you don't need to create a new engine to implement this feature. It might be possible to implement it with a bash script. Certainly with JS.[2]
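For example, a rough Python sketch of the same idea (assuming a stockfish binary on the PATH; MultiPV is the standard UCI option for reporting several candidate lines at once):

    import subprocess

    eng = subprocess.Popen(["stockfish"], stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE, text=True, bufsize=1)

    def send(cmd):
        eng.stdin.write(cmd + "\n")
        eng.stdin.flush()

    send("uci")
    send("setoption name MultiPV value 3")   # report the top 3 lines
    send("position startpos")
    send("go depth 15")

    for line in eng.stdout:
        # each "info ... multipv N ... score cp X ... pv ..." line carries the
        # rank, evaluation and principal variation of one candidate move
        if line.startswith("info") and " multipv " in line and " pv " in line:
            print(line.strip())
        elif line.startswith("bestmove"):
            break
    send("quit")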
Stockfish does have a "skill level" setting, and it's not terrible at faking club-level play (if you have the Play Magnus mobile app, it's just Stockfish at different skill levels). However, as of 2013 the implementation is much more primitive than what I'm suggesting here.[3]
To be clear, even though it's incredibly obvious, I've never seen this idea anywhere else. It first occurred to me after reading the original Guid & Bratko paper on intrinsic ratings in 2006. Happy to continue this offlist if you want to work on it. My e-mail is in my profile.
I actually understood what you meant, I just think it's funny when we use "just" right before saying something that seems complex (even if it isn't actually all that complex).
Having said that, I only added the link to that comic because I didn't want to just write a comment saying "thanks for the links", and I'm only replying again, because I'm hoping it continues the pattern where you keep replying to my replies with super interesting links!
You can successfully make a computer program play at a 2000 FIDE level (say), in that its win/loss/draw results will be consistent with that of a 2000 FIDE human. IPR is a good way of doing this in a quantitative way.
The interesting problem is to make the computer play like a 2000-rated person, not just as well as a 2000-rated person. I'm a big fan of Regan's work, but I don't think IPR on its own is sufficient to make the computer play idiomatic suboptimal moves.
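One direction that might give more idiomatic weakness than the usual best-move-plus-occasional-blunder approach: sample every move from the engine's candidate list with probability weighted by its evaluation, so the play is uniformly a bit worse rather than alternating between super-GM moves and howlers. A toy sketch, with the candidate moves and centipawn scores assumed to come from a MultiPV search:

    import math
    import random

    def pick_move(candidates, temperature=120.0):
        # candidates: list of (move, centipawn_score); a higher temperature
        # flattens the distribution and gives steadier but weaker play
        best = max(score for _, score in candidates)
        weights = [math.exp((score - best) / temperature) for _, score in candidates]
        return random.choices([move for move, _ in candidates], weights=weights)[0]

    # hypothetical scores, for illustration only
    print(pick_move([("Nf3", 35), ("d4", 30), ("a4", -20), ("h4", -90)]))

Tuning the temperature against measured results could, in principle, target a specific rating, though whether the result actually feels like a 2000-rated human is the open question.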
Shredder claimed to have human-like play at lower levels, so I gave that a try. It works surprisingly well at my level, making plausible mistakes in a fairly consistent manner. When I was playing against it I was in the 1200-1500 range, so I don't know how well it does at higher levels. Also, it had a setting where it would rate you and auto-adjust itself for the next game.
It made playing against a program a lot nicer than other chess programs I had tried.
> 1. How to play well at a level below full strength. In chess, for instance, it is no fun for most humans to play Stockfish or Komodo (the two strongest chess programs), because those programs will completely stomp them. It is most fun for a human to play someone around their own level. Most chess programs intended for playing (as opposed to just for analysis) let you set what level you want to play, but the results feel unnatural.
The astronauts in 2001 play chess with the computer, and it sometimes loses on purpose to make the game more interesting for them.
This is actually a problem I've given a decent amount of thought to (although not necessarily reaching a good conclusion), but I think these problems are actually related and not impossible for this simple case. It comes down to the question of in which part of the analysis, and at what depth, a best move came into view. Was it bad when it was sorted at 8 ply but good at 16? Maybe that won't "tell" a person why a move was good, but it gives a lot of tools to help try to understand it (which can be exceedingly difficult right now if a line is not part of the principal variation but ultimately affects the evaluation via an existing "refutation"). But I think the other "difficulty" is that 1800 players play badly in lots of different ways, 2200s play badly in lots of different ways, and even grandmasters play badly in lots of different ways, but very strong chess engines play badly in only a few, sometimes limited, ways.
It's a bit of a game design problem too, since you may want to optimize for how "fun" the AI is to play against. There are patterns of behavior that can be equivalently challenging, but greatly varying in terms of how interesting or enjoyable they are to play against.
I.e. there are various chess bots that can be assigned personality dimensions like aggressiveness, novelty, etc.
A general AI will likely be LESS able to explain why a move is good, for exactly the reason mentioned above (post-rationalization of a massive statistical computation).
No, a truly general AI would play the way we do ... based on goal seeking. Current chess playing programs give moves a single score based on comparing and tree pruning millions of position evaluations, so they cannot possibly articulate what went into that score.
For point #2, the current state of AI allows only for "because you're more likely to win with this move." Today's AI can't reason like a human mind does, it just simulates thousands of scenarios and analyzes which are more likely to be successful, with a very primitive understanding as to why.
When playing, they have a strategy, which they could explain to other go players. They don't just recognize patterns or do brute-force look-ahead. The same is true for good chess players.
There's typically an extra layer (or more) with humans. "Because this puts your Bishop in wasabi which makes it harder for your opponent to extract his Kinglet, making it more likely to win."
Wouldn't it be possible to compare the "top" move with the "runner up" move, compare outcome percentages, and declare whether there is a small or large deviation? Or comparing the "top" move with any other possible move? Or is that too much calculation?
Well, you can make chess engines give you a numeric evaluation for several possible moves. These are typically tuned so that the value of a pawn is around 1 point. A grandmaster 1 or 2 points ahead can routinely convert to a won game, assuming he doesn't blunder.
So if the best move has an evaluation of +0.05 and the second best has -0.02 , the difference is probably a very subtle positional improvement (and the first move may not in fact be better; chess programs aren't perfect). If the best is +3.12 and the second is -0.02, and you can't see why, there's a concrete material tactic you're missing (or, less likely, a major, obvious positional devastation).
But, it can't tell you what you're missing, just the magnitude of the difference.
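If you want those numbers programmatically, a small sketch (assuming the python-chess package and a local Stockfish binary):

    import chess
    import chess.engine

    board = chess.Board()   # or set up the position you're analysing
    with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
        infos = engine.analyse(board, chess.engine.Limit(depth=18), multipv=2)
        best, second = (info["score"].white().score(mate_score=10000) for info in infos)
        # engines report scores in centipawns
        print(f"gap between best and second-best move: {(best - second) / 100:.2f} pawns")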
Seems like a pretty thin line between these conceptions of "understanding". If the AI is programmed to "understand" the rules and objectives of the game, and it uses that information to assess the best moves for various scenarios, then how does that materially differ from human understanding?
It's strange that almost no one commenting on this has the faintest idea how chess programs work. Chess programs score moves by scoring and comparing, recursively, many millions of positions.
Actually, I wrote one as a senior project years ago. It worked exactly that way, except that the number of positions was smaller, owing to the available computing power.
Concept was the same as you describe. No rocket science. I purposely read nothing beforehand, because I wanted to devise an approach and this seemed the most obvious.
Of course, nowadays that is not the only technique in use.
In any case, I'm not sure why you think it impossible to add any additional analysis to the program as it repeatedly scores millions of positions.
To summarize, I believe what they do is roughly this: First, they take a large collection of Go moves from expert players and learn a mapping from position to moves (a policy) using a convolutional neural network that simply takes the 19 x 19 board as input. Then they refine a copy of this mapping using reinforcement learning by letting the program play against other instances of the same program. For that they additionally train a mapping from the position to a probability of how likely it is to result in winning the game (the value of that state). With these two networks they navigate through state-space: first they produce a couple of learned expert moves given the current state of the board with the first neural network, then they check the values of these moves and branch out over the best ones (among other heuristics). When some termination criterion is met, they pick the first move of the best branch and then it's the other player's turn.
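In very rough pseudocode, the flow I'm describing looks something like the following. This is a drastic simplification — the real system uses Monte Carlo tree search with visit counts, priors from the policy network, and rollouts — so treat policy_net and value_net as assumed placeholders:

    import random

    def choose_move(board, n_candidates=8, lookahead=12):
        candidates = policy_net(board)[:n_candidates]   # learned expert-like moves
        best_move, best_value = None, -1.0
        for move in candidates:
            pos = board.play(move)
            # follow a short policy-guided continuation from this candidate
            for _ in range(lookahead):
                replies = policy_net(pos)[:3]
                if not replies:
                    break
                pos = pos.play(random.choice(replies))
            v = value_net(pos)          # estimated probability of winning
            if v > best_value:
                best_move, best_value = move, v
        return best_move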
they also train a mapping from the board state to a probability of how likely it is a particular move will result in winning the game (the value of a particular move).
How is this calculated?
When some termination criterion is met
Were these criteria learned automatically, or coded/tweaked manually?
1. The value network is trained with gradient descent to minimize the difference between the predicted outcome of a certain board position and the final outcome of the game. Actually they use the refined policy network for this training, but the original policy turns out to perform better during simulation (they conjecture it is because it contains more creative moves, which are kind of averaged out in the refined one). I'm wondering why the value network can be better trained with the refined policy network.
2. They just run a certain number of simulations, i.e. they compute n different branches all the way to the end of the game with various heuristics.
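For point 1, the training objective itself is just a regression: push the predicted outcome v(s) toward the actual result z of the game that position came from. A toy numpy sketch, with a linear model and random data standing in for the real convolutional network and self-play positions:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((256, 19 * 19))   # fake encoded board positions
    z = np.sign(rng.standard_normal(256))     # fake final outcomes (+1 win, -1 loss)

    w = np.zeros(19 * 19)
    lr = 0.01
    for _ in range(200):
        v = np.tanh(X @ w)                                  # predicted outcome in [-1, 1]
        grad = X.T @ ((v - z) * (1 - v ** 2)) / len(z)      # gradient of the squared error
        w -= lr * grad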
This was the question which originally led me to lose faith in deep learning for solving go.
Existing research throws a bunch of professional games at a DCNN and trains it to predict the next move.
It generally does quite well but fails hilariously when you give it a situation which never comes up in pro games. Go involves lots of implicit threats which are rarely carried out. These networks learn to make the threats but, lacking training data, are incapable of following up.
The first step of creating AlphaGo worked the same way (and actually was worse at predicting the next move than current state of the art), but Deep Mind then took that base network and retrained it. Instead of playing the move a pro would play it now plays the move most likely to result in a win.
For pros, this is the same move. But for AlphaGo, in this completely different MCTS environment, they are quite different. Deep Mind then played the engine against older versions of itself and used reinforcement learning to make the network as accurate as possible.
They effectively used the human data to bootstrap a better player. The paper used a lot of other cool techniques and optimizations, but I think this one might be the coolest.
> How can a human ever get better than their teacher?
By learning from other teachers, and by applying original thought. Also, due to innately superior intelligence. If your IQ is 140, and that of the teacher is 105, you will eventually outstrip the teacher.
I concluded that the all-time no. 1 master Go Seigen's secret is: 1. learn from all masters; 2. keep inventing/innovating. Most experts do 1 well, and are pretty much stuck there. Few are good at 2. I doubt that computers can invent/innovate.
I would have thought (he says casually) that some kind of genetic algorithm of introducing random moves and evaluating outcomes for success would be entirely possible, no?
It's because they have a much larger stack size than a human brain (which does not have a stack at all, but just various kinds of short term memories). An expert Go player can realistically maybe consider 2-3 moves into the future and can have a rough idea about what will happen in the coming 10 moves, while this method does tree search all the way to the end of the game on multiple alternative paths for each move.
Not true. Professional go players read out 20+ moves consistently. Go Seigen's nemesis Kitani Minoru regularly read out 30-40 moves.
As an AGA amateur 4 dan I read 10 moves pretty regularly, and that's including variations.
And if the sequence includes joseki (known optimal sequences of 15-20+ moves), then pros will read even deeper...
Yes, the latter number was perhaps too conservative; no doubt deeper predictions are easily possible, but I doubt even expert players consider many alternative paths in the search tree. They might recognize overall strategies which reach many moves into the future, but extensive consideration of what will happen in the upcoming moves is probably constrained to only a few steps, at least relative to the number and depth of paths that AlphaGo considers.
I think a key missing component to crowd success on real expert knowledge (as opposed to trivia) is captured by the concept of prediction markets. (https://en.wikipedia.org/wiki/Prediction_market) The experts who are correct will make more money than the incorrect ones and eventually drive them out of the market for some particular area.
That's no counterpoint because the World team (of which I was a member) was made up of boobs on the internet, not players of Kasparov's strength, which was the premise of the question you responded to.
The easy thing about combining AI systems is that they don't argue. They don't try to change the opinion of the other experts. They don't try to argue with the entity that combines all opinions; every AI expert gets to state its opinion once.
With humans on the other hand, there will always be some discussion. And some human experts may be better at persuading other human experts or the combining entity.
I think it would be an interesting thing to try after they beat the number 1 player. Gather the top 10 (human) Go players and let them play as a team against AlphaGo.
This is nonsense. To combine AI systems requires a mechanism to combine their evaluations. The most effective way would be a feedback system, where each system uses evaluations from other systems as input to possibly modify its own evaluation, with the goal being consensus. This is simply a formalization of argumentation -- which can be rational; it doesn't have to be based on personal benefit. And generalized AI systems may well some day have personal motivations, as has been discussed at length.
The key part is that they basically just play out all the possible permutations, and the next permutations, and so on, get a probability of winning for each path, and take the best. It is indeed a very artificial way to be intelligent.
Hey Inufu. I just replayed the games and have to say that the first game the bot shows very high quality plays.
In the next two games, it seems like Fan Hui did not perform as well as in the first (as opposed to the computer being clearly better than him). Were the games played in a row?
Regardless, I'm looking forward to the games with Lee Sedol. I studied in his school in Korea, and personally know how hard it is to get to that level.
My assessment is that the bot from those games will NOT beat Lee Sedol. So train it hard for March :)
You can see that in the first game the bot played really solidly without risk-taking and didn't want to lose even a few stones. You could say that it played very conservatively but solidly. It won by a tiny margin (2.5) so Fan Hui probably concluded that the bot would win every game by a similar small margin if he didn't change the style of play. I'm sure that in the first game Fan Hui was sounding the bot out for strengths and weaknesses, seeing if it knew all the tesujis and josekis and whatnot.
So from then on you see Fan Hui trying to mix it up and play more aggressively and what is very interesting is that he got outplayed in every game, even to the point of losing a big group and resigning.
So - if you play conservatively and tentatively and solidly, it'll beat you by a sliver; if you try to out-think it, it'll nail you. At least at the 2-dan pro level.
I'd be hesitant to call a Lee Sedol victory ahead of time. We know that in chess Kasparov beat IBM's bot initially, but then IBM tweaked it and within a couple of years the bot was too strong. Even though go is much harder than chess, I predict that if Google loses this time, and doesn't lose by much, they'll win the time after that.
The games were all played on separate days. As Fan Hui mentioned in the video, he changed his strategy after the first game to fight more, so that may explain why it seems his performance changed.
This bot has a very flexible style. It is at ease both in calm point-grabbing positions (first game) and large-scale dogfights (see the second one), where humans used to crush computers.
Lee Sedol is so strong at fighting, this is gonna be a great match between tough opponents.
Japanese 9-dan pros and former Japanese cup holders who played against CrazyStone beat it less than 80% of the time [1], while AlphaGo's win rate against it is 80%, according to a comment below by inufu, an engineer at Google DeepMind.
If transitivity applies then AlphaGo is likely stronger than the average of those former Japanese champions, including Norimoto Yoda, who is currently ranked 187th (about 300 Elo points below Lee Sedol and 300 above Fan Hui) [2].
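For scale, the standard Elo expected-score formula (illustrative only; goratings.org uses its own Elo-like rating model, so the exact numbers differ):

    def expected_score(r_a, r_b):
        # probability that the r_a-rated player beats the r_b-rated player under the Elo model
        return 1 / (1 + 10 ** ((r_b - r_a) / 400))

    print(expected_score(3515, 3215))   # ~0.85: a 300-point favourite wins roughly 85% of the time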
There's a saying in Go circles that there is a substantial gap in playing style or intuition between pros and even top-level amateurs. Whether that is true or not, AlphaGo has definitely crossed the threshold to pro-level play in Go.
By March 2016, Google DeepMind will have improved AlphaGo somewhat, at least through self-play and perhaps more processing power.
The game with Lee Sedol will be an interesting one to watch!
Just to clarify your comment so people aren't confused by the apparently low winning percentages, those are all 4-stone handicap games. (It's still an apples-to-apples comparison.)
I think the 80% win rate against CrazyStone is for the single-machine version. The distributed version won 100% of the time against CrazyStone and 80% of the time against the single-machine version.
2 dan pro is a much stronger strength than 2 dan amateur. The difference between a 2 dan pro and a 9 dan pro is usually just one stone handicap whereas the difference between a 2 dan amateur and a 9 dan amateur would be around 7 stones.
You need to keep in mind that professional Dan levels aren't purely based on ELO or some other objective measure.
You'll find lots of Korean pros who got awarded their 1p for going abroad and teaching. 8p and 9p in Japan, in my recollection, are reserved for those winning one of the major tournaments and defending a title there.
Wow, this is stunning. You guys beat a professional 2-dan player. That happened a lot sooner than expected. There's some kind of exponential evolution going on with AI these days.
Is AlphaGo being made available to the public? I'm a mediocre player, but I'd like to try a few games against it. Current synthetic players don't quite play the same as humans do, and it's a bit jarring. I wonder if the Google AI is more human-like in its style.
Interesting that DeepMind was using Google Cloud for compute. I imagine that the MCTS expansion can become massive. Any chance DeepMind may publish some of the internals: how many instances were used, how computation was distributed, any packages or frameworks used, etc.?
And congrats on achieving this impressive AI milestone!
Thanks, cfcef! Implementation details are in the "Methods" section. Have started experimenting with small GPU ML cloud jobs and the costs do add up. Wanted to get a sense of what a large job looked like, and indeed, AlphaGo is gargantuan: approximately 50 GPUs and a month of training time for the policy/value networks. So, a Google R&D-sized budget would be a prerequisite ;)
The recent 5-game match between Ke Jie and Lee Sedol a few weeks ago was decided in Ke Jie's favour by less than a single point in the fifth game. It literally would have come out differently if they'd used a subtly different (commonly used) scoring ruleset. It's not at all clear who's stronger at their peak.
Keep in mind that professionals are counting throughout the game, and are playing to win, not to win by a lot. So a 0.5 point victory may simply mean the victor was confident in their position and chose not to take unnecessary risks.
This is an impressive achievement. However there are many subtleties involved when humans play against computers. I think only time can tell how big a breakthrough this really is.
It is telling that AlphaGo only won 3:2 in the informal games. As a computer doesn't know the difference between formal and informal, this seems to indicate that AlphaGo isn't truly above Fan Hui in strength. Also, the formal games were played with fast game rules, which may be particularly advantageous to computers. Unlike chess, go accumulates stones on the board throughout the game. Towards the end there are many stones on the board and it is easy for a human to err, while the search space (possible moves) actually gets smaller for the computer, and there is no doubt that the computer has stronger bookkeeping capabilities. So to fairly evaluate human vs computer we may need time rules different from those used in human vs human games.
The paper does not disclose whether the trained program displays true understanding of game rules. Humans don't just use pattern recognition, they also use logic to evaluate game state. While this could be addressed by the search part of the algorithm, the paper doesn't appear to give any indication of whether this was studied. For example, the board position strictly speaking does not determine the game state due to ko rules (so the premise of the paper that there is a valuation function v(s), where s is the board position, that determines game outcome is incorrect). It would be particularly interesting to see how the algorithm fares when there are multiple kos going on at the same time. Also it would be interesting to see how well the algorithm understands long-range phenomena such as ladders and liveness. With a million-dollar challenge in the plan, it is understandable that the Google team may not want to disclose weaknesses of the algorithm, but in the long run we will get to know how robust it really is.
From my experience playing against conv nets I would say if you treat your computer opponent as a human it would be like playing against the hive mind of a group of experts with infallible memory and it is not to your advantage. So one would be better off trying "cheat" moves that human experts do not use on each other and see how well the computer generalizes. Without search and with neural nets alone it is clear that computers do not generalize that well. So it would be interesting to see how well search and neural nets work together and if someone could find the algorithm's weak spots.
>a computer doesn't know the difference between formal and informal this seems to indicate that Alpha isn't truly above Fan Hui in strength. Also the formal games were played with fast game rules,
I'm not sure if I'm misunderstanding you, or you're misunderstanding the situation, but the informal games had faster time controls than the formal ones: the formal games had one hour of main time, while the informal games just had byo-yomi (3 periods of 30s: if you take more than 30 seconds for a move, you "use up" a period).
"so the premise of the paper that there is a valuation function v(s), where s is the board position, that determines game outcome is incorrect"
No, it isn't, any more than the claim that there's a position evaluation for chess is incorrect because of castling and capture en passant. A "position" isn't just where the pieces are on the board, but includes all relevant state information.
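In other words, the s that v(s) ranges over can simply carry that extra information. A minimal illustration of what such a state might include (invented field names, not the paper's actual input encoding):

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass(frozen=True)
    class GoState:
        stones: Tuple[str, ...]        # 19x19 board, flattened: "B", "W" or "."
        to_move: str                   # "B" or "W"
        ko_point: Optional[int]        # intersection currently illegal because of ko
        consecutive_passes: int        # two passes end the game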
I believe that Lee Sedol would sweep all the games if he were to play Fan Hui under the same conditions, which makes the wait until the official match tantalizing. It's fair to say that AlphaGo has mastered Go, but there is a very large difference between a professional who has moved to the west and professional players who are competing at the highest level in regular matches. It's fair to represent Fan Hui as a master of the game, but misleading to represent him as equivalent to currently competing professionals. It is great that we'll get to see a match up against a player who is unquestionably one of the best of all time.
As a non-expert, may I ask (as the term does not appear in the paper): How valuable is the Shannon number in order to evaluate "complexity" in your context?
Since both numbers are out of the realm of brute-forcing, the bigger achievement is because of the more fluid and strategic nature of Go compared to chess. Chess is more rigid than Go, and playing Go employs more 'human' intelligence than chess.
Quoting from the OP paper:
"During the match against Fan Hui, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov; compensating by selecting those positions more intelligently, using the policy network, and evaluating them more precisely, using the value network—an approach that is perhaps closer to how humans play. Furthermore, while Deep Blue relied on a handcrafted evaluation function, the neural networks of AlphaGo are trained directly from gameplay purely through general-purpose supervised and reinforcement learning methods."
"Go is exemplary in many ways of the difficulties faced by artificial intelligence: a challenging decision-making task, an intractable search space, and an optimal solution so complex it appears infeasible to directly approximate using a policy or value function. The previous major breakthrough in computer Go, the introduction of MCTS, led to corresponding advances in many other domains; for example, general game-playing, classical planning, partially observed planning, scheduling, and constraint satisfaction. By combining tree search with policy and value networks, AlphaGo has finally reached a professional level in Go, providing hope that human-level performance can now be achieved in other seemingly intractable artificial intelligence domains."
I will admit to not following AI at all for about 20 years, so perhaps this is old hat now, but having separate policy networks and value networks is quite ingenious. I wonder how successful this would be at natural language generation. It reminds me of Krashen's theories of language acquisition where there is a "monitor" that gives you fuzzy matches on whether your sentences are correct or not. One of these days I'll have to read their paper.
For language generation, AFAIK there is no good model that follows this architecture. For image generation, Generative Adversarial Networks are strong contenders. See for instance:
Was it easy to convince the players to have a match with AlphaGo? Or was there some reluctance, especially now that losing is becoming more of a possibility at even strength?
I don't know about Go professionals, but this might be the last time a human can win against computers. (Or the first time computers will win all the time.) It's a strange honour in either case.
When can we hope to see some form that will allow the public to play against this even if it is pay to play for each game or a weaker PC version? I hope the system does not end up being put away like Deep Blue was.
Also, what kind of hardware results in this level of play and how is hardware correlated with strength here?
I'd love to put it on a public go server and will try to convince people :)
However, this will have to wait until after the match in March, that's our number 1 priority at the moment.
There are graphs in the paper showing how it scales with more hardware.
Deep Blue was only innovative in that it was specialized hardware for this type of search. The algorithms it used were well-established, and as there was no way to play it as a piece of hardware without great expense, there wasn't really a reason to keep it around.
Chess engines you can run today, for free, on your own laptop, are far and away better than Deep Blue (and any human), and I believe still don't reach Deep Blue's raw speed.
I'm curious: is there a Chess league for software? And if yes, how far are they already better (in ELOs) than humans if run on commodity server hardware?
I can't find the claimed Elo for Jonny (the current champ), but Junior (the previous champ) is listed at 3200+. For reference, Magnus's top rating, the highest Elo rating ever, is 2882.
There is a lot of politics in chess programming but the bottom line is that Komodo is currently the strongest program followed by Stockfish (which is distributed under GPL).
In the paper you estimate that Distributed Alpha Go has ELO around 3200 (if I read the plot correctly.) According to goratings.org, Fan Hui is rated 2900 and Lee Sedol is rated 3515. Doesn't that mean you still have work to do before beating Lee Sedol?
The ancient Chinese game of Go is one of the last games where the best human players can still beat the best artificial intelligence players. Last year, the Facebook AI Research team started creating an AI that can learn to play Go.
Scientists have been trying to teach computers to win at Go for 20 years. We're getting close, and in the past six months we've built an AI that can make moves in as fast as 0.1 seconds and still be as good as previous systems that took years to build.
Speaking of which, what would a robot war look like? I imagine a large portion of the effort would be to hack/otherwise persuade the enemy robots to switch sides.
Here's what the US Army Research Lab thinks warfare will look like in 2050. Reading just the Contents is a pretty good TL;DR but basically augmented humans, micro targeting, lots of misinformation, and so much information that decisions have to be automated to the point that humans can only operate "on the loop" instead of "in the loop".
Politics. There are obvious strategies (bomb everything, if war still not over, build bigger bombs and GOTO 10), but that sort of demotivates humans and they stop the war.
It'll be interesting (as in the old fake Chinese curse interesting) to know what happens when less democratic power structures wage war. They are still constrained by economics (trade creates value, if they bomb the shit out of someone they might find themselves cut off from trade, see embargoes and sanctions on Russia) and the possibility of an internal struggle (civil war) is always there.
It will always come back to threatening human lives. You can always smash machines against each other, but ultimately it's pointless until the humans themselves are threatened with their lives.
I suspect it wouldn't be difficult to program a computer to beat humans at League, but no one has put real effort into it because of anti-cheat and the fact that it would be looked at more as "hacking" than an intellectual challenge.
With over 10 000 hours of Dota [1] under my belt I am fairly certain that even with perfect mechanical skills [2] a strong human player will still beat the A.I.
Even at the very top pro level, matches are constantly being won & lost purely based on the initial hero drafting phase. Calculating the optimal draft is way more difficult than Go. It's probably not an exaggeration to say that the search space scale difference from dota draft to Go is about the same as from Go to tic-tac-toe. Because it's not only about the 100+ different heroes grouped into combinations, but also every possible game that can happen then with those combinations.
Then once the game starts, the A.I. may be able to respond to actions extremely quickly, with perfect precision. But what should the responses be? This is not a simple thing to answer and humans keep taking completely different approaches as our understanding of the game keeps evolving.
Also, the very best dota bots can currently only beat absolute beginners who don't understand the game at all yet. It takes a few hundred games of practice for a human to go beyond the best A.I. currently available.
--
[1] A game similar to League of Legends
[2] Directly reading from memory and then directly calling gameplay functions, basically a perfect hack, would get you what is known as "mechanical skills". For example being able to last hit.
I understand what you're saying, but I think you fail to realize that a good A.I. is almost the same thing as a better human brain.
You can just train the A.I. by replaying thousands of professional matches, and then let it train by playing billions of matches extremely quickly. It can even play both sides at the same time, and try out millions of different strategies against each other. It doesn't even matter if there's a limitation on how fast it can click and press keys, since every single action will be perfectly optimal.
Not only that, but you don't even need to write a single line of code to tell it about the rules of Dota. Before you start the training, it doesn't even have to know what each key does, or what happens when you move the mouse. Neural networks are capable of learning all of this from scratch, basically by trial and error.
This is not your typical dota bot. Bots are not A.I.s, so this is a whole different ballpark.
I understand what you're describing, and I believe that this will be possible in the distant future. I just haven't seen any evidence of this being even close to possible with today's technology.
I think the primary problem is the search space size. I've seen this type of learning work on simple 8 bit games, and it seems we may finally be at the stage to handle Go. However Dota has many orders of magnitude more different possible moves at any given situation. The total search space grows incredibly fast after every move.
Thus, I do think neural nets can eventually learn how to play well, it's just that there's simply not enough memory or processing power to achieve any success right now.
My point was that the actual game is what you need to compare - the choice of heroes is superficial compared to the complexity of the game itself, in either case. "Calculating the optimal draft is way more difficult than Go." is simply massively wrong.
The game field in a computer game will be quantized - even if it uses floating-point arithmetic, that's still quantization. Conversely, a go board can be scaled up or down without losing the feel of the game. If the grids were the same resolution, then at a given point in time you have many more choices in go, because you can play literally anywhere.
"Calculating the optimal draft is way more difficult than Go." is correct because you can't calculate the optimal draft without calculating all the possible games that can happen. Every hero is unique, you can't really preserve anything from the game calculations of another hero.
There's a question of what the AI is doing - makes more sense with an FPS:
1. The AI is a program running on the computer, so the input is the state of the game. This is basically just an aimbot, and would be trivial to do - just make it wander around randomly and headshot everything instantly.
2. The AI is looking at the screen like a human player, and has to parse the screen data (we could even let them have direct pixel input, not a camera). This would be much harder.
For a turn-based game like Go or Chess, the distinction is vague because the CV required to parse a board is fairly trivial and orthogonal to the problem of strategy.
I understand that training this thing requires massive amount of computation, so has to be done on a massive cluster to do in reasonable time. Once they have it trained, though, what are the computational requirements like? Would it be feasible to run it on an ordinary PC?
Speaking of Go programs and computation requirements, one of the best performance hacks ever was done by Dave Fotland, the author of the program "Many Faces of Go", which was one of the top computer go programs from the '80s through at least around 2010.
He donated code from MFoG to SPEC, which incorporated it into the SPECint benchmark. So the better a given processor was at running Fotland's go code, the better it scored on SPECint. Since SPEC was one of the most widely reported benchmarks, Intel and AMD and the other processor makers put a lot of effort into making their processors run these benchmarks as fast as they could.
Net result, Fotland had the processor makers working hard to make their processors run his code faster!
Hiya. Reporter here. On the press briefing call Hassabis said that the single-node version won 494 out of 495 games against an array of closed- and open-source Go programs. The distributed version was used in the match against the human and will be used in March. The distributed AlphaGo used in the October match ran on about 170 GPUs and 1200 CPUs, he said.
Video from Nature: https://www.youtube.com/watch?v=g-dKXOlsf98&feature=youtu.be
Video from us at DeepMind: https://www.youtube.com/watch?v=SUbqykXVx0A
If you want to view the sgfs in a browser, they are in my blog: http://www.furidamu.org/blog/2016/01/26/mastering-the-game-o...