A few months back, the expert consensus was that we were many years away from an AI playing Go at the 9-dan level. Now it seems that we've already surpassed that point. What this underscores, if anything, is the accelerating pace of technological growth, for better or for worse.
In game four, we saw Lee Sedol make a brilliant play, and AlphaGo make a critical mistake (typical of Monte Carlo-trained algorithms) following it. There's no doubt that with further refinement, we'll soon see AI play Go at a level well beyond human: games one through three already featured extraordinarily strong (and innovative) play on the part of AlphaGo.
Game 4: https://news.ycombinator.com/item?id=11276798
Game 3: https://news.ycombinator.com/item?id=11271816
Game 2: https://news.ycombinator.com/item?id=11257928
Game 1: https://news.ycombinator.com/item?id=11250871
These kinds of predictions are almost always useless. You can always find people who say it'll take n years before x happens, but no one can predict which approaches will work, and how much improvement they'll confer.
> What this underscores, if anything, is the accelerating pace of technological growth, for better or for worse.
What? This is a non-sequitur. Continued advancement doesn't mean that it is accelerating, and even if this does represent an unexpected achievement that doesn't mean that future development will maintain that pace.
Appreciate it for what it is - an historic achievement for AI & ML - and stop trying to attach broader significance to it.
Let's rephrase. For a long time, the expert consensus regarding Go was that it was extremely difficult to write strongly-performing AI for. From the AlphaGo Paper: Go presents "difficult decision-making tasks; an intractable search space; and an optimal solution so complex it appears infeasible to directly approximate using a policy or value function."
For many years, the state-of-the-art Go AI stagnated or grew very slowly, reaching at most the amateur dan level. AlphaGo presents a huge and surprising leap.
> Continued advancement doesn't mean that it is accelerating
Over constant increments of time, AI has been tackling problems that appear exponentially more difficult; in particular, see Checkers (early '90s) vs Chess ('97) vs Go ('16). The human advantage has generally been understood to lie in the breadth of the game tree, which is nearly equivalent to the complexity of the game.
If we let x be the maximum complexity of a task at which AI performs as well as a human, then I would argue that x has been growing at an accelerating pace over the past few decades.
To be clear, the above refers to specific concepts in Reinforcement Learning.
A policy is a function from state (in Go, where all the stones are) to action (where to place the next stone). I agree that an effective policy function is unlikely, at least one that is calculated efficiently (no tree search); otherwise it's not what a Reinforcement Learning researcher typically calls a policy function.
A value function is a function from state to numerical "goodness", and is more or less one step removed from a policy function: you can choose the action that takes you to the state with the highest value. It has the same representational problems found there.
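To make those two definitions concrete, here's a minimal sketch in Python; the types and helper names are illustrative, not from any actual RL library:

```python
from typing import Callable, Iterable

# A state is where all the stones are; an action is where to place the next stone.
State = tuple   # hypothetical board encoding
Action = int    # hypothetical index into the 19x19 grid

# A policy maps state -> action directly, with no tree search.
Policy = Callable[[State], Action]

# A value function maps state -> estimated numerical "goodness".
ValueFunction = Callable[[State], float]

def greedy_policy(value: ValueFunction,
                  successor: Callable[[State, Action], State],
                  legal_actions: Callable[[State], Iterable[Action]]) -> Policy:
    """Derive a policy from a value function: one-step lookahead, picking
    the action whose successor state has the highest estimated value."""
    def policy(s: State) -> Action:
        return max(legal_actions(s), key=lambda a: value(successor(s, a)))
    return policy
```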
The hardest AI problems are the ones that involve multiple disciplines in deep ways. Here's a top tier artificial intelligence problem: given a plain English description of a computer program, implement it in source code.
There might be some cases where this is possible, and others that are bound to fail.
Those are the kinds of difficult problems in AI, which combine knowledge, understanding, thought, intuition, inspiration, and perspiration, or demand outright invention. We would be lucky to make linear progress in this area, let alone see exponential growth.
I think there's certainly an impression of exponential progress in AI in popular culture, but the search space of problems like these is greater than factorial in size, and I think hackers should know that.
Your intuition is mistaken. Go is indeed "easier to understand" than Chess in terms of its rules, but it is arguably harder to play well and has a way larger search space, which makes it less amenable to brute force, and this was precisely why people thought it'd be impossible for a computer to play it consistently at champion level.
I don't think the achievement of AlphaGo is solely due to increased processing power, otherwise why did people think Go was such a hard problem?
Are human champions not subject to those same difficulties of the game, though? When you're pitting the AI against another player who's also held back by the large branching factor of the search tree, then how relevant really is that branching factor anyway in the grand scheme of things? A lot of people talk about Go's search space as if human players magically aren't affected by it too. And the goal here was merely to outplay a human, not to find the perfect solution to the game in general.
(These are honest questions -- I am not an AI researcher of any kind.)
For example, go players habitually think in terms of "shape". Good shape is neither too dense (inefficiently surrounding territory) nor too loose (making the stones vulnerable to capture). Strong players intuitively see good shape without conscious effort.
Go players will often talk about "counting" a position - consciously counting stones and spaces to estimate the score or the general strength of a position. This is in contrast to their usual mode of thinking, which is much less quantitative.
Go is often taught using proverbs, which are essentially heuristics. Phrases like "An eye of six points in a rectangle is alive" or "On the second line eight stones live but six stones die" are commonplace. They are very useful in developing the intuition of a player.
As I understand it, the search space is largely irrelevant to human players because they rarely perform anything that approximates a tree search. Playing out imaginary moves ("reading", in the go vernacular) is generally used sparingly in difficult positions or to confirm a decision arrived at by intuition.
Go is the board game that most closely maps to the human side of Moravec's paradox, because calculation has such low value. AlphaGo uses some very clever algorithms to minimise the search space, but it also relies on 4-5 orders of magnitude more computer power than Deep Blue.
> Go players activate the brain region of vision, and literally think by seeing the board state. A lot of Go study is seeing patterns and shapes... 4-point bend is life, or Ko in the corner, Crane Nest, Tiger Mouth, the Ladder... etc. etc.
> Go has probably been so hard for computers to "solve" not because Go is "harder" than Chess (it is... but I don't think that's the primary reason), but instead because humans brains are innately wired to be better at Go than at Chess. The vision-area of the human's brain is very large, and "hacking" the vision center of the brain to make it think about Go is very effective.
Sadly, I'm neither an AI researcher nor a Go player; I think I've played fewer than 10 games. I don't know if we truly understand how great Go players play. About 10 years ago, when I was interested in Go computer players, I read a paper (I can't remember the title, unfortunately) that claimed that the greatest Go players cannot explain why they play the way they do, and frequently mention their use of intuition. If this is true, then we don't know how a human plays. Maybe there is a different thought process which doesn't involve backtracking a tree.
In that respect chess is a much simpler problem: you remove material from the board, prefer some locations over others, and so on. Go, by contrast, generally has the same number of stones on the board in any given position, and it's all about balancing local and board-wide gains.
Unless you have or are a PhD student in AI who has kept up with the current deep net literature, I assure you that the whole of AlphaGo will be unintuitive to you. However, if you were an AI PhD student, you likely wouldn't be so dismissive about this achievement.
That and the policy network to prune the branching factor.
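For anyone curious what "pruning the branching factor" looks like mechanically, here's a rough PUCT-style selection sketch, where a policy prior steers the tree search; this is a generic illustration, not AlphaGo's actual code:

```python
import math

class Node:
    """One search-tree node; priors come from the policy network."""
    def __init__(self, priors):
        self.priors = priors     # {action: prior probability from the policy net}
        self.children = {}       # {action: Node}
        self.visits = 0
        self.value_sum = 0.0

def puct_select(node, c_puct=1.0):
    """Select a move by Q + U, where the exploration bonus U is proportional
    to the policy prior. Moves the policy network scores near zero get a tiny
    bonus and are effectively never explored -- that's the pruning."""
    sqrt_n = math.sqrt(max(1, node.visits))
    def score(action):
        child = node.children.get(action)
        q = child.value_sum / child.visits if child and child.visits else 0.0
        n = child.visits if child else 0
        u = c_puct * node.priors[action] * sqrt_n / (1 + n)
        return q + u
    return max(node.priors, key=score)
```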
I would consider it a breakthrough if we could get human beings to do this at a decent rate :)
Oh wait .... https://en.wikipedia.org/wiki/Halting_problem
At ONE task, yes. But humans are average at many things but excel at being able to adapt to many different tasks, all the time. Typical AIs (as we know them now) cannot ever hope to replicate that.
Advancement faster than predicted does mean accelerating advancement, coupled with the (true) fact that people's predictions tend to assume a constant rate of advancement. Actually, all you'd need to show accelerating advancement is a trend of conservative predictions and the fact that those predictions assume a non-decreasing rate of advancement: if we're predicting accelerating advancement and still underestimating its rate, advancement must still be accelerating.
It even seems like this latter case is where we're at, since people who assume an accelerating rate of advancement seem to assume that the rate is (loosely) quadratic. However, given that the rate of advancement tends to be proportional to the current level of advancement (a fair approximation, since so many advancements themselves help with research and development), we should expect growth to be exponential. That's what exponential means: a rate proportional to the current level.
However, the reality seems like it might be even faster than exponential. This is what the singularitarians think. When you plot humanity's advancements using whatever definition you like, look at the lengths of time between them to approximate a rate, and then try to fit that rate to a regression, it tends to fit curves with vertical asymptotes.
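To sketch the math behind the last two paragraphs (my formalisation, a toy model rather than anything rigorous): a rate proportional to the level gives an exponential, while an exponent even slightly above one gives a finite-time blow-up, i.e. the "vertical asymptote":

```latex
% Rate proportional to level: exponential growth, no asymptote.
\frac{dx}{dt} = kx \quad\Longrightarrow\quad x(t) = x_0 e^{kt}

% Rate slightly super-linear in level: finite-time singularity.
\frac{dx}{dt} = k x^{1+\epsilon},\ \epsilon > 0
\quad\Longrightarrow\quad
x(t) = \frac{x_0}{\left(1 - \epsilon k x_0^{\epsilon}\, t\right)^{1/\epsilon}},
\qquad x(t) \to \infty \ \text{ as }\ t \to \frac{1}{\epsilon k x_0^{\epsilon}}
```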
True, but it's pretty refreshing to have a prediction about AI being N years from something that is wrong in the OTHER direction.
Contrary to your point about 'appreciate it for what it is', there is ONE lesson I hope people take from it: You can't assume AI progression always remains in the future.
A general cycle I've seen repeated over and over:
* sci-fi/futurists make a bunch of predictions
* some subset of those predictions are shown to be plausible
* general society ignores those possibilities
* an advancement happens with general societal implications
* society freaks out
Whether it's cloning (à la Dolly the sheep, where people demonstrated zero understanding of what genetic replication was, e.g. that a genetic clone isn't "you"), or self-driving cars (after decades of laughing at the idea because "who would you sue?", society is suddenly scrambling to adjust because it never wanted to think past treating that question as academic), or everyone having an internet-connected phone in their pocket (see the encryption wars... again), or the existence of a bunch of connected computers holding a wealth of knowledge, society has always done little to avoid knee-jerk reactions.
Now we have AI (still a long way off from AGI, granted) demonstrating not only that it can do things we thought weren't going to happen soon (see Siri/Echo/Cortana/etc.), but that it can break a major milestone sooner than almost anyone thought. We've been told for a long time that, because of typical technology patterns, we should expect the jump from "wow" to "WOW!" to happen pretty quickly. We've had big thinkers warning of the complications/dangers of AI for a long time.
And to date, AI has only been a big joke to society, or the villain of B-grade movies. It'd be nice, if just once, society at least gave SOME thought to the implications a little in advance.
I don't know when an AGI will occur - years, decades, centuries - but I'm willing to bet it takes general society by surprise and causes a lot of people to freak out.
> What? This is a non-sequitur. Continued advancement doesn't mean that it is accelerating, and even if this does represent an unexpected achievement that doesn't mean that future development will maintain that pace.
It's not a non-sequitur, but there is an implicit assumption you perhaps missed. The assumption is that the human failure to predict this AI advance is caused by an evolution curve with order higher than linear. You see, humans are amazingly good at predicting linear change. We are actually quite good at predicting x² changes (frisbee catching). Higher than that, we are useless. Even at x², we fail in some scenarios (braking distance at unusual speeds, like 250km/h on the autobahn for example).
The fact that it will maintain its pace is an unfounded assumption. However, assuming that the pace will slow is as unfounded. All in all, I'd guess it is safest to assume tech will evolve as it has in the last 5000 years.
That would be an exponential evolution curve.
Otherwise it's a blanket retort. It's like saying
"There are lots of X".
Ok, name 7. If you get stuck after 2 or 3 you're full of it.
Interesting, people seem to be saying the same about self driving cars.
AlphaGo plays Go. It probably doesn't play Go like a human (because a human probably can't do what it does), but that's OK because it also appears to be better than humans. AlphaGo is interesting not because it has done something impossible, but because it has proven possible a few novel ideas that could find other interesting applications, and adds another notch to the belt of a few other tried and tested techniques.
While growth may be accelerating, this is simply the result of one big paradigm shift in deep learning/NNs. Once we've learned to milk it for all it's worth, we'll have to wait for the next epiphany.
In fact, looking at the rate of change in applications during an "epiphany" period is probably the least useful way to estimate progress and its rate of change.
I believe hmate9 is correct. If this paradigm is exploited to the full, unless we've missed something fundamental about how the brain works, we don't need to bother ourselves with inventing the next paradigm (of which there will no doubt be many), because one of the results of the current paradigm will be either an AGI (Artificial General Intelligence) that runs faster and better than human intelligence, or, more likely, an ASI (Artificial Super Intelligence). Either of those is more capable than we are for the purpose of inventing the next paradigm.
You have missed something fundamental about how the brain works. Namely, neuroscientists don't really know how it works. Neuroscientists do not fully understand how neurons in our brain learn.
According to Andrew Ng (https://www.quora.com/What-does-Andrew-Ng-think-about-Deep-L...):
"Because we fundamentally don't know how the brain works, attempts to blindly replicate what little we know in a computer also has not resulted in particularly useful AI systems. Instead, the most effective deep learning work today has made its progress by drawing from CS and engineering principles and at most a touch of biological inspiration, rather than try to blindly copy biology.
Concretely, if you hear someone say "The brain does X. My system also does X. Thus we're on a path to building the brain," my advice is to run away!"
Recently, we also introduced activation functions in our neural nets, like rectified linear and maxout, purely for their nice mathematical properties and without any regard for biological plausibility. And they do work better than what we had before.
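For reference, the two activations mentioned, sketched with NumPy (the shapes and grouping here are illustrative):

```python
import numpy as np

def relu(x):
    """Rectified linear unit: max(0, x). Chosen for cheap gradients and
    sparse activations, not for biological realism."""
    return np.maximum(0.0, x)

def maxout(x, num_pieces=2):
    """Maxout (Goodfellow et al., 2013): split the last axis into groups of
    `num_pieces` linear outputs and take the max of each group, giving a
    learned piecewise-linear activation."""
    assert x.shape[-1] % num_pieces == 0
    return x.reshape(*x.shape[:-1], -1, num_pieces).max(axis=-1)
```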
But we don't know how the brain works. I think you extrapolate too far. Just because a machine learning technique is inspired by our squishy connectome it does not mean it's anything like it.
I'm willing to bet there are isomorphisms of dynamics between an organic brain and a neural net programmed on silicon, but as far as I know none have been found yet - or at least none are named specifically (please correct me).
Our current assertion is that neural networks basically replicate the brain's function
come on, that's hyperbole
I mean, come on- "the art of creating AI paradigms"? What is that even? You're going to find data on this, where, and train on it, how, exactly?
Sorry to take this out on you but the level of hand-waving and magical thinking is reaching critical mass lately, and it's starting to obscure the significance of the AlphaGo achievement.
Edit: not to mention, the crazy hype surrounding ANNs in the popular press (not least because it's the subject of SF stories, like someone notes above) risks killing nascent ideas and technologies that may well have the potential to be the next big breakthrough. If we end up to the point where everyone thinks all our AI problems are solved, if we just throw a few more neural layers to them, then we're in trouble. Hint: because they're not.
As others have pointed out, we don't really know how the brain works. Neural nets represent one of our best attempts to model brains. Whether or not it's good enough to create real intelligence is completely unknown. Maybe it is, maybe it's not.
Intelligence appears to be an emergent property and we don't know the circumstances under which it emerges. It could come out of a neural network. Or maybe it could not. The only way we'll find out is by trying to make it happen.
Taking a position that neural networks cannot ever result in strong AI is as blind as taking a position that they must.
This is Hacker News, not a mass newspaper, so I think we can take the more nuanced and complex view here.
See, now that's one of the misconceptions. ANNs are not modelled on the brain - not anymore, and not ever since the poor single-layer Perceptron, which itself was modelled after an early model of neuronal activation. What ANNs really are is algorithms for optimising systems of functions. And that includes things like Support Vector Machines and Radial Basis Function networks that don't even fit in the usual multi-layer network diagram particularly well.
It's unfortunate that this sort of language and imagery is still used abundantly, by people who should know better no less, but I guess "it's an artificial brain" sounds more magical than "it's function optimisation". You shouldn't let it mislead you, though.
>> Taking a position that neural networks cannot ever result in strong AI is as blind as taking a position that they must.
I don't agree. It's a subject that's informed by a solid understanding of the fundamental concepts - function optimisation, again. There's uncertainty because there are theoretical limits that are hard to test: for example, the fact that multi-layer perceptrons with a single hidden layer can approximate any continuous function given a sufficient number of hidden units, or, on the opposite side, that non-finite languages are _not_ learnable in the limit (not ANN-specific, but limiting what any algorithm can learn), etc. But the arguments on either side are, well, arguments. Nobody is being "blind". People defend their ideas, is all.
>Taking a position that neural networks cannot ever result in strong AI is as blind as taking a position that they must.
Not really. Right now it's taking the position that there is no practical path that anyone can imagine from a go-bot, which is working in a very restricted problem space, to a magical self-improving AI-squared god-bot, which would be working in a problem space with a completely unknown shape, boundaries, and inner properties.
Meta-AI isn't even a thing yet. There are some obvious things that could be tried - like trying to evolve a god-bot out of a gigantic pre-Cambrian soup of micro-bots where each bot is a variation on one of the many possible AI implementations - but at the moment basic AI is too resource intensive to make those kinds of experiments a possibility.
And there's no guarantee anything we can think of today will work.
Can you explain why this is typical? What can be done against this to strengthen the algorithm?
In all of these games, AlphaGo used close to a constant amount of time per move, while Lee's varied a lot.
Apparently they only recently added a neural net for time management. Seems it is either not the best approach, or just not yet well trained.
When Lee Sedol made the move, the AI was in unknown territory as it hadn't explored down that avenue.
Sounds similar to what a human would do then: you wouldn't spend much time simulating in your head what would happen if your opponent made a very atypical move or a move that would seem very bad at first thought.
So while atypical in the sense of "occurring infrequently", it was not a difficult move to find for a player of that level – all the pro commentators saw it pretty much right away.
This might be the one weakness of AlphaGo, which is interesting.
That AlphaGo can play at this level suggests that similar techniques could help other parts of the infrastructure (like air traffic control), and that would also positively impact the quality of life for many air passengers every year.
Fusion would have similar political problems to fission; and the economics aren't much improved either.
Perhaps if we ever ran out of fissionable material, fusion would become economic.
Fusion is just yet another nuclear reactor design as far as politics might be concerned.
No doubt? Seriously? What kind of knowledge do you have to make such statements? There are plenty of examples where technology has rapidly advanced to some remarkable level, but then almost completely plateaued. For example, space travel or Tesla's work on applications of electromagnetism. Heck, even other areas of AI research.
I really don't see why people here readily assume that this particular approach to computers playing Go is easily improvable. Neither do I see why everyone assumes there will be no discoveries of anti-AI strategies that will work well against it.
With neural networks involved, it's hard to say. And all we have so far is information about, what, 15 games? Some of which were won by people. Mind you, those people had never played AlphaGo before, while the bot benefited from a myriad of training samples, as well as from the Go expertise of some of its creators.
I'm also tired of all the statements about "accelerating progress". It's not like all the AI research of the past was useless until DNNs came along. That's the narrative I often get from the media, but it misrepresents the history of the field. There was no shortage of working ML/AI algorithms in the past decades. The main problem was always at applying them to real-world things in useful ways. And in that sense, AlphaGo isn't much different from Deep Blue.
One big shift in the field is that these days a lot of AI research is done by corporations rather than universities. Corporations are much better at selling whatever they do as "useful", which isn't such a good thing in the long run. We're redefining progress as we go and moving goalposts for every new development.
Uh, click the link in the OP and find out? AI just beat a top 5 human professional 4-1. Go rankings put that AI at #2 in the world.
If AlphaGo improves at all at this point it will have achieved a level well beyond any human.
It is incredibly, ludicrously unlikely that AlphaGo has achieved the absolute peak of its design, given that it went from an Elo of ~2900 to ~3600 in just a few months.
(1) Better timing control. Maybe when the probability of winning dips below, say, 50% but has not yet hit the losing threshold, spend extra time (a toy sketch follows this list).
(2) Introducing "anti-fragility". Maybe even train the net asymmetrically to play from losing positions to gain more experience with that.
(3) Debug and find out why it plays what look like nonsense forcing moves when it thinks it is behind (assuming that is what is actually happening).
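A toy version of suggestion (1), just to make the idea concrete; every threshold and number here is invented for illustration:

```python
def think_time(win_prob, base_time, reserve,
               losing_threshold=0.20, panic_threshold=0.50):
    """Toy time-management rule: spend extra clock time when the estimated
    win probability dips below 50% but the game is not yet considered lost.
    All thresholds and the 0.25 reserve fraction are made up."""
    if losing_threshold <= win_prob < panic_threshold:
        return base_time + 0.25 * reserve  # dip into the reserve to search deeper
    return base_time
```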
There's another interesting thing. Among the Go community, there might initially have been some misplaced pride. But the pros and the community very quickly changed their attitude about AlphaGo (as they have in the past, when something that seemed not to work proved itself in games). They are seeing an opportunity for the advancement of Go as a game. I think a lot of the pros are very curious, even excited, and might be knocking on Google's doors to try to get access to AlphaGo.
Granted, chess AI is basically at that point right now. But, go AI has a ways to go.
PS: Honestly, it might be a year or a decade, but I suspect there is plenty of headroom to drastically surpass human play.
That's a big difference. Bugs can be identified and fixed. By the time AlphaGo faces another top professional (Ke Jie?) we can safely assume that whatever went wrong in Game 4 won't happen again.
Consider how much stronger the system has become in the few months since the match against Fan Hui. Another advance like that will place it far beyond the reach of anything humans will ever be able to compete with.
I'm not sure this is true. It made the wrong move at move 79 in game 4, but I'm not sure that should be considered an obvious mistake.
My understanding is that the moves that people said were most obviously mistakes later in the game were a result of it being behind (and desperately trying to swing the lead back in its favor), rather than a cause.
Go rankings weren't designed for ML algorithms, which can have high-level deficiencies and behave erratically under certain conditions.
Will AlphaGo show us better strategies that have never been done before? In other words, can AlphaGo exhibit creative genius? It may have, but that's rather hard for us to observe.
In any case, I am looking forward to future AI vs AI games. It is still fundamentally a human endeavor.
Yep. There's a grave risk that funding for AI research ends up being slashed as badly as in the last AI winter, if people start thinking that Google has eaten AI researchers' lunch with its networks and there's no point in trying. Incidentally, Google would be the first to pay the price of that, since they rely on a steady stream of PhDs to do the real research for them - but now I'm just being mean. The point is, if we overhype the goose that lays the golden eggs, we run out of eggs.
Many go professionals, after reviewing the two sets of games, have stated that it is quite clear how much AlphaGo improved in those four months.
And that's why you assume that it will not skyrocket in the future? Predicting the future is hard either way; ask a turkey before he gets his head chopped off.
> I'm also tired of all the statements about "accelerating progress". It's not like all the AI research of the past was useless until DNNs came along.
It's not that it was useless, but AI is improving as any other field does - faster than most, some say - and it's becoming more useful day by day.
My guess would also be that "with further refinement, we'll soon see AI play Go at a level well beyond human", but it's just a guess.
Will we though? AlphaGo trains on human games, so can it go well beyond that level? Will it train on its own games?
A priori, this makes sense: you don't need to train on humans to get a better understanding of the game tree. (See any number of other AIs that have learned to play games from scratch, given nothing but an optimization function.)
I don't think there is a theoretical upper limit on this kind of learning. If you do it sufficiently broadly, you will continuously improve your model over time. I suppose it depends to what extent you're willing to explicitly explore the game tree itself.
> To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning.
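The shape of that self-play loop, as a generic REINFORCE-style sketch; this is the textbook recipe, not DeepMind's actual pipeline, and all names are illustrative:

```python
def self_play_update(policy_params, play_game, gradient_log_prob, lr=0.01):
    """One policy-gradient step from self-play: play the current policy
    against itself, then nudge parameters toward the moves the winner made
    (and away from the loser's). No human games required."""
    moves, winner = play_game(policy_params)   # [(player, state, action), ...]
    for player, state, action in moves:
        sign = 1.0 if player == winner else -1.0
        # gradient_log_prob returns {param_key: d log pi(action|state) / d param}
        for key, grad in gradient_log_prob(policy_params, state, action).items():
            policy_params[key] += lr * sign * grad
    return policy_params
```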
Any sources for this statement? I've seen it repeated over and over again, but without any specific examples of who those experts were or what they said.
Why is there no doubt? I strongly doubt there even exists a go level that's well beyond human. There is hypothetical perfect play of course, but there is absolutely no way to guarantee perfect play. And while I have no way to judge, I've heard that 9p players may not be all that far removed from perfect play. One legendary player once boasted that if he had black (no komi, I assume), he would beat God (who of course plays perfect go).
There is of course no way to know if that's true or gross overconfidence, but it's certainly possible that there's not all that much room left beyond the level of 9p players.
AlphaGo will no doubt improve, and reduce the number of slips like its move 79 in the 4th game, but it's never going to be perfect, and there's always the chance that it will miss an unexpected threat.
I'm really just objecting to the description of this as "beyond human". Yes, it's good, and it's many orders of magnitude beyond my level, but so are Lee Sedol and other 9p players.