
AlphaGo Is Not the Solution to AI - fforflo
http://cacm.acm.org/blogs/blog-cacm/199663-alphago-is-not-the-solution-to-ai/fulltext
======
superobserver
It may not be "the solution", but no one in Deep Mind said it was. It doesn't
particularly matter how journalists spin this stuff, as they clearly don't
understand it anyway.

By the way, if Monte Carlo search worked, then we would have seen MC beating
9-dan pros long ago, which clearly didn't happen, so the author's definition
of "effective" lacks pragmatic insight - the real limiting factor in this sort
of evaluative judgment.

The author doesn't seem to have watched any of the matches of AlphaGo vs Lee
Sedol, though, as much of the pre-game and post-game discussion brings to
light some important details surrounding specific implementation features in
AlphaGo by Deep Mind. Glossing over that makes this article remarkably
uncritical and uninformative.

Edit: it's one thing to combat hype, but representing the facts as they are in
contradistinction to that hype is another matter entirely. Anti-hype is
perhaps more pernicious in bringing about an "AI Winter" than is drumming up
interest in what AI, or more accurately AGI, will be capable of in the not-
too-distant future. And given current developments, I think the only AI Winter
we have to fear is the one where uninformed persons accost us with misguided
fears about AI and what its capabilities will be.

~~~
ktRolster
_It doesn't particularly matter how journalists spin this stuff, as they
clearly don't understand it anyway._

It matters because it could lead to another AI winter when the hype falls
through.

~~~
qrendel
There's not going to be another AI winter. The past AI winters occurred
because people drastically underestimated how difficult AI would be. MIT
(Seymour Papert specifically) thought computer vision could be solved by some
graduate students as a summer project. Same story for other AI problems, e.g.
NLP, speech recognition, general reasoning and inference, etc.

Once the difficulty of those problems started to be understood, of course
funding dried up. Industry is focused on short-term ROI, so it's hard to get
funding if the profits won't be seen for 50 years.

The difference is that now there's an entire string of profitable markets for
solving near-term AI problems. AI is fundamental to the business models of
some of the world's largest companies (i.e. Google). There's basically zero
risk of an AI winter when we're on the verge of advanced robotics, Watson-
style Q&A systems, self-driving cars, large-scale genomics, etc.

An _AGI_ winter is another story, but most AI work isn't really focused on
serious, full-scale AGI right now anyway. That winter never ended and is
ongoing. Everyone is focused on incremental AI improvements because no one
really knows what's required for full AGI; in the meantime, they're hoping
they'll hit on it while building on known techniques like deep learning,
computational neuroscience, etc.

tl;dr: As long as investors continue to see marketable products for new AI
developments anywhere in the next ~10 years, funding won't dry up again.

~~~
gumby
> There's not going to be another AI winter. [...] The difference is that now
> there's an entire string of profitable markets for solving near-term AI
> problems.

As someone who experienced AI winter (and ended up leaving the field) and even
attended the AAAI panel where the term was coined on stage, I am not so
confident.

What happened then looks like it _could_ happen again. At that time there were
tons of expert systems being deployed with high promises that they would
revolutionize business and practical problem solving. Non-technical people
from other fields were flooding in to sell "solutions." Dedicated workstations
were proliferating.

The self-driving car hype is already at the same level as the expert systems
hype was back then.

> tl;dr: As long as investors continue to see marketable products for new AI
> developments anywhere in the next ~10 years, funding won't dry up again.

Well sure, that's true, but no different from saying "as long as house prices
continue to go up there will be a housing boom."

------
cs702
Yes. We need more people like Langford and LeCun to push back on the AI hype.
It's better for AI researchers to _under_ promise and _over_ deliver than the
other way around. Otherwise we risk another "AI Winter," and that's no fun.

The challenge is that to approximately 99.99% of the seven billion people on
this planet, including roughly every journalist, the recent progress made by
AI researchers is indistinguishable from "magic."

When all those people try to reason about "magical things," they're bound to
reach nonsensical conclusions and say nonsensical things. To them, debating
the promise of technologies like deep learning is a lot like debating how many
angels can dance on the head of a pin.[1]

\--

[1] [http://www.straightdope.com/columns/read/1008/did-medieval-scholars-argue-over-how-many-angels-could-dance-on-the-head-of-a-pin](http://www.straightdope.com/columns/read/1008/did-medieval-scholars-argue-over-how-many-angels-could-dance-on-the-head-of-a-pin)

~~~
onion2k
Underpromise and overdeliver works well when you're building something for a
customer, but when you're trying to demonstrate a technology that has the
potential to be scary in the eyes of the public, downplaying what it can do is
probably the worst possible strategy. The way to take away the fear of the
unknown is to make it unsurprising; for the public to accept AI, their
expectations are going to have to match the reality quite closely. If AI does
less than they expect, they won't buy it; if it does more, they'll see it as
an invasion of their privacy and they won't buy it either.

------
StanislavPetrov
It's just blatantly false to suggest that this victory means "games are
solved". Games like chess and Go are games of complete information. It's just
a matter of brute processing power, even if computers don't currently have the
processing power to "solve" them.

Games of incomplete information, like poker, are much harder, if not
impossible, for computers to master without actual AI. It was only recently
that heads-up limit poker was "solved". Even in this case, the "solution"
meant only that the computer could put forth a strategy that would not lose
significantly over time, not a winning strategy. Games like no-limit hold 'em
are vastly more complex. This is true of heads-up play, and vastly more so for
full-ring games.

AlphaGo's victory has certainly been very impressive, but it's also
predictable. Given enough computing power, winning any game of complete
information is just a matter of brute computing force. We'll see how computers
(and real AI) fare in the future with other games.

~~~
nandemo
> Games like chess and Go are games of complete information. It's just a
> matter of brute processing power, even if computers don't currently have the
> processing power to "solve" them.

That's like saying public-key encryption is broken because we can brute force
decryption.

When we talk about solving a game (or an optimization problem, etc.), we
aren't simply interested in whether the problem is computable. We want an
efficient, polynomial-time algorithm if we're interested in the generalized
problem (where the size of the board is a parameter), or otherwise at least
something that runs to completion in a reasonable time if we're OK with
solving only a particular board size (an instance, in computational-complexity
terms).

The generalized go problem is PSPACE-hard or EXPTIME-complete depending on the
ruleset you adopt. So it's not just a matter of brute processing power.

Now, most people outside theoretical computer science research don't care much
about generalized games and asymptotics; they're fine with solving 19x19 Go
instead of n x n Go. But that doesn't help you much: the only board size that
has been solved is 5x5 [0], and there are only about 10^10 legal positions on
a 5x5 board. The standard 19x19 board has about 10^170 legal positions! Good
luck brute forcing it.

[0]
[http://erikvanderwerf.tengen.nl/5x5/5x5solved.html](http://erikvanderwerf.tengen.nl/5x5/5x5solved.html)

~~~
StanislavPetrov
> That's like saying public-key encryption is broken because we can brute
> force decryption.

No, that's like saying that public-key encryption will eventually be broken
because at the end of the day it's just about having enough brute-force
processing power.

> When we talk about solving a game (or solving an optimization problem, etc),
> we aren't simply interested in whether the problem is computable.

No, we aren't. But games of complete information that ARE computable, such as
chess and Go, can be removed from the discussion. There is no question that
these games of complete information are "solvable" with or without AI, since
every move, from start to finish, can be calculated before the game even
begins - it's just a matter of processing power.

Which brings us to the question of AI and games of incomplete information. By
definition, full-ring no-limit hold 'em is not computable. Even with infinite
processing power, with every possible permutation calculated, there is no
guarantee that a brute-force computer player is going to win at full-ring
no-limit hold 'em. If there are multiple unknown players to act behind you,
there is no way to compute with certainty what the outcome will be. This is
where AI becomes really interesting as far as gaming is concerned.

~~~
terryf
> No, we aren't. But games of complete information that ARE computable, such
> as chess and Go, can be removed from the discussion. There is no question
> that these games of complete information are "solvable" with or without AI,
> since every move, from start to finish, can be calculated before the game
> even begins - it's just a matter of processing power.

In a theoretical universe, sure. In the one that we live in - no.

Let's assume that we can turn every atom in the entire universe that we know
of into a 4Ghz processor, further assume that we can perfectly parallelize the
search and that we can check one board position in one cycle on one of those
processors.

How long would it take to check all possible 19x19 go positions?

(10^80 processors) * (4 * 10^9 checks per second each) = 4 * 10^89 checks per
second.

Total number of valid Go board states: ~10^170, so it would still take on the
order of 10^80 seconds to check all the positions in order to find a perfect
game (for comparison, the universe is only ~4 * 10^17 seconds old).

Clearly, it _is not possible_ to ever brute-force Go, no matter what advances
computing technology makes.

~~~
xigency
This argument surely discounts other computing models, like quantum computing.

~~~
terryf
Fair enough; if there is ever a quantum-entanglement-based (or other kind of,
of course) solution to Go, then I will have been proven wrong. We'll see, but
I'm not feeling very worried ;)

------
Houshalter
Ugh, I've been following every AlphaGo discussion I can find, and it's always
the same debate about what its significance is. I just want to make this final
comment.

There is no significance to AlphaGo - to people paying attention to AI
research. The methods aren't incredibly novel, the idea that CNNs could make
very good value approximators isn't new, etc. I predicted computers would be
beating expert Go players by the end of 2015, and was actually really
disappointed when I was wrong. Turns out I wasn't, but it had just been kept
secret.

However AlphaGo's victory is significant to the general public and the
skeptics. It's a sign of the rapid progress being made in AI over the last few
years. Rapid progress that shows no sign of halting.

Some of the people I spoke to about my Go prediction thought it might be
possible, but that Go was probably too hard a domain. Hopefully those people
will update their beliefs and be _a little_ more optimistic about NNs, and AI in
general. They are extremely powerful tools that we are just figuring out how
to use. Go will hardly be the last domain they succeed at.

~~~
empath75
I'd like people to make some predictions about what an ai will not be able to
do in the next 10 years.

~~~
noelsusman
I'll go ahead and predict that we won't have human-level artificial
intelligence within 10 years. As in, you could throw literally any task at it
and it would handle it as well as a human and get better at it as fast as a
human would.

~~~
rimantas
As it was said on BBC today: you can go into any kitchen and make yourself a
cup of tea. Good luck with AI doing that.

------
ThomPete
My digestive system isn't the solution to my consciousness either but it's in
some ways a part of it.

There is going to be no "solution" to AI.

Only emerging complexity, which I'm guessing is at some point going to give us
something akin to a semi-aware entity. After that, all bets are off.

------
Eliezer
Who the heck was ever claiming that AlphaGo was an AGI? Straw argument much?

~~~
ilaksh
Well, a bunch of people were confused and still don't know the difference
between AI and AGI.

I actually think it's OK if some people think AlphaGo is more general purpose
than it really is, because the general attitude or at least appropriate
response to the likely development of AGI relatively soon still seems to be
amazingly muted.

~~~
mannykannot
There was a time when AI meant AGI. AGI is a retronym induced by premature
claims of the achievement of AI.

------
nxzero
The idea that AI will be a singular entity/solution/etc. needs to stop; it's
like saying humanity only needs a single type of person. Generally speaking,
the only important measure is when there's not a single person able to
outthink one of the many different instances of AI. That day will come, though
when is not clear.

------
justicezyx
The seemingly "poor" intelligence on display in the article itself,
ironically, undermines its own argument about what intelligence is. In a
sense, the article actually strengthens the case for AlphaGo: AlphaGo does
appear to know Go, while the author seemingly has a weak understanding of AI
and Go, making him a poorly performing intelligent being specialized in
writing articles. Go is generally considered to require more intelligence than
writing ordinary articles. So AlphaGo performs well at a task that requires
more intelligence, and the author performs poorly at a task that requires
less. Does that make AlphaGo a better intelligent being than the author?

------
ludamad
Every accomplishment will seem obvious in retrospect. Who here honestly
thought neural networks were going to be a dominant force in board game AI?
Sure, we can't extrapolate, but that doesn't mean we aren't building up
effective tools for AI.

~~~
mturmon
"Who here honestly thought neural networks were going to be a dominant force
in board game AI?"

The potential for neural networks to learn to play games, by playing each
other, has been apparent at least since NeuroGammon
([https://en.wikipedia.org/wiki/Neurogammon](https://en.wikipedia.org/wiki/Neurogammon)),
in 1989. The "playing each other" mechanism is key, because it allows large-
scale training.

The question is, what games happen to be well structured to allow so-called
"credit assignment" so that you can know what states lead to wins across a
reasonable spectrum of (unknown/unknowable) opponent plays.

According to the OP, it is known that you can do credit assignment in Go.
("For Go itself, it has been well-known for a decade that Monte Carlo tree
search (i.e., valuation by assuming randomized playout) is unusually effective
in Go.") I did not know that, but if so, it would simplify the Go problem,
although the search space remains huge, etc.
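For readers unfamiliar with "valuation by assuming randomized playout", a
minimal sketch might look like the following. The game interface assumed here
(`to_move`, `legal_moves()`, `play()`, `is_terminal()`, `winner()`) is
hypothetical, not any real library:

```python
import random

def playout_value(state, playouts=100):
    """Estimate how good `state` is for the player to move by averaging
    the results of uniformly random playouts to the end of the game.

    Assumes a hypothetical game interface: state.to_move, legal_moves(),
    play(move) -> new state, is_terminal(), winner().
    """
    wins = 0
    for _ in range(playouts):
        s = state
        # Play random legal moves until the game ends.
        while not s.is_terminal():
            s = s.play(random.choice(s.legal_moves()))
        if s.winner() == state.to_move:
            wins += 1
    return wins / playouts
```

Full Monte Carlo tree search layers a selection/expansion policy (e.g. UCT) on
top of this valuation; the surprising empirical fact about Go that the OP
refers to is that even this crude random-playout estimate correlates well with
the true value of a position.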

~~~
cromwellian
Most scientific and engineering advancements are an incremental combination of
what has come before. One can make all the predictions one wants, but until
someone actually builds something working, it's just a hypothesis.

The key advancement in AlphaGo seems to be generating a subset of good moves,
as opposed to selecting them randomly, and estimating the probability that a
given configuration leads to a win. Monte Carlo Tree Search by itself only
gets you to amateur ranks, from what I've read, and won't beat a pro player.
Being able to combine the policy and value networks, and figuring out how to
bootstrap and train them, was where all the work was.
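That combination can be sketched roughly as a PUCT-style selection rule in the
spirit of the published AlphaGo search. This is an illustration only: the
child fields `prior`, `value_sum`, and `visits` are assumed names, not
DeepMind's actual code.

```python
import math

def puct_select(node, c_puct=1.0):
    """Pick the child maximizing Q + U.

    Each child is assumed to carry: prior (a policy-network probability),
    plus value_sum and visits (running MCTS statistics). Q is the averaged
    value estimate (from the value network and/or rollouts); the U bonus
    steers the search toward moves the policy network likes but that are
    still under-explored.
    """
    total_visits = sum(ch.visits for ch in node.children)

    def score(ch):
        q = ch.value_sum / ch.visits if ch.visits else 0.0
        u = c_puct * ch.prior * math.sqrt(total_visits) / (1 + ch.visits)
        return q + u

    return max(node.children, key=score)
```

The prior term is what prunes Go's enormous branching factor down to a small
set of candidate moves, which is why plain MCTS with random move selection
stalls at amateur strength while the combined search does not.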

In hindsight, everything looks like an obvious combination of separate
already-known-to-be-good pieces. I don't think this achievement should be
hand-waved away.

More importantly, the DeepMind folks are being pretty humble about it and not
making the big claims everyone seems to think they're making.

------
Florin_Andrei
Well, the Ford Model T was not the solution to electric, self-driving
suburban commuting either.

------
unabst
Weak AI will never become self-aware. Actually, neither will strong AI, unless
we include awareness as one of its strong traits (some do, some don't).

Awareness is a virtue in and of itself, and any machine that has it will
require an implementation of awareness itself, regardless of the machine's
intelligence. Awareness is about self-motivation and the creation of one's own
goals. It's the generating of meaning to back one's words as well as one's
actions, as even we often struggle to do.

As far as I am aware, no one at Deep Mind or IBM is actively working on intent
and self-awareness. So for the foreseeable future, if a two-legged drone shows
up at your door and points a gun at you, it won't be because AI has awakened.
It'll be because someone at the CIA wants to kill you.

------
turingbook
This is a repost. The original link should be:
[http://hunch.net/?p=3692542](http://hunch.net/?p=3692542).

Other discussion from Yann Lecun
[https://www.facebook.com/yann.lecun/posts/10153426027052143](https://www.facebook.com/yann.lecun/posts/10153426027052143)
and [http://www.information-age.com/technology/applications-and-development/123461099/google-deepminds-alphago-victory-not-%E2%80%98true-ai-says-facebooks-ai-chief](http://www.information-age.com/technology/applications-and-development/123461099/google-deepminds-alphago-victory-not-%E2%80%98true-ai-says-facebooks-ai-chief)

------
1812Overture
Didn't some AI researcher have a quote along the lines of "AI is whatever we
can't do yet"?

~~~
tariqali34
Larry Tesler (source:
[http://www.nomodes.com/Larry_Tesler_Consulting/Adages_and_Coinages.html](http://www.nomodes.com/Larry_Tesler_Consulting/Adages_and_Coinages.html))

"Tesler's Theorem (ca. 1970). My formulation of what others have since called
the “AI Effect”. As commonly quoted: “Artificial Intelligence is whatever
hasn't been done yet”. What I actually said was: “Intelligence is whatever
machines haven't done yet”."

