
Geoff Hinton on AlphaGo and the Future of AI - tim_sw
http://www.macleans.ca/society/science/the-meaning-of-alphago-the-ai-program-that-beat-a-go-champ/
======
huahaiy
I am actually disappointed with his view. I thought he had a psychology
background and should have known better than to take such an obviously
reductionist view.

What neural networks have demonstrated so far is that they can do a form of
perception given enough ground-truth data. Speech recognition and image
recognition are both forms of perception. Go playing is also mostly about
perception: recognizing good patterns.

There are multiple issues with extrapolating this achievement to anything more
than what it is.

First, perception is only a minuscule part of the brain's functionality. One
only needs to glance at the table of contents of any cognitive psychology
101 textbook to see that. Second, it is not clear that human perceptual ability
is the same as what NNs have demonstrated. For example, human perceptual
learning is mostly unsupervised, not supervised as in NNs.
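
To make the supervised/unsupervised distinction concrete, here is a minimal
sketch (scikit-learn and the toy data are my own choices, not from the
article): a supervised learner is handed the right answer for every example,
while an unsupervised one sees only the inputs and must find structure itself.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
    y = np.array([0, 0, 1, 1])        # ground-truth labels

    # Supervised: fit(X, y) -- the labels are given.
    clf = LogisticRegression().fit(X, y)

    # Unsupervised: fit(X) -- only the inputs are given.
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(clf.predict(X), km.labels_)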

Finally, his dismissal of the "traditional" approach is extremely short-sighted.
He seems to suggest that we can solve AI just by throwing more hardware and
power at NNs. This view is very narrow-minded.

He also seems to overlook the strong possibility that the "traditional"
approach could benefit just as much from advances in hardware and power. I
predict the next breakthrough in AI will come from that direction.

------
jedharris
Hinton has a very balanced perspective. His comments on the short and medium
term dangers of AI are interesting. He considers the main risks to come from
politics, which sounds right to me -- deliberate (mis)use of AI, not drama
intrinsic to AI self-improvement. Look at the current FBI vs. Apple mess, and
our drone program(s).

His comments on his own professional experience, toward the end, are also very
interesting -- not the sort of discussion one usually sees from senior
scientists.

~~~
pinouchon
Hinton seems to refuse to talk about a timeline longer than 5 years, and 5
years seems too short to reach any kind of human-level AI, let alone a self-
improving superintelligence. The real danger comes when we approach the human-
level threshold.

~~~
marvin
Good point. I'm not aware of any high-profile technologist or commentator who
claims self-improving, superhuman artificial intelligence is a likely risk
this decade. That doesn't imply that self-improving or superhuman AI will
never be a risk.

I get the sense that some machine learning/AI researchers downplay this long-
term concern, while at the same time misconstruing discussions around AI
safety as criticism of the _current_ approach to AI.

It must be possible to have two thoughts in our heads at once. Machine
learning is making massive strides at the moment, and we should use it for all
the benefit it's worth while keeping an eye on the _current_ risks that AI
represents -- for instance job loss and displacement, and misuse by
governments for purposes such as surveillance. But it's probably also a good
idea to keep one eye on the horizon, because things are likely to happen
pretty quickly if the state of the art advances further.

It's best to be prepared for this eventuality well in advance, because
successful navigation of a potential risk scenario there requires the
cooperation of a lot of people. And people are slow to change their
expectations.

Don't get me wrong, I greatly appreciate the current discussions around this
subject. There are a lot of good points being made.

------
j2kun
Am I the only one who cringed when Hinton described neural networks as
enabling intuition in machines? I'm really hoping this was just a tortured
analogy for a pop-science audience (and I really hate it when world-renowned
scientists do that). Why not just admit that we don't have anything close to a
deep understanding of _why_ AlphaGo did so well compared to previous
approaches?

~~~
tim333
I don't think it was a tortured analogy - more a guess that human intuition
works much the same way as pattern recognition by a neural network.

------
rdlecler1
I applied to Geoff's lab for a postdoc in 2009. He said that he didn't have
any grant money to take on anyone else. Those were dark days.

------
dang
The story about Hinton's entomologist father is interesting, and reminded me
that he is also a direct descendant of Boole.

~~~
graycat
What's he talking about except some non-linear data fitting with a lot of data
and a lot of variables? In what sense is this anything like _intelligence_?
Moreover, such a fit can be applied to data like that used in the fitting, but
where else? What hope is there that this is any progress at all on AI?
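
For what it's worth, the "non-linear data fitting" framing is easy to make
concrete. A minimal sketch (the toy data, network size, and learning rate are
my own choices): a one-hidden-layer net fit to sin(x) by gradient descent on
squared error -- which is exactly non-linear least squares in many variables.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X)                      # the "ground truth" being fit

    # one hidden tanh layer
    W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
    lr = 0.05

    for step in range(5000):
        H = np.tanh(X @ W1 + b1)       # hidden activations
        pred = H @ W2 + b2
        err = pred - y                 # gradient of 0.5*MSE w.r.t. pred
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H**2) # backprop through tanh
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    print("final MSE:", float((err**2).mean()))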

------
return0
He remarks on the need to simulate the entire brain. However, at the moment,
artificial neurons do really well while numbering a minuscule fraction of the
real neurons in the brain. It's possible that neural nets capture the level of
description required to reproduce intelligence and that we don't need to go
any deeper. Learning in real neurons is far messier and probably more
distributed, but that does not mean all this mess is necessary to improve
current models.

~~~
fla
On the other hand, the human brain consumes approximately 20 watts.

~~~
return0
You mean it's power efficient? Consider that it takes the brain, let's say, ~5
years to learn the stuff that a visual NN learns in days.
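
A back-of-envelope comparison of total energy, for what the power-efficiency
point is worth (the GPU wattage and training time below are my assumptions,
not from the thread):

    # rough energy budgets; all figures approximate
    SECONDS_PER_YEAR = 365 * 24 * 3600

    brain_watts = 20                                         # figure cited above
    brain_kwh = brain_watts * 5 * SECONDS_PER_YEAR / 3.6e6   # ~5 years of learning

    gpu_watts = 300                                          # assumed single-GPU draw
    gpu_kwh = gpu_watts * 3 * 24 * 3600 / 3.6e6              # assumed ~3 days of training

    print(f"brain: ~{brain_kwh:.0f} kWh over 5 years")       # ~876 kWh
    print(f"gpu:   ~{gpu_kwh:.1f} kWh over 3 days")          # ~21.6 kWh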

------
graycat
A better way, when one can do it:

From basic game theory, each _board_ game, such as Go, chess, checkers, or
Nim, has an optimal strategy. Then the only issue left is who moves first. If
the advantage is to the player who moves first, then they should always win;
similarly for the player who moves second. Details in, say,

T. Parthasarathy and T. E. S. Raghavan, _Some Topics in Two-Person Games_.

Well, such a strategy is known for Nim, e.g.,

Courant and Robbins, _What Is Mathematics?_

So, when the strategy is known, following the strategy stands to beat
Hinton's AI. Or, Hinton's million-node AI can _learn_ all it wants, over
millions of games, maybe against itself, and a tiny computer program, if it
has the choice of who moves first, will always win (there are no ties in Nim).
The rule behind that tiny program is sketched below.
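
For concreteness, the known optimal strategy for normal-play Nim is Bouton's
nim-sum rule: always move so that the bitwise XOR of the heap sizes becomes
zero. A minimal sketch (the function name and example heaps are mine):

    from functools import reduce
    from operator import xor

    def winning_move(heaps):
        """Return (heap_index, new_size) making the nim-sum zero,
        or None if the position is already lost (nim-sum == 0)."""
        s = reduce(xor, heaps, 0)
        if s == 0:
            return None              # no winning move exists
        for i, h in enumerate(heaps):
            target = h ^ s           # heap size that zeroes the nim-sum
            if target < h:           # must be a legal reduction
                return i, target

    print(winning_move([3, 4, 5]))   # -> (0, 1): take heap 0 down to 1 stone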

More generally, for now, when a cleanly specified problem can be solved
mathematically, that is, complete with theorems and proofs, that solution is
better than Hinton's AI.

Maybe in time something like Hinton's AI will be able to do research in math
and produce results as theorems and proofs, maybe find the optimal chess
strategy, and then, with the choice of who moves first, never lose at chess, etc.

But here's

Lesson 1: For now, when math can solve a problem, it is better than Hinton's
AI.

So, sure, for startups, here's

Lesson 2: F'get about Hinton's AI. Instead, pick a cleanly specified problem
where one can do some math research and get a solid math solution. Then you
stand to beat intuition, Hinton's AI, and, in practice, about everything else.

Are there some problems now where math can do better than Hinton's AI? Yup.

Point 1: Whatever can be done with Hinton's AI broadly in the economy, one of
the keys to a successful startup is being exceptionally good. Well, a way
never to lose, and usually to win big, is to have the best math solution. So,
that's one of the criteria in picking what a startup is to do. Again, it's
just crucial to be exceptional.

Simple example: Hinton's AI would have one heck of a time competing with the
simplex algorithm on realistic linear programming problems, say, from running
an oil refinery. It would have a worse time competing on non-linear
programming, say, also for running an oil refinery, and a worse time still on
stochastic optimal control. It would be really hopeless competing with
Isaacs's differential game theory in pursuit-evasion games. And there's lots
more.
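
For instance, here is what the linear programming case looks like in practice:
a toy blending LP in the spirit of the refinery example (all the numbers are
invented), handed to an off-the-shelf simplex-family solver:

    from scipy.optimize import linprog

    # maximize profit 40*x1 + 30*x2; linprog minimizes, so negate
    c = [-40, -30]
    A_ub = [[2, 1],    # crude input:   2*x1 + 1*x2 <= 100
            [1, 3]]    # reactor hours: 1*x1 + 3*x2 <= 90
    b_ub = [100, 90]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)   # optimal plan [42, 16], profit 2160

The point being that the solver returns the provably optimal plan directly;
there is nothing to learn from examples.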

