
This famous roboticist doesn’t think Elon Musk understands AI - ehudla
https://techcrunch.com/2017/07/19/this-famous-roboticist-doesnt-think-elon-musk-understands-ai/
======
tbabb
This guy is smart, and dead on:

\- People who don't understand AI are afraid of it; those who do know how
fragile and limited it is.

\- The call for "regulation" of technology that doesn't exist is too vague to
be useful.

\- It's the AI in self-driving cars that has the most immediate potential to
kill or save thousands of people, and it's telling that this isn't the
technology Elon seems to be calling to regulate. Whether or not regulation is
the right thing to do, any argument for/against regulating self-driving cars
could be applied just the same to a hypothetical super AI, but the former is
tied to real, practical problems that exist today.

Brooks clearly knows what's up.

Also, to add my own commentary:

\- The dystopian robot future we should all be afraid of is not the [paperclip
maximizer](https://wiki.lesswrong.com/wiki/Paperclip_maximizer)
Musk and friends wave their arms about, but marketing/business algorithms that
have ripple effects at the scale of societies-- the Facebook, YouTube, and
Google ranking algorithms are examples of this. We could shortly be in a place
where large scale human behavior is shaped by algorithms with more data and
insight about collective human behavior than any single human could have, and
it will be used to optimize for money making instead of stability, fairness,
or cultural values. Some society-shaping decisions/policies could even be made
without any human awareness of the reasoning behind them. This is not less
scary if they're being made by fragile/flaky algorithms.

~~~
TheOtherHobbes
Exactly. Our lack of awareness of political, social, and economic consequences
is far more of a problem than any hypothetical paperclip demon.

AI isn't terrifying.

AI _built with our current values_ is a horrific prospect.

~~~
cirgue
You don't need AI to have a paperclip demon. You just need complex systems
that impact people's lives and aren't well understood. Capitalism is arguably
_the_ example of the paperclip demon, and that will not get better with the
permeation of sophisticated, fragile, poorly tuned machine learning
techniques.

------
Aron
To me, Elon is playing catch-up and has thrown his hat in with the Yudkowsky
club, although Bostrom and other more credentialed people were probably his
vector into it. They were talking about this stuff 10-20 years ago. I haven't
yet seen anything where Elon moves the ball forward conceptually, although
he's a doer, so he's not messing around with writing futurism documents and
nitpicking details of rationality that almost no one is actually capable of
implementing.

On the other hand, Brooks doesn't show any indication he knows what Musk is
talking about and throws out a bad summary of his position. I got nothing from
this article except a slightly lower respect for Brooks.

The real minds to watch, IMO, are the Hinton + DeepMind crew, and I think
Yudkowsky and the fearmongers are largely correct, or at least correct enough
to be taken seriously. I don't think people following the meme 'real AI
researchers know that AI is limited and fragile' are on the right track. So
that's my bias.

~~~
latently
"On the other hand, Brooks doesn't show any indication he knows what Musk is
talking about"

Not quite true: "Tell me, what behavior do you want to change, Elon?"

------
chmaynard
This is getting absurd. An interview with Dr. Rodney Brooks, one of the great
minds working in CS and robotics, has to spend time rebutting uninformed
claims and fear mongering about AI research. There is so much Brooks can teach
us. I look forward to reading his book.

------
natch
People who think they understand AI don't understand AI. Which I think is a
big part of Elon's point. So criticizing Elon this way is rich.

------
Houshalter
And Brooks doesn't understand Musk. He's not saying current AI is a threat.
He's talking about the very long term future. What AI will be like in 30
years, or even further.

It's inevitable that we will eventually solve AI. And when that day comes, it
will be
dangerous. How easy do you think it is to control a being thousands of times
smarter than you? If it was invented today we would have no ability to control
it. Our best AI control mechanisms are just pressing a button to reward or
punish it for its behavior. You can't imagine any way that would fail?

Our slightly larger brains made the difference between swinging in trees and
walking on the moon. But we are only the very first intelligence to evolve.
It's unlikely we are anywhere near the peak of what is possible.

And this will likely happen in our lifetimes. The median expected date
estimated by AI researchers is in the 2040s. Sure they can't possibly predict
it very well, but who else can? And there is something to the wisdom of
crowds.

~~~
cbames89
Why is super-human general AI inevitable?

Do you have references for this 2040 date? I'd love to see who's making this
prediction.

~~~
Houshalter
Because there's nothing magical about the human brain. Evolution created it
through just dumb mutation and selection. Under a bunch of ridiculous
constraints. Like it had to use less than 10 watts of power, and weigh only a
handful of pounds, and it had to be made of meat, and could only be
iteratively improved from whatever happened to work at guiding locomotion in
fish, etc.

Our current transistor tech is already orders of magnitude smaller, faster,
and more efficient than neurons. There's no reason to expect the brain's
software to be much better.

Here's one survey:
[http://www.nickbostrom.com/papers/survey.pdf](http://www.nickbostrom.com/papers/survey.pdf)

~~~
thanatropism
Interestingly, about 18% of the researchers said "never", but these answers
didn't influence the posted CIs. Any reasonable imputation (say, "never =
2350") would both stretch those intervals by a lot and still underestimate the
expert consensus.

~~~
Houshalter
They did account for them in the median AFAIK. Which is the number I gave and
the most appropriate metric.

I don't see confidence intervals mentioned anywhere in the survey, so what are
you talking about? Means and standard deviations are given. But they
are mostly useless as the distribution is very skewed. They would be infinite
if the "never" people were included of course.

------
cs2818
Really glad to hear this perspective.

Over the past seven years most of my time has been spent in robotics research
labs, and I really struggle to reconcile the state of research with the
concerns of those like Elon Musk. I think a series of discussions between the
major figures on each side of this would be really valuable.

------
enkiv2
It's kind of amazing that we're at a point where "Rodney Brooks understands
more about AI than Elon Musk" is news, but here we are. The power of PR is
incredible.

------
cbames89
Does anyone else find it telling that at the same time Musk called for
regulation, Steve the security robot is raging across the internet?
[http://www.npr.org/sections/thetwo-way/2017/07/18/537905142/when-robot-face-plants-in-fountain-onlookers-show-humanity-by-gloating](http://www.npr.org/sections/thetwo-way/2017/07/18/537905142/when-robot-face-plants-in-fountain-onlookers-show-humanity-by-gloating)

------
borplk
"AI" is the "flying cars" of our generation.

