
Elon Musk says Mark Zuckerberg's understanding of AI is 'limited' - mcone
http://money.cnn.com/2017/07/25/technology/elon-musk-mark-zuckerberg-ai-artificial-intelligence/index.html
======
fossuser
They're talking past each other because "AI" has become a useless term.

Musk's comments are about a potential (maybe conscious) artificial super
intelligence and the risks that could come from that if we enable one somehow
without understanding the preconditions.

Zuckerberg is just talking about "AI" in the sense that machine learning can
solve a valuable class of problems.

The super intelligence risk could be serious or not - it's hard to know, since
we haven't really dealt with an intelligence that wasn't created through
natural selection, with its biases for survival. We also don't really
understand how consciousness works.

One risk of AI is that if Musk's argument is right, we won't be as 'lucky' as
we were with nuclear weapons. Enriched uranium is hard to get, so nuclear
weapons are easier to control and hard for an individual to make. Turning on
an AI, or copying the code for one and spreading it, would probably be easier.
If it does end up being dangerous, that's not a great situation.

~~~
6d6b73
>Turning on an AI or copying the code for one and spreading it would probably
be easier.

What people never mention in any discussion of AI is that the AI will be as
limited by the physical world as we are. AI isn't just code; it's also the
hardware that runs the code. That hardware has theoretical and practical
limits, and requires a lot of power. It's not like an AI running on some
peta/exascale supercomputer will be able to jump to your laptop, take it
over, and spread itself to every electronic device in the world.

~~~
fossuser
I think the main argument against this is that once you're in the world of
"super intelligence" you're severely outmatched. If for some reason it had an
interest in staying on, it might not be easy to stop.

~~~
6d6b73
Super intelligence will not happen overnight. AI will keep evolving with us,
and before it's truly conscious it will give us a better understanding of
math, physics, medicine... This will help the AI grow, but at the same time we
will be growing with it. It will be a symbiotic relationship.

~~~
fossuser
Not necessarily - there's the idea of an "intelligence explosion": basically
that while it may take a while to figure out the initial conditions, it might
be able to self-improve rapidly from that point. Also, consciousness may not
be required (or possibly even preferable).

~~~
6d6b73
An "intelligence explosion" is also limited by the physical world. AI won't be
able to come up with a better understanding of physics simply by reading
academic journals. Yes, it might have some good ideas, and find some of the
stuff we're missing simply by connecting the dots, but that in no way will
cause an "intelligence explosion".

Just think how much money and time we have to spend to test just a few of all
the theories in physics. And no, AI will not be able to suddenly come up with
better theories simply by simulating the physical world.

------
TDL
I question whether either of these guys have a good understanding of AI.

~~~
Dzugaru
I'm pretty sure no one on planet Earth has a good understanding of AI yet. I
stopped reading about "what AI is, what AI isn't" completely.

------
pesenti
A majority of the experts in AI would side with Zuckerberg on this one. See,
for example:
[http://blogs.discovermagazine.com/d-brief/2017/07/18/artific...](http://blogs.discovermagazine.com/d-brief/2017/07/18/artificial-intelligence-elon-musk/).
It's not that we shouldn't worry about AI; it's that the issue Musk is
raising - AI as an existential threat - is distracting from the real issues.

~~~
caio1982
Was Isaac Asimov being distracted from the real issues as well just a few
decades ago? Food for thought :-)

~~~
pesenti
This is not about science fiction. It's about science today and in the near
future, and its consequences for policies and regulations.

~~~
caio1982
I honestly believe some of Asimov's concerns regarding AIs are not science
fiction but rather forward thinking and will eventually happen.

~~~
pesenti
"Eventually" is the key word here. Most experts in the field admit to having
no clue how and when these predictions will ever come true. That makes
discussing them an overly speculative exercise.

------
JustAnotherPat
I don't trust Zuckerberg on any issue in which Facebook has much to gain from
one side. We all know how much money he'd like to make by having some AI
programs follow us around 24/7.

~~~
hourislate
One guy is trying to change the world and one guy is trying to enslave it.

When you have the likes of Stephen Hawking agreeing with Musk, why wouldn't
you listen and be wary? Mark doesn't seem like a very intelligent person, just
a very lucky one.

~~~
icebraining
Musk is the guy who created a car that can track and report everywhere you go
using a permanent connection to his mothership, plus films and uploads its
surroundings and even has a camera inside. The only protection is his word
that they won't read the data unless you allow them.

------
josefresco
Zuckerberg sells to / appeals to a mass audience that needs reassurance that
AI will not take over the world and murder their grandkids.

Musk sells to / appeals to techies and those seeking to embrace cutting-edge
technology, who are not scared off by "AI is dangerous" talk.

Different audience, different messaging.

------
hndamien
Wow. Mark used self-driving cars being good for humanity as his example of
why AI will pose no danger - to score a point in a feud with the guy who is
leading the self-driving car revolution while pointing out that AI could
potentially be dangerous. I hold great concerns for Facebook's future.

~~~
hourislate
Let's hope farcebook has no future....

------
MarkMMullin
Facebook is capable of monetizing current ML capabilities to do things like
constantly tuning individual news feeds so that its profit is maximized.
Certainly counterproductive to some degree for modern society, but it is
profitable. Straightforward goal, straightforward outcome.

Elon has claimed that Tesla will have level 5 autonomous vehicles in 2 years.
Yeah, in your dreams, Elon - that's a massive cognitive stack you can't fit in
the vehicle, one we've never built out to such a degree, and your car's
autonomy level is going to be a function of its bandwidth to the cloud anyway.
The mean part of me wonders if Elon plans on covering this wild assertion with
'Well, we can't now. Regulations and stuff.'

If he's just worried about putting a weapon on a solar-powered drone that
patrols some area and shoots baddies - yeah, me too. But that's just human
stupidity; it ain't AI, it ain't intelligent, it's just a dumb program with
effectors. All ML really is these days is really big boundary relaxation
systems, calculating the massive number of coefficients we need to make the
equations work out the way we want. But come on, it's stupider than a rock.
It's an equation solver and it's brittle as hell. As a tool I love what we can
now accomplish, but there's a long road from here to HAL. It's not bloody
magic, it's just math.
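
To make the "equation solver" point concrete, here is a minimal sketch of
what most ML training amounts to: nudging coefficients by gradient descent
until the equations fit the data. The toy data, learning rate, and iteration
count are all invented for illustration; real systems just do this with
millions of coefficients.

```python
# Minimal sketch: "calculating the coefficients so the equations work out".
# Fit y = w*x + b to toy data by gradient descent (all values illustrative).

data = [(1.0, 3.1), (2.0, 5.0), (3.0, 6.9), (4.0, 9.1)]  # roughly y = 2x + 1

w, b = 0.0, 0.0   # coefficients, initialized arbitrarily
lr = 0.01         # learning rate (assumed, not tuned)

for _ in range(5000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y               # prediction error on one point
        grad_w += 2 * err * x / len(data)   # d(mean squared error)/dw
        grad_b += 2 * err / len(data)       # d(mean squared error)/db
    w -= lr * grad_w                        # step against the gradient
    b -= lr * grad_b

print(f"w={w:.2f}, b={b:.2f}")  # converges to roughly w ≈ 1.99, b ≈ 1.05
```

Nothing here "understands" anything: change one data point and the
coefficients shift; feed it data outside the pattern and it fails silently,
which is the brittleness the comment describes.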

------
mcmacintosh
Honestly, I'm pretty appalled by Musk's statements. There's no indication from
recent research that we're close to the "dreadful super intelligence". I think
he has a very naive and sci-fi-ish understanding of the state of AI, and his
fear mongering is irresponsible at best. Or maybe he's right and private
companies are so technologically advanced that they surpass academia by a wide
margin. Somehow I highly doubt that.

------
AndrewKemendo
Mark recruited Yann LeCun, arguably the top "AI" practitioner in modern times
(backprop, anyone?). LeCun agrees with Mark, and I would be surprised if Mark
weren't taking most of his direction in AI from LeCun.

~~~
arcanus
Good point. Another noted expert in "AI" is Andrew Ng (formerly of Google and
Baidu), who stated that "Fearing a rise of killer robots is like worrying
about overpopulation on Mars."

...For some reason Elon is really hung up on gradient-descent methods
terminating all of humanity. We just aren't close to that possibility yet. It
is not even in sight.

~~~
AndrewKemendo
The problem is that both are potentially right.

The majority of the ML field still entirely ignores the AGI discipline - and
in general rightly so, as there are enough narrow problems to solve to fill a
lifetime. That's the perspective of LeCun/Ng etc., because they are
scientists.

The others, Bostrom et al., are philosophers, so they approach it from a
different perspective.

There are valuable reasons to think about it at both levels.

~~~
hndamien
Correct. There were plenty of economists who said Bitcoin would fail, and
philosophers who thought otherwise. The jury is still out, but I know who is
winning.

[https://twitter.com/damiendonnelly/status/91766074021904386](https://twitter.com/damiendonnelly/status/91766074021904386)

------
659087
“It is difficult to get a man to understand something, when his salary depends
on his not understanding it.”

The possibility of using AI to surveil and manipulate Facebook users (and non-
users) on a mass scale (and probably give his presidential campaign a boost
along the way) is worth far too much money for Zuckerberg to admit to the
potential downsides.

Zuckerberg isn't optimistic about AI, he's optimistic about the money and
power AI will grant him.

------
__s
Seems this pivots on a disagreement over AI as a risk.

Musk seems very anxious about existential threats-- it isn't surprising he may
overestimate the danger of AI.

It also comes off as very monkey-centric. Is it so bad if human intelligence
isn't what proliferates into the future? Perhaps AI is a kind of memetic
evolution for our culture, evolving into a form of life which can sustain
itself with much less waste.

~~~
jdietrich
>It also comes off as very monkey-centric. Is it so bad if human intelligence
isn't what proliferates the future?

For humans, it's an absolute disaster - look at what happened to the lesser
apes when Homo sapiens became dominant. The odds are very good that a sentient
AI would decimate us or make us extinct, simply by adapting our habitat to its
own needs.

~~~
AndrewKemendo
Are you expecting that Homo sapiens sapiens would somehow last as a species
forever?

------
vowelless
I would love to see a new Jobs-Gates-style rivalry. Maybe Musk - Zuckerberg?

~~~
maxerickson
How about Elon vs Musk?

~~~
loceng
Elon's Musk would win.

Edit: No playfulness on HN allowed today apparently

------
macmac
Musk is being very generous in his assessment.

~~~
loceng
From my perspective, Mark has always seemed like a controlled thinker -
including controlling his actions, behaviour, etc. in a very fixed way. In
contrast, Elon has always seemed to be a holistic thinker, starting from
first principles and then understanding where his actions will lead. Elon's
openness of thought leads me to believe his thinking style would lead to a
better understanding of how AI could evolve than Mark's more controlling
behaviour.

------
mindcrime
Is there any particular reason to think that Musk's understanding of AI isn't
also "limited"? I don't recall him being known as an AI researcher.

I mean, there's no doubt he's smart, even brilliant... but being brilliant in
one field doesn't necessarily mean you are an expert in others.

------
tarr11
Here is a pretty good article where many of the players are interviewed in
depth.

[http://www.vanityfair.com/news/2017/03/elon-musk-billion-dol...](http://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x/amp)

“If you want a picture of A.I. gone wrong, don’t imagine marching humanoid
robots with glowing red eyes. Imagine tiny invisible synthetic bacteria made
of diamond, with tiny onboard computers, hiding inside your bloodstream and
everyone else’s. And then, simultaneously, they release one microgram of
botulinum toxin. Everyone just falls over dead.”

~~~
pesenti
That example has absolutely nothing to do with AI...

------
danso
As overloaded as the term "AI" has become, to the point where mainstream
discussion is unlikely to be substantive, I'm glad a tech-business leader
like Elon Musk is at least taking a public position of some skepticism. AI can
be as empowering or as dangerous as we design and constrain it -- but to a
layperson, AI and "algorithms" seem like inevitable magic, in the way that
iPhones have steadily "improved" in hardware specs and in life-augmenting
features.

~~~
rspeer
Musk's view of AI relies on AI being magic, too. He's using the definition of
"AI" that exists only in hyped-up promises about the future, and evaporates
when applied to present technology.

Musk dismisses AI experts who tell him that his claims bear no resemblance to
real technology, because Musk isn't talking about the same AI as them. He's
talking about sci-fi AI.

There are no experts in sci-fi AI because, if there were anything there that
one could understand well enough to be an expert in it, it wouldn't be sci-fi
AI anymore.

~~~
hndamien
Don't underestimate the exponential.

~~~
rspeer
And don't overestimate the S-curve, which looks the same when you're in the
first half of it.
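
The exponential-versus-S-curve point is easy to check numerically: a logistic
(S-curve) with the same initial growth rate as an exponential is nearly
indistinguishable from it early on, then flattens toward its ceiling. All
constants below (growth rate `r`, carrying capacity `K`, starting value `x0`)
are made up purely for illustration.

```python
import math

# Exponential vs. logistic growth with the same initial rate r.
# Early on the two curves track each other; the logistic then
# flattens out as it approaches its cap K.
r, K, x0 = 0.5, 1000.0, 1.0

def exponential(t):
    return x0 * math.exp(r * t)

def logistic(t):
    # Closed-form solution of the logistic equation with capacity K.
    return K * x0 * math.exp(r * t) / (K + x0 * (math.exp(r * t) - 1))

for t in (2, 6, 10, 14, 18):
    e, s = exponential(t), logistic(t)
    print(f"t={t:2d}  exp={e:10.1f}  logistic={s:8.1f}  ratio={s/e:.3f}")
```

At t=2 the ratio is ~0.998 - from inside the curve you can't tell which one
you're on; by t=18 the logistic has fallen to about a tenth of the
exponential as it nears the cap.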

------
notwhiteknight
Even billionaires end up wasting their lives bikeshedding.

------
Asdfbla
Musk maybe oversells the state of AI a bit when he says it's potentially an
existential threat to mankind - but on the other hand, the advances of recent
years alone are enough to disrupt the daily lives of almost everyone. In that
sense, some alarmism might be appropriate, especially considering that control
of the best machine learning systems will likely be concentrated in the hands
of a few powerful players.

------
yumraj
Is it "limited" or "biased" - as in, biased so that it serves the purpose one
is trying to achieve?

It would apply to both, though :)

------
josh2600
What does it mean to have an understanding of AI that's limited?

Have you ever talked to someone who is way smarter than you? Have you tried to
imagine how they think? I suspect thinking about the future of AI is a little
bit like that, insofar as it's hard to model the future state intelligently.

------
drivelous
I understand that Musk's ideas of doom and gloom concerning AI are far in the
future (arcanus drops the Andrew Ng quote that draws parallels to people
worrying about the overpopulation of Mars), but even if that is the case,
isn't it still worth noting now?

My simple understanding of all this is that once we create an AI that
surpasses the intelligence of the human race, we as inferior beings will no
longer be able to predict what it will do. If that's the case, and the desires
of the AI (can AI have desires...?) run contrary to human will, then there's
no way to cut it that bodes well for the continued existence of the human
race. And once that line is crossed, there's no going back.

Is that not a reasonable thing to be worried about even if it's 200 years
away? Even On the Origin of Species was published only ~160 years ago.

------
nibstwo
Musk and Bezos are clearly smarter than Zuck and Dorsey.

------
wwwhatcrack
I'd say both of these guys have limited understandings of everything.

------
JCzynski
FIGHT! FIGHT! FIGHT!

------
infimum
Elon Musk's recent comments on the topic make me say that his understanding
of AI is similarly 'limited'...

------
phasnox
I love Musk. He is brilliant and I think he has truly shaped the future.

But he is wrong.

Artificial consciousness (what he is referring to) is NEVER going to be
achieved. I repeat: NEVER. It's impossible.

Consciousness is in the form and not in the matter. In a word, there is never
going to be an AI with free will.

However, we may eventually build a super intelligent system with great power
over our lives that goes wrong, has bugs, or misbehaves. But since such a
system will never be conscious, we are always going to have power over it.

So yeah, Musk is being an alarmist, and I believe he is the one with a limited
understanding of AI.

~~~
sebular
Let's put aside your absolute and close-minded certainty about the future for
a minute. Unless you can provide some quotes, you're putting a lot of words in
Elon Musk's mouth. He's made some goofy pop culture references during
interviews, and it seems to be a calculated decision on his part to flirt with
sensationalism in order to publicize the issue, but he's never seriously
argued that the movie Terminator is a documentary about the future.

Call it conscience, intelligence, whatever you want. The danger isn't that
some ominously calculating and murderous robotic mind is going to spring into
being and hide from humans while plotting our downfall. In order to understand
the danger, all you have to do is look at what the US military is already
doing with autonomous weapons. Killer robots aren't hypothetical, they're
historical.

In fact, you're the prime example of the danger of AI. You probably have a
stronger than average understanding of computers, and maybe you know a lot
about actual AI implementations, which is what makes you so self-assured that
there's nothing magical about them. So you trust them, and you're a big fan of
throwing an ever-increasing amount of trust into AI.

You even admit that there will be bugs (there always are) and misbehavior (now
who's humanizing programs?), but you fail to see why that's a problem when the
stakes are raised from "crap, an app leaked private data online" to "crap, the
autonomous weaponized drone mistook backyard fireworks for an attack and
bombed a family."

The way I see it, Musk is far from alarmist, and all the kidding about
"summoning demons" is almost a way of coping with what's starting to seem like
a terrifying inevitability. We've been building "dumb programs" for decades,
and there's still a constant stream of breaking news about software that
didn't do what it was supposed to. And you want to believe that there's no
danger in building software that has increasingly fuzzy logic and connecting
it to real-world I/O?

I'm guessing that Elon Musk saw this problem when he first started toying with
the idea of self-driving cars. You say he has a limited understanding of AI,
but Tesla autopilot says otherwise. And he probably became keenly aware of the
stakes when he realized that people would willingly entrust their lives to the
decisions made by his company's software, and that without regulation, it's
entirely up to Tesla's QA process to make sure their cars don't accidentally
kill people.

