
Elon Musk, Zuckerberg Trade Barbs Over Artificial Intelligence - SirLJ
https://www.bloomberg.com/news/articles/2017-07-25/elon-musk-zuckerberg-trade-barbs-over-artificial-intelligence
======
ilaksh
I don't think people like Musk, or the others he gets some of these ideas
from, such as Kurzweil, are actually saying we shouldn't pursue advanced AI.
Are they? I mean, Musk helped fund OpenAI, right?

He is just saying that it is, or soon will be, an existential threat, and
that we need to take the problem seriously.

I think the point of his mentioning that, beyond just worrying about it, is
to motivate efforts like Neuralink that would allow for high-bandwidth
communication between the brain and AIs.

There is some misunderstanding on the part of Zuckerberg and others that
saying AI is an existential risk is the same thing as saying 'we should
suppress AI research'. You would not read that implication into it if you
understood the perspective of Kurzweil, Musk, and other people who have read
Kurzweil's or Bostrom's books or who think along the same lines. The idea is
that we cannot possibly stop advanced human-like, and then human-surpassing,
AI from developing, no matter how hard we try to slow it down.

So when they say that AI can be dangerous, they do not mean that we should try
to stop developing AI, since that is impossible. What they mean is that we
should try to mitigate the risks to some degree. One way, which Musk and
others are pursuing, is to try to create strong but friendly AI before some
less friendly but equally potent variant becomes dominant. But if we do not
develop some kind of high bandwidth interface with computers and AI that we
control, eventually we should expect computer-based intelligence to
significantly surpass our capabilities, to the point where we become
irrelevant.

If you disagree that there is any possibility of strong AI or any existential
threat, what harm is there in pursuing strong but friendly AI, or in
developing BCIs (brain-computer interfaces)? The only 'harm' I can see is if
this gets oversimplified, as it has been, into the idea that "AIs could be
bad, we must stop AI" -- which is not what they are trying to say.

~~~
bredren
Wasn't Elon's original point that he has current access to working AI
demonstrations that personally frightened him, and served as the basis of his
call for regulation?

~~~
tim333
Kinda, though his arguments seem to come largely from first principles:
robots will get smarter than us, so we'd better be careful what they get up
to. Here he is chatting a bit at the National Governors Association (streamed
live on Jul 15, 2017):
[https://www.youtube.com/watch?v=2C-A797y8dA&feature=youtu.be...](https://www.youtube.com/watch?v=2C-A797y8dA&feature=youtu.be&t=48m9s)

------
hb3b
Elon's tweet:
[https://twitter.com/elonmusk/status/889743782387761152](https://twitter.com/elonmusk/status/889743782387761152)

~~~
dsacco
I'm normally a huge fan of Elon Musk, but that tweet comes across as pretty
hostile to me. Neither party seems to have critically backed up their beliefs,
but I didn't get the sense that Zuckerberg was attempting to single out any
individual for a lack of understanding.

------
mncharity
For a bit more thoughtful and nuanced discussion, the recent _AI Now 2017 -
Public Symposium_ [2] stream[1] (2.5hr) wasn't bad (at 1.5x speed).

[1]
[https://www.youtube.com/watch?v=npL_UsK_npE&t=285](https://www.youtube.com/watch?v=npL_UsK_npE&t=285)
[2]
[https://artificialintelligencenow.com/schedule/2017-symposiu...](https://artificialintelligencenow.com/schedule/2017-symposium)

------
nradov
This debate seems so premature. It's hard to take seriously when so far no
one has even come close to building an AGI equivalent to a _mouse_, let alone
a human.

Here is a broader set of perspectives from people working in the field.

[http://spectrum.ieee.org/computing/software/humanlevel-ai-is...](http://spectrum.ieee.org/computing/software/humanlevel-ai-is-right-around-the-corner-or-hundreds-of-years-away)

~~~
_archon_
If we have an AGI equivalent to a mouse, wouldn't it iterate itself to a human
intelligence pretty rapidly? Safeguards and intelligent thinking now could be
the difference between:

Mouse -> cat -> shark -> Evil human mastermind -> problems

and:

Mouse -> cat -> dolphin -> helpful friendly human -> better world

The end situation is highly theoretical, but a self-modifying AGI will think
differently than a human does, so it's hard to predict what could happen with
our existing preconditions. With such a broad range of potential divergence,
and a steep difference in desirability of potential outcomes, it seems
reasonable to ask some pointed questions sooner rather than later.

~~~
nradov
No, there's no evidence to indicate that would happen. Mice have been around
for a long time. How much smarter is a mouse today relative to a mouse a
million years ago? And why would a simple mouse-level AGI have the
intelligence to improve its own abilities?

------
ExactoKnight
No, it's Elon Musk whose understanding of AI is limited.

I doubt Musk has ever actually built a neural network, or come close to
coding one... because if he had, he would realize just how fragile and hard
to generalize they are.

Stacking a ton of data together to build predictive models that can produce
intelligent-seeming outputs is a far, far cry from a robot gaining
consciousness or a sense of agency.
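
For what it's worth, the fragility point is easy to see even in a toy
setting. Here is a minimal sketch (assuming only NumPy, and obviously nothing
like any production system): a tiny network fits its training range fine and
then falls apart the moment it is asked about inputs it has never seen.

    # Toy sketch: a tiny network learns f(x) = x^2 on [-2, 2], then is asked
    # to extrapolate to [4, 6] -- a "generalization" it never actually learned.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(200, 1))
    y = X ** 2

    # One hidden tanh layer, trained by plain full-batch gradient descent.
    W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
    W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

    for _ in range(5000):
        h = np.tanh(X @ W1 + b1)                      # forward pass
        err = (h @ W2 + b2) - y
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)   # backprop
        dh = (err @ W2.T) * (1 - h ** 2)
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
            p -= 0.05 * g

    def mse(a):
        h = np.tanh(a @ W1 + b1)
        return float(np.mean((h @ W2 + b2 - a ** 2) ** 2))

    print("in-distribution MSE:     ", mse(rng.uniform(-2, 2, size=(50, 1))))
    print("out-of-distribution MSE: ", mse(rng.uniform(4, 6, size=(50, 1))))

The point is not the specific numbers, just that nothing in the fitted model
"understands" squaring; it has only interpolated the data it was given.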

~~~
r_singh
With Musk talking about AI the way he is, it seems to me that he wants to use
his reputation and the public's naivety to instill fear and gain some control
over the distribution of general-purpose AI... sounds like a great
opportunity.

If he stays consistent with what he says about AI, I think he might be
successful in deterring people from using useful and reasonable AI for a
while. I wish he would take the dangers of new features like Autopilot in a
production car as seriously though.

~~~
TeMPOraL
> _If he stays consistent with what he says about AI, I think he might be
> successful in deterring people from using useful and reasonable AI for a
> while._

What exactly did he say that would imply that? As far as I know (and I did
watch all the recent interviews), his approach boils down to:

- more open access to AI is less dangerous than it being developed in secret
by corporations or governments; hence OpenAI

- suggesting that regulators start paying attention (and explicitly advising
them _not_ to start regulating outright; just to form an agency that would
"gain insight" into the field)

~~~
ExactoKnight
I do agree with this. One of the clearest inequalities emerging in
A.I.-driven machine learning is how useless those techniques are unless you
have access to a monstrously large amount of well-categorized data. Currently
you need to be one of the big four to have that access, and this is highly
monopolistic.

~~~
tim333
Cruise Auto managed to do quite well without being one of the big four.

------
richev
Sounds like Musk has been reading the 1997 book written by my old Cybernetics
professor Kevin Warwick (aka Captain Cyborg)...
[https://en.m.wikipedia.org/wiki/March_of_the_Machines](https://en.m.wikipedia.org/wiki/March_of_the_Machines)

------
bryanrasmussen
That article just seemed to run out at the end. It seemed like there was some
previous problem between the two related to the Shuttle? But the article
didn't describe it, at least not where I was reading it.

------
shusson
As Rodney Brooks was recently quoted: "There are quite a few people out there
who’ve said that AI is an existential threat: Stephen Hawking, astronomer
Royal Martin Rees, who has written a book about it, and they share a common
thread, in that: they don’t work in AI themselves" [1]

[1] [https://techcrunch.com/2017/07/19/this-famous-roboticist-doe...](https://techcrunch.com/2017/07/19/this-famous-roboticist-doesnt-think-elon-musk-understands-ai/)

~~~
WheelsAtLarge
It doesn't take much AI to create smart weapons. A smart drone can do a lot
of damage. I don't even want to think about a Level 5, fully autonomous car,
which is what tech visionaries are aiming to create.

Tech creators look at the bright side of their creations. They have tunnel
vision and can't foresee the harm. If they can't, others need to.

The criticism that Musk does not know enough about AI ignores the fact that
he is trying to build a fully autonomous car with AI and has experience with
its limitations. Yes, AI (a catch-all term for making computers do things
that would require intelligence if done by humans) is just a process with no
intelligence in itself, but you can say the same thing about C++. It's just a
language, but look at what's possible to do with it.

~~~
dsacco
I'm struggling to understand your point, because it just sounds like FUD. I
mean that sincerely - you're not really backing up your concerns here, other
than to (essentially) say the technology is new. Yes, emerging technologies
can be risky. But as you just said, a programming language is also capable of
a lot of harm.

So what exactly are you proposing, and what _precisely_ is your grievance?
You're _going_ to have to "think about a Level 5 autonomous vehicle" if you
want to convince opponents of your position, because you'll need to come up
with a more convincing argument than, "I don't want to think about how
dangerous it could be."

~~~
blackhawk95
Hadn't there been an incident where Microsoft had a chat bot that started
swearing on people after having conversation with internet people. So like
that, if say a caretaker humanoid uses machine learning to learn from human
and if these robots learn that humans fight to protect their loved
ones...maybe the bot could also pick up a fight with other humans for wrong
reasons.

I think its one way to explain what i felt about rouge AI, its not actually a
bad AI, but a bad dataset used by AI.
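
As a toy illustration of the "bad dataset, not bad AI" point (purely
illustrative -- a trivial Markov-chain generator, nothing like how Tay
actually worked): the exact same code sounds polite or hostile depending only
on the text it was trained on.

    # Trivial bigram Markov text generator: the "model" has no goals at all;
    # its output is entirely a reflection of whatever text it was trained on.
    import random
    from collections import defaultdict

    def train(corpus):
        chain = defaultdict(list)
        words = corpus.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def generate(chain, start, length=8):
        out = [start]
        for _ in range(length):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    polite = "thank you for your help have a nice day thank you again"
    hostile = "you are awful and you should just go away because you are awful"

    print(generate(train(polite), "thank"))   # trained on the polite corpus
    print(generate(train(hostile), "you"))    # same code, hostile corpus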

~~~
d0lph
There would probably be hard-coded rules that could not be overridden by the
AI. The Three Laws are pretty popular [0].

1. A robot may not injure a human being or, through inaction, allow a human
being to come to harm.

2. A robot must obey the orders given it by human beings except where such
orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not
conflict with the First or Second Laws.

[0]:
[https://en.wikipedia.org/wiki/Three_Laws_of_Robotics](https://en.wikipedia.org/wiki/Three_Laws_of_Robotics)
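
For what it's worth, a "hard coded rules" layer in practice would look less
like Asimov's prose and more like a plain guard that screens every action the
learned system proposes before it is executed. A minimal sketch (the rule
checks, flags, and action names here are invented for illustration, not taken
from any real robotics stack):

    # Minimal sketch of a non-overridable rule layer: the learned policy only
    # *proposes* actions; a fixed guard decides whether they are executed.
    # The flags and rules below are invented purely for illustration.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        name: str
        harms_human: bool = False
        ordered_by_human: bool = False

    def guard(action: Action) -> bool:
        # Rule 1: never allow an action flagged as harming a human,
        # regardless of who ordered it.
        if action.harms_human:
            return False
        # Rule 2: obey human orders that passed rule 1.
        if action.ordered_by_human:
            return True
        # Default: allow benign, unordered actions.
        return True

    def execute(action: Action) -> None:
        if guard(action):
            print(f"executing: {action.name}")
        else:
            print(f"blocked by guard: {action.name}")

    execute(Action("fetch medication", ordered_by_human=True))
    execute(Action("shove bystander", harms_human=True, ordered_by_human=True))

Whether any real system can label something like "harms_human" reliably is of
course the open problem the thread is arguing about.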

------
WheelsAtLarge
Advances in tech should not overpower society's needs. The first thing people
do with tech is to use it to create or enhance weapons. Anyone who denies
that needs to look back at history and see how technical innovations have
been used. It's time to discuss the dangers of tech and how to limit its
ability to create massively dangerous weapons. We all need to take the
rose-colored glasses off and deal with the problem.

~~~
nnfy
>The first thing people do with tech is to use it to create or enhance
weapons.

That's only true if you're cherry picking. Plenty of tech doesn't even apply
to the defense industry.

Moreover, I'd posit that any limitation placed on tech to limit danger will
almost certainly limit its potential benefit as well.

Ignorance is never the solution.

~~~
WheelsAtLarge
No, pick any advancement or breakthrough in technology and you'll see that it
either enhanced a weapon or created a new class of weapon. No cherry-picking.
If it can be used in society, it can be used in war.

~~~
nnfy
Cotton gin

Hypodermic needle

Fracking

Solar panels

This argument is a little pedantic, but here are some examples that contradict
your assertion.

Consider this: without the bomb we would not have nuclear power. Without
modern warfare we would not have modern medicine. There are other examples,
but you would have been worse off limiting these. Not to mention
non-technological benefits like geopolitical stability from MAD.

~~~
WheelsAtLarge
> Cotton gin

Better clothing for soldiers, tents, and more uniforms for them. Plenty more,
I'm sure. Early warplanes used canvas instead of metal as their outer
covering when they went out to fight each other.

> Hypodermic needle

Enhanced battlefield medicine: fix soldiers up faster so they can continue to
fight.

> Fracking

A cheaper and continuing source of fuel for war machinery.

> Solar

Spy satellites, power for field communications. It's only the beginning.

Nuclear power is a perfect example of the dangers of technology. The world
powers have decided to limit its spread because it's so dangerous to let
everyone have it. If nuclear bomb tech had been free to run its course, bombs
would be small and cheap, every country would have some, and most would not
hesitate to use them.

~~~
dsacco
"Making clothes for soldiers" is reaching. You're expanding "use technology
for war" to the point of nonsense and diluting the point. I can just as easily
say that the cotton gin enhanced every thing ever because it allowed every
human involved in every endeavor to wear clothes.

Furthermore, to circle back to your original point: even if we accept that
the cotton gin is a technology "used for war", war was most certainly neither
the _first_ nor the _primary_ use of that technology (the first use was
clothing).

~~~
WheelsAtLarge
OK, don't count that as a direct impact then, but tech does not live in a
bubble. The cotton gin was a contributor to the Civil War: it made cotton
processing much easier, so more cotton was grown, which required more slaves,
which inflamed the debate between those who wanted slavery and those who did
not.

So from my view it not only aided the war once it started, but was also a
contributor to its beginning.

But whatever I say, you'll find fault. You'll focus on the small details. I'm
a techie too, and I sometimes find myself arguing the smallest detail whether
it has consequences or not. It's that type of attention to detail that keeps
us employed when we need to make sure things are working.

The bottom line is that you think tech is great, while I think it's great but
that we have to be careful with its development and use.

All I can say is that it's worth being careful where we step.

