
Elon Musk, Stephen Hawking and fearing the machine - cryptoz
http://www.cnbc.com/id/101774267?__source=yahoo%7Cfinance%7Cheadline%7Cheadline%7Cstory&par=yahoo&doc=101774267%7CWhat%20Elon%20Musk%20and%20Stephe
======
pjc50
Arguably we already have hostile AI: bureaucracy.

It doesn't even have to be automated, but it increasingly is. It ends up
producing harmful outcomes that can't be readily ascribed to the moral choices
of individual humans within it.

Or consider that _hostile_ is a relational term: hostile towards whom? The
military already have automated systems connected to fire control without a
human in the loop, although at the moment these are limited to missile defence
systems. Consider the possibility of plugging the NSA's metadata based system
for calling people terrorists into an _automated_ airstrike or assassination
system.

~~~
jqm
LOL@bureaucracy as hostile AI.

Spot on comment. But... are you sure "Intelligence" is a fitting description?
"Artificial" for sure.

I like the part about producing outcomes that can't readily be ascribed to
individual moral choices. It seems this describes human culture in general. So
here we all are. Semi-autonomous little components of the larger algorithm
called "civilization" which we cannot truly hope to transcend.

~~~
mercer
> I like the part about producing outcomes that can't readily be ascribed to
> individual moral choices. It seems this describes human culture in general.
> So here we all are. Semi-autonomous little components of the larger
> algorithm called "civilization" which we cannot truly hope to transcend.

Wow! That's a nice paragraph that highlights very accurately why many of us
feel 'alienated' in our society.

In other cultures and times one might not even consider this description and
instead describe us as useful organs in a greater body, or valuable organisms
in a greater ecosystem. Which 'feels' very different.

------
worklogin
>"I don't think just being a billionaire means that the things you think out
loud are important," the Elevation Partners co-founder said. "I would like to
worry about the problems that are killing us today as opposed to the ones that
may kill us in 20 years."

Investor I've never heard of telling me that money alone doesn't make your
opinion important. I wonder if he sees the irony. It also shouldn't be
surprising that a modern investor would be so short-sighted: 'Do not worry
about 20 years from now, only think about today!'

~~~
Shivetya
I read it simply as: he isn't thinking about what I think is important. I
think he wants to dismiss Musk because he is rich, because, well, that is
easier than dismissing the points Musk raises.

Usually you can tell which side to listen to by watching which one discusses
the question at hand versus which one goes after the other party.

------
martythemaniak
If we're talking about bad AI, shouldn't we ask _why_ it would want to kill
us? Pretty much all of our human conflicts are over resources - land, water,
energy, etc. Unlike us meatbags, an AI would not be confined to the thin layer
of the troposphere - it could grow and harness energy and material pretty much
anywhere - the Moon, Mars, Mercury, a Dyson sphere, etc. My question is, why
fight over _this_ particular rock when there's an essentially infinite amount
of energy and matter out there?

The other thing is, it's hard to imagine something qualifying as AI without it
understanding things like ethics, morals, humility, aesthetics etc. At our
best, our species worries about preserving our natural world, extending our
care to other species, etc. Can something supposedly smarter than us be
completely lacking in principles which are at the core of our "smartness"?

~~~
joe_the_user
" _If we 're talking about bad AI, shouldn't we ask why it would want to kill
us?_"

I think the folks mentioned in the article were talking about the _broad
threats_ posed by AI. I agree we have little basis besides movies for
autonomous AI just deciding to attack for no reason or for human-equivalent
reasons. But we have every reason to think that _neutral AI_ , AI that still
follows orders, could be used by humans to continue all the conflicts that
they already pursue with horrific consequences.

 _The other thing is, it's hard to imagine something qualifying as AI without
it understanding things like ethics_

Depends what you mean by "understand". Many highly intelligent humans act
unethically, and even more have gleefully _followed unethical orders given to
them by their superiors_. Considering you'd be designing your mind rather than
just finding it after millions of years of social evolution, it seems you
could create an intelligent mind quite capable of following orders nearly
blindly.

If anything, the whole "it might go insane and kill us" schema is implausible
enough to detract from the immediately obvious danger - "it might sanely
follow the insane orders of humans in the fashion that human society has seen
over and over again but this time with superhuman power".

~~~
dragonwriter
> I agree we have little basis besides movies for autonomous AI just deciding
> to attack for no reason or for human-equivalent reasons.

Of the very many species that have become extinct as a result of human action,
how many were the result of a deliberate attempt by humans to eradicate the
species? Very few.

The others are, however, still just as extinct. AI doesn't have to "decide to
attack us" to be a danger.

------
ForrestN
I think the assumption that AI would have a natural motive to conquer things,
or even to expand beyond the confines of its current resources, is flawed.

The absurdity of simultaneously fathoming the eventual death of the universe
and also caring about accomplishing things and making decisions and living the
particulars of one's life is uniquely human. We all know we're going to die,
but we still spend lots of effort deciding about pizza toppings. (See Thomas
Nagel's _The Absurd_.)

Why would an AI care to do anything we didn't tell it to care about? There's
no inherent link between being sentient and having the same psychological
priorities as humans. We're irrational, and I don't think they will be.

~~~
mercer
Is it not plausible that an AI modeled after humans is also likely to have
human-like flaws such as the desire to conquer? The better we get at creating
complex, human-like AI, the bigger the likelihood of unintended side-effects
(leaving aside whether this is feasible in the first place, of course).

~~~
ForrestN
I think we'll figure out how to make an intelligent being before we're able to
figure out how to make an intelligent being that also harbors the subtle
contradictions that make us human.

------
eli_gottlieb
Ok, who planted mind-control chips in famous sci/tech celebrities to make them
start pimping for the Machine Intelligence Research Institute?

More realistic hypothesis: _good job_ MIRI, as your reorganization from SIAI
to your current incarnation seems to have _massively_ increased the
respectability of our cause and the range of people who acknowledge it as a
serious problem.

~~~
gress
I'm not suggesting that the cause is unimportant, but maybe the increased
respectability has nothing to do with MIRI, and more to do with the increased
obviousness in the public sphere of both rapid technological advancement, and
increased awareness that there is no guarantee that any given technology
produces a net benefit.

It's still a credit to MIRI/SIAI to have identified and taken this risk
seriously.

~~~
eli_gottlieb
If I had to identify a human-life or public-policy issue on the immediate
horizon related to computing technologies, and I'd never heard of MIRI, I
would have picked automation.

Stephen Hawking, at least, when he gets cited, actually referred explicitly to
MIRI and FHI in his own editorial on the subject. So he _is_ talking about
them specifically.

But yes, there are plenty of _other_ important causes, including several
(ecological devastation, economic disaster and related wars, automation
crises) that stand a decent chance of totally fucking up society before anyone
at all gets to the point of "switching on" a dangerous-level AI, Friendly or
not. _Those should definitely be addressed._

------
jqm
Technology is sometimes slowed for a time but it never stops. Many of the
horrors envisioned will probably come true at some point.

For instance, I don't see any reason that semi-autonomous killer robots need
to be confined to governments. It seems that (in time) a small group of people
could eventually build a substantial number of semi-autonomous or remote
controlled killing devices on their own and use them for crime or terrorism. I
expect something like this will happen at least once at some point, and
probably more than once.

Frankly, the future availability of "hacker" bio-engineering scares me much
more than A.I. or robotics.

But there will be incalculable benefits to humanity from both as well, so I
think we should embrace that and adapt. The future is coming... like it or
not. It is good to be warned of dangers, even though we probably won't avoid
them anyway given our past behavior (have they figured out what to do with
spent nuclear material yet?). I think our best option is to be prepared to
adapt, and to recognize that things are going to change substantially,
relatively soon. Maybe this will be the time humanity recognizes that the
world is a small place, that we are all on it together, and that problems need
to be addressed effectively at a global scale. But how much trauma there will
be before this recognition occurs, I don't know.

------
izzydata
I find it hard to believe AI will ever get to this point. I'm skeptical that
it's even possible, but even if it were, it isn't like you are going to
immediately put it inside a fully functional robot with the means to do
whatever it wants. You'd have it inside a computer with no robotic components
to control.

Edit: Looks like someone is down-voting all mentions of this not being a
problem. Good luck to you sir.

~~~
indrax
I didn't downvote you, but your argument is terribly flawed, and has been
answered thousands of times.

* You give no justification for rejecting powerful AI

* Many AI projects use robots in development

* An intelligent agent does not need a robot to get things done.

* Many AI projects supply Internet access to the AI.

* A sufficiently intelligent agent might circumvent protections you think are adequate, or manipulate you into doing so.

~~~
rimantas

> You give no justification for rejecting powerful AI

I wouldn't go so far as to reject it completely, but it will take a really,
really long time, orders of magnitude longer than Kurzweil imagines. And the
reason is very simple: we have no idea how the mind really works. Our current
knowledge is like the knowledge of the early alchemists compared to modern
chemistry, only millions of times worse.

~~~
namlem
Speaking from a background in neuroscience, that simply isn't true. We are
still very ignorant when it comes to the workings of the brain, but we are far
ahead of alchemists, simply because we are approaching the problems
empirically, which alchemists did not do. Of course, the brain is many orders
of magnitude more complex than chemical processes, so in that sense we have a
much longer road ahead of us. We do, however, have the advantage of modern
technology to accelerate our progress.

But the most important factor by far as to why I believe you're wrong is that
we do not have to understand how the mind works on a deep level to create AI.
Our brains do a ton of stuff that an AI doesn't have to. We only need a fairly
basic understanding of the brain to create thinking machines.
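
That last point is worth making concrete: artificial neural networks already
run on a deliberately crude abstraction of a neuron, just a weighted sum plus
a threshold. A minimal sketch (the weights, names, and the AND example are
ours, purely illustrative):

```python
def perceptron(inputs, weights, bias):
    """A single artificial 'neuron': fire (1) if the weighted sum of the
    inputs plus a bias crosses zero, otherwise stay silent (0)."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# With hand-picked weights, this crude unit already computes logical AND:
AND_WEIGHTS, AND_BIAS = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], AND_WEIGHTS, AND_BIAS))
# prints 0 0 -> 0 / 0 1 -> 0 / 1 0 -> 0 / 1 1 -> 1
```

Real neurons do far more than this, which is exactly the point above: the
abstraction can be useful for building thinking machines without being a deep
model of the mind.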

~~~
JoeAltmaier
Seems like it can't really be all that complicated (really!). The brain was
invented by evolution (the lowest bidder, essentially), which operates
blindly. The thinking mind was invented maybe 1 million years ago, by an
evolutionary drunkard's walk around the neural-connection space. There's got
to be some relatively simple structure involved: recursive, or iterative, or
just random connections that learn?

~~~
namlem
The "thinking" mind is one of the simplest parts of the brain. Ironically,
what philosophers call the Hard Problem of Consciousness is much easier than
the "easy" problem. The mechanisms by which we perceive the world are vastly
more complex than the mechanisms by which we process and store our
perceptions, and those parts of the brain have been evolving for hundreds of
millions of years. The sheer amount of optimization is what makes it so hard
to replicate. We can make artificial systems that outperform tiny
invertebrates, but we still have a way to go. Fortunately, our rate of
progress is rapidly accelerating, so I'm still confident we'll be able to
figure it out in the next few decades.

------
moyix
Actual _malice_ on the part of an AI isn't necessarily required for
catastrophic consequences. The canonical example in this school of thought is
the Paperclip Maximizer [1].

[1]
[http://wiki.lesswrong.com/wiki/Paperclip_maximizer](http://wiki.lesswrong.com/wiki/Paperclip_maximizer)
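
The idea can be made concrete with a toy sketch. Everything below (the
actions, the numbers, the "human_values" score) is invented for illustration:
an agent whose objective counts only paperclips will happily pick the most
destructive action, because the harm it causes never appears in its utility.

```python
def paperclip_utility(state):
    """The agent's entire objective: more paperclips is strictly better."""
    return state["paperclips"]

ACTIONS = {
    # action: (paperclips gained, effect on everything humans care about)
    "run_factory":     (10,  -1),
    "mine_iron":       (5,   -2),
    "strip_biosphere": (100, -50),
    "do_nothing":      (0,    0),
}

def apply_action(state, action):
    clips, harm = ACTIONS[action]
    return {"paperclips": state["paperclips"] + clips,
            "human_values": state["human_values"] + harm}

def choose_action(state):
    # The maximizer picks whatever yields the most paperclips; the harm
    # column never enters its objective, so it is simply ignored.
    return max(ACTIONS, key=lambda a: paperclip_utility(apply_action(state, a)))

state = {"paperclips": 0, "human_values": 100}
for _ in range(3):
    state = apply_action(state, choose_action(state))

print(state)  # {'paperclips': 300, 'human_values': -50}
```

No malice anywhere in the code: the catastrophe is just an objective function
that omits everything we care about.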

~~~
touristtam
Should the Three Laws of Robotics prevent this?

[http://en.wikipedia.org/wiki/Three_Laws_of_Robotics](http://en.wikipedia.org/wiki/Three_Laws_of_Robotics)

------
GrantS
Watching the video, it was odd that he had absolutely nothing to say about
potential uses of AI when asked that question. I'm assuming it was either a
question so out of left field that he just wasn't prepared for it, or he
wasn't interested in discussing it and distracting from Tesla/SpaceX, or he
has lots of ideas he doesn't want to make public yet.

More understandable that he repeatedly says he doesn't know how to stop
unfriendly AI, but there is always this:
[http://en.wikipedia.org/wiki/Friendly_artificial_intelligenc...](http://en.wikipedia.org/wiki/Friendly_artificial_intelligence)

------
Lambdanaut
> It's kind of an ironic comment from him, since he just invested in an
> artificial intelligence company, Vicarious

It's not ironic. His quote was taken directly from an interview where he was
responding to the question of why he invested in Vicarious. He did it to keep
an eye on AI tech because he was worried about the possibility of hostile AI.
They must have known this when they wrote this article.

This is an awful article.

------
melling
The best line comes at the end: "The AI will chase us there pretty quickly."

~~~
JVIDEL
John Connor: So this other guy: he's an AI like you, right?

The Musk: Not like me. A D-Wave advanced prototype.

John Connor: You mean more advanced than you are?

The Musk: Yes. A quantum computer.

John Connor: What the hell does that mean?

The Musk: Nobody knows but it sounds cool.

------
at-fates-hands
Well, at some point we have to consider that humans will no longer be at the
top of the food chain. It's natural to think something like AI would supplant
us for many reasons.

As Hawking has stated many times, when a technologically advanced society
comes in contact with a less advanced one, the less advanced society is always
the one that ends up enslaved and overrun.

------
IanDrake
Elon is a very smart guy. I think he was playing a bit dumb there on purpose.
Probably a pull sales tactic. The more vague he is about what Vicarious does,
the more everyone is going to be talking about it. Now instead of him having
to promote (push) his new investment everyone will be begging him for details
(pull).

------
protonfish
I have little fear of unfriendly AI, for two reasons. One is that there is no
reason to think artificial intelligence would work differently from the
natural variety, and therefore it could be policed in similar ways: limiting
its access and physical opportunity to cause trouble, plus keeping a watchful
eye and deactivating anything worrisome.

The other is that the fear of unfriendly AI amounts to a fear of being taken
unawares by a sudden implementation of hyper-intelligent AIs. So far we
haven't made anything smarter than a crab, so I doubt we are in imminent
danger. There may be a lingering belief that hyper-intelligence could be
acquired in a quantum leap forward, skipping past all intermediate levels of
intelligence. But if we have learned anything from the eternal AI winter, it
is that advancements in AI come via painstakingly small improvements.

~~~
eli_gottlieb
>One is that there is no reason to think artificial intelligence would work
differently than natural variety

Of course it will work differently from the natural variety. You have dozens
of different modules cooperating in your brain to form your mind. It has only
a small handful.

Your mind runs on heuristics and biases, rarely employing its full power for
energy-consumption reasons. An AI just runs at full brainpower all the time
and pays for electricity.

Go read about how reinforcement learners actually work.
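
For readers who haven't, a minimal tabular Q-learning sketch (the textbook
algorithm, nothing specific to this thread; the toy world and constants are
ours) shows the point: the learner optimizes exactly the reward signal it is
given, and nothing that is left out of it.

```python
import random

random.seed(0)  # deterministic run for the sketch

# A tiny 5-state chain world: action 1 moves right, action 0 moves left.
# Reward is 1.0 only for reaching the rightmost state; nothing else exists
# as far as the learner is concerned.
N_STATES, ACTIONS = 5, [0, 1]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def greedy(state):
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for _ in range(2000):                    # episodes, random starts for coverage
    s = random.randrange(N_STATES)
    for _ in range(30):                  # steps per episode, epsilon-greedy
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        nxt, r = step(s, a)
        # Q-learning update: nudge Q toward reward + discounted best future value
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(nxt, b)] for b in ACTIONS) - Q[(s, a)])
        s = nxt

policy = [greedy(s) for s in range(N_STATES)]  # learned policy: march right
```

Swap in a different reward function and the same machinery will pursue that
instead, with equal diligence.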

~~~
protonfish
Yes I am aware of all of this and I disagree. Maybe we need to admit that our
failure to develop any decent AI is a direct consequence of flaws in our
current thinking. Or maybe you should continue to make the same mistakes and
expect a different result.

~~~
eli_gottlieb
Who says that we've terminally failed to develop "any decent AI"? This once
again indicates that you don't keep up with the field.

~~~
IanCal
Usually, "decent AI" is defined as "stuff that humans can do that computers
can't currently do". This list is updated as and when computers become capable
of doing something, sometimes just to add "Computers can't do X _the way a
human does X_ ".

------
hyperion2010
I find this fear completely unfounded; it reveals a striking blindness to
reality, and possibly even racism. There is nothing in the universe more
hostile to human beings than humans themselves. To this day we kill, enslave,
and oppress our fellow man, and have done so for tens of thousands of years.
Thus I have no idea why people worry about "artificial" intelligence when
regular old organic intelligence (or lack thereof) seems to be exceptionally
hostile already.

~~~
capisce
Maybe it's worth worrying about because AIs might eventually become orders of
magnitude more powerful than humans.

~~~
hyperion2010
And somehow being more intelligent makes them a threat? We have absolutely no
evidence about any of this, it is pure fear of the unknown and instead of
imagining a peaceful and cooperative future they somehow imagine that AI will
treat us like we treat our various 'lesser' races. But that is just us
projecting our own current and historical behavior onto a class of beings that
doesn't even exist yet and who WE have to bring into being.

If you imagine that your children will be evil and kill you, you have damned
them before they have even been born. Not only that, but the analogy breaks
down even further, because in this case there isn't even the excuse that we
don't understand how biology leads to behavior: we will have built the systems
ourselves.

~~~
capisce
Even if you imagine just a small risk of an AI being hostile or indifferent
toward us, is it worth gambling our future on that risk? Where is your
evidence that a vastly greater intelligence would have morals perfectly
aligned with ours, unless we programmed it perfectly for that purpose and made
sure any self-modification by the AI preserved those moral invariants?

[http://kajsotala.fi/2007/10/14-objections-against-aifriendly-aithe-singularity-answered/#hostile](http://kajsotala.fi/2007/10/14-objections-against-aifriendly-aithe-singularity-answered/#hostile)

------
Futurebot
What we need is a new law for robotics: No AI should ever be made that can
ever understand its own 'enslavement' or be able to suffer. Of course,
enforcement of said rule is a whole lot easier said than done, but culturally
we should perhaps use that as a starting notion. This becomes more important
the closer we get to that level of sophistication.

------
Bangladesh1
Agree with worklogin. Let's think about today.

------
jpkeisala
Questions about AI asked of Elon Musk and Stephen Hawking. Wow... that is a
really big honor for Stephen Hawking.

~~~
jacquesm
[http://en.wikipedia.org/wiki/Poe%27s_law](http://en.wikipedia.org/wiki/Poe%27s_law)

------
rsl7
We are speculating about something that does not even work in theory. We have
no idea what "it" would be like.

