
Stephen Hawking: AI, Potentially the Worst Thing to Happen to Humanity - jaequery
https://uk.news.yahoo.com/stephen-hawking-artificial-intelligence-potentially-worst-thing-happen-110523390.html#r6ovIz7
======
javindo
I don't really want to come across as hostile to the views of such a well-respected
physicist, but this article represents a view often held by people who are very
intelligent yet have never actually studied AI.

Anybody who has actually studied AI, or is currently studying it, will know that
there is a fundamental difference between programming "AI" and some self-aware
magical computation which is almost entirely unfathomable. With current knowledge,
at least, the only way to get any sort of learning is to design and implement
algorithms to do so. There can of course be incredibly abstract knowledge-base
designs, and the computation can assimilate incredibly complex knowledge from them,
but it is almost a logical fallacy to suggest that we would ever lose control of
something autonomous due to some sort of "rogue agent".

Of course, there are very real risks with something like AI, but they are much
less sinister than what the article suggests. For example, faulty code or errors
in the knowledge base could lead it to make a bad decision, but humans can do
that too.

I don't think we should necessarily dismiss the risks, but I do believe it is
completely far-fetched to say we would be making the worst mistake in history if
we dismiss a film as science fiction!

~~~
marvin
For my part, I am surprised that so many AI researchers are unable to take the
long-term view in this discussion. It's presumably because AI in the laboratory
is still relatively primitive.

No one is talking about magic. The human brain is not magic, neither is that
of chimpanzees, rats, dolphins or gorillas. Intelligence is a purely physical
phenomenon, which means it can be emulated by computers. Natural brains are
also a product of evolution, which means (1) the development happens very
slowly, (2) development is directed only towards evolutionary success and (3)
there is no flexibility in how the thinking organs are constructed. Computer
intelligence does not in principle have these limitations. It would be
terribly anthropocentric to believe that humans are the most sophisticated
intelligent entity that can exist in the physical world - after all, as far as
we know we are the first such entity to emerge, so from our perspective it only
looks as though the evolution of intelligence has stopped.

That's the feasibility argument. The risk argument is that an independent,
runaway intelligent entity significantly more capable than humans would have
such devastating consequences for humanity's future that even a small risk
merits a significant effort to map out the territory.
Respected scientists have said "it is impossible" about hundreds of things that
proved to be quite simple, so that is not an argument. Even if you don't buy the
previous part of my reasoning, there is still risk here. In principle, there are
any number of things that could preclude advanced AI in the near future even if
the above reasoning is correct (too difficult, requires too much computing power,
uses different computational techniques), but seeing as we don't know the
unknowns here, taking the cautious route is the correct thing to do. This has
been a scientific principle for decades; there is no reason to drop it now.

Do AI researchers have any arguments opposing this that don't amount to "the
AI we have created up until now is not very good"?

~~~
sampo
> _" I am surprised that so many AI researchers are unable to take the long-
> term view in this discussion."_

In my opinion, the medieval alchemists who made gunpowder and the first
primitive bombs didn't need to establish research programs to worry about
advances in bomb-making leading to the threat of Mutually Assured Nuclear
Destruction.

If they had tried (maybe they did?), maybe they would have come up with ideas
like very heavy regulation of the trade of saltpeter, so that no one has
enough to make a verybigbomb.

I agree that everything you described will happen in the future. But in my
opinion we are at a medieval alchemist's level in AI research (no offense to AI
researchers), and we can safely wait 100 years and let those people worry about
the existential threat. They will be in a much better position to do so
appropriately, because they will know much more than we do. And they will not be
too late, either.

~~~
rsaarelm
In this case the worst case would be very fast development of dangerous AI, and
I'm not sure 100 years is anywhere close to the lower bound. Low-end estimates
for the computation power needed to run something equivalent to full human
cognition are around 1 petaflop [1]. Google's total computing power in 2012 was
estimated at 40 petaflops [2]. Of course it's split across a wide network of
computers, but the human brain we're using for comparison looks like a pretty
parallelizable design. So we already seem to be at a point where it might just
be the lack of very clever programming that keeps us from getting a weakly
superhuman AI running inside Google's infrastructure.
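
A back-of-the-envelope version of that comparison, taking the ~1 petaflop and
~40 petaflop figures from [1] and [2] at face value (both are rough estimates,
so this is only an order-of-magnitude illustration):

    # Rough comparison of estimated brain-equivalent compute vs. Google's 2012 fleet.
    brain_flops = 1e15    # ~1 petaflop: low-end estimate for full human cognition [1]
    google_flops = 40e15  # ~40 petaflops: estimated Google total in 2012 [2]

    print(google_flops / brain_flops)  # ~40x headroom, ignoring parallelization overhead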

It looks like we've still got a ways to go there, though: current programs don't
even begin to act anything like an adult human. So if the problem were to
engineer an out-of-the-box, adult-human-level AI, we might again assume that
there are obviously decades of work left before anything interesting can be
developed. The problem is that that's not how humans work. Humans start out as
babies and learn their way up to adult cleverness. I can tell that an AI is
nowhere near having adult human intelligence out of the box, but I'm far less
sure how to tell that an AI is nowhere near being able to start a learning
process that takes it from something resembling a useless human baby to
something resembling an adult human in capabilities.

[1] [http://www.nickbostrom.com/superintelligence.html](http://www.nickbostrom.com/superintelligence.html)
[2] [https://plus.google.com/+JamesPearn/posts/gTFgij36o6u](https://plus.google.com/+JamesPearn/posts/gTFgij36o6u)

------
harshreality
This is what MIRI (Machine Intelligence Research Institute) is investigating.

[http://intelligence.org/research/](http://intelligence.org/research/)

The scariest part of the problem to me is this:
[http://yudkowsky.net/singularity/aibox](http://yudkowsky.net/singularity/aibox)

If an AI can't be contained, then there has to be some failsafe that prevents
_any_ self-modifications which could turn it evil OR remove the failsafe. Or
you have to believe that you can neutralize an evil AI before it destroys
humanity.

~~~
deutronium
Regarding the AI box experiment, I'm somewhat dubious about it, considering
there don't seem to be any transcripts from when the AI 'escapes'.

Which makes me think they used blackmail or something similar in the transcript.

~~~
danbruc
I couldn't imagine how the AI could escape an hour ago, but after thinking about
it for a while I am more and more convinced that it is actually hard to contain
the AI. Containing the AI is like imprisoning a human for no reason, and this
will usually conflict with the gatekeeper's moral values. Just imagine talking
to a human suffering in a tiny prison cell without having done anything wrong,
instead of to the trapped AI. Can you see how it would seem the right thing to
open the cell's door? You are cruel for not opening the door.

------
DigitalSea
I share Stephen Hawking's concerns. I think AI is already at a point where maybe
10 or even 15 years down the road, artificial intelligence will be everywhere:
autonomous public transport, taxis, driverless vehicles, autonomous crime
prevention systems. The military is said to always be at least 10 years ahead of
public tech research and development; look what they've already got now, and
imagine another 10 years.

We've become a society that is too big and intelligent to fail, but the
arrogance of believing we are too smart and too big to fail could be our
downfall.

~~~
sidcool
I share the concern partially. It's similar to other technologies which have
downsides. Nuclear power for example. AI will have the power to help humanity
prosper or cause its downfall. The balance will be for us to decide, again
just like Nuclear power; for with great power comes great responsibility.

------
daemonk
What would the AI's motivation be? Will it understand what existence means and
have the "desire" to survive? Is it possible that a self-aware AI has already
occurred in some form many times in the past, but since it doesn't have the
anthropomorphic trait of wanting to exist, it never actively persisted?

So what we are really afraid of is developing an AI with human motivations. But
does the intelligence we want to artificially develop require human motivations?
Aren't they two mutually exclusive traits?

Maybe part of the fear is how we train the AI. If we imbue it with the total
knowledge of our history, then maybe it will become more "human" and have those
motivations. Is it possible to train it with hard data that isn't
anthropomorphized in any way?

~~~
marvin
An AI's motivation would be whatever it is programmed to do, which is why it is
so important to get this right the first time. Human motivations are incredibly
complex, human moral systems have not yet been formalized, and any attempt at
doing so breaks down at the edge cases.

This is not by any means a solved question, but there have been quite a few
publications on this already. There are a hundred subtle errors of reasoning
that ruin naïve and anthropocentric reasoning around morality and AI safety.
Have a look at the lesswrong blog, for instance. Lots of interesting reading.

~~~
jeremiah256
If human motivations can be quantified and programmed into an AI system, my
guess is 'loyalty' or 'patriotism' will be one of the first.

"It is lamentable, that to be a good patriot one must become the enemy of the
rest of mankind." Voltaire

------
bad_alloc
I recommend reading "Shadows of the Mind" by Roger Penrose. In that book he
argues that strong artificial intelligence isn't achievable on classical
computers, since the human brain seems to be able to do quantum computing in
certain molecular structures. These processes probably can't be simulated
effectively. So AI on today's computers probably won't work at all (although
quantum computers or artificial neural networks might).

~~~
dlss
This has been discredited (along with Penrose's earlier
incompleteness-theorem-based "AI is impossible" conjecture)

[http://en.wikipedia.org/wiki/Quantum_mind#Roger_Penrose_and_...](http://en.wikipedia.org/wiki/Quantum_mind#Roger_Penrose_and_Stuart_Hameroff)

~~~
ronaldx
Although specific claims might have been discredited, quantum processes are
relevant to biology, e.g.

[http://www.nature.com/nature/journal/v446/n7137/abs/nature05...](http://www.nature.com/nature/journal/v446/n7137/abs/nature05678.html)

~~~
dlss
This is true, and it's also true that such quantum processes exist in the
brain.

Penrose's claim was that those quantum processes are a significant part of the
computation that is consciousness. It was discredited by getting better upper
bounds on how much quantum computation the brain is capable of (i.e. very
little).

------
nroose
I have a great deal of respect for Hawking, but... lately it seems like he has
said a couple of crazy things. We (as in most humans) are NOT going to get off
the Earth. Earth will always be the best chance of a planet for humans; let's
take care of it. And AI is NOT anywhere near becoming really intelligent. Anyone
who has interacted with Siri knows that she has no real understanding. And while
the Google car and other autonomous driving systems are already safer and
probably better for traffic than almost all drivers, they still need, and most
likely will for a long time, a human driver to take over in circumstances that
actually require thought. AI is advancing, but it has always advanced slower
than predicted. A lot slower than predicted. And while there are signs that AI
will continue to be more and more useful as time goes on, and in some areas like
driving, recognizing captchas, and playing chess and Jeopardy it will outperform
humans in simple tests, it is not anywhere near being able to outperform the
human brain in a general way, nor will it be for a long time. And it is
certainly NOT similar to an alien (as in from outer space, not from another
country on Earth) civilization.

------
duckingtest
Intelligence is not magical. What matters is algorithmic complexity; at some
point, throwing more computing power at the problem is the only thing that can
help. Today, yes, maybe we could build an AI with close to human-level
intelligence, running on very expensive supercomputers eating enormous amounts
of energy. So what?

A human-level AI isn't going to figure out anything that humans can't figure out
themselves, so it's not going to design new, almost-magical ways to enhance
itself. Especially because the entire human race is basically one giant parallel
data-processing system, although the communication leaves much to be desired.
Moore's law isn't going to help; silicon technology is very close to stagnation
[1].

I don't think a singularity is possible until the artificial computation
capability potentially accessible to an AI is at least several orders of
magnitude higher than the human brain's, which is impossible with silicon-based
technology. Nothing to worry about.

[1] [http://www.extremetech.com/computing/178529-this-is-what-the-death-of-moores-law-looks-like-euv-paused-indefinitely-450mm-wafers-halted-and-no-path-beyond-14nm](http://www.extremetech.com/computing/178529-this-is-what-the-death-of-moores-law-looks-like-euv-paused-indefinitely-450mm-wafers-halted-and-no-path-beyond-14nm)

------
richardw
One thing I look forward to is greater logical input into high-level decision-
making. Emotions, egos, racism, discrimination, nationalism, aggression - all
these are things that we just don't need any more.

I do think we need exceptional care when working on such a system, but I also
think the human mind of 50,000 years ago leaves a lot to be desired when running
in the 21st century. A mind that is ultra-rational would be a good adviser,
helping to point out our blind spots and guide us to better thinking. We
shouldn't have to learn from all the same mistakes every generation.

(I'm certainly equally wary of blind acceptance of whatever science and
technology can create. A favourite article of mine is
[http://archive.wired.com/wired/archive/8.04/joy.html](http://archive.wired.com/wired/archive/8.04/joy.html)
- it has some very serious warnings about the consequences, even of my
above-mentioned thoughts.)

------
monotypical
Link to the actual article:
[http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html](http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html)

------
grondilu
Even if AI is harmless, there is still the risk of brain emulation. To me this
is a greater threat, because while we don't know how a fully artificial mind
would behave, we can get an idea of how an emulated human brain would. The
problem is that human beings are potentially dangerous: they have volition, and
they can enjoy destroying things. Basically, they can be narcissistic
psychopaths.

All it would take is a human being rich enough to have his mind uploaded into a
machine, who then gets greedy for more computing power, even if acquiring it
means taking over the world and driving humanity in its organic form to
extinction.

Sounds very far-fetched, but as Hawking puts it: there is nothing in the
physical laws that prevents a man-made device from performing as well as, or
better than, a human brain.

~~~
T-A
The computational requirements are overwhelming though. There is a short
discussion of this in section VIII of
[http://www.fqxi.org/community/forum/topic/2077](http://www.fqxi.org/community/forum/topic/2077)
(click the PDF link); the currently top-ranked supercomputer turns out to be
roughly five million times too slow to simulate a human brain in real time.
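
As a rough sanity check on that ratio (assuming the top-ranked machine at the
time was Tianhe-2 at roughly 34 petaflops; that figure is my own assumption, not
taken from the linked paper):

    # Back out the brain-emulation requirement implied by "five million times too slow".
    supercomputer_flops = 33.9e15  # assumed: Tianhe-2, ~33.9 petaflops (TOP500, 2014)
    slowdown_factor = 5e6          # "roughly five million times too slow"

    implied_requirement = supercomputer_flops * slowdown_factor
    print("%.1e FLOPS" % implied_requirement)  # ~1.7e23 FLOPS for real-time emulation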

~~~
threatofrain
This might be a naive question, but a simulation of a thing must always run more
expensively / slower than the thing itself, right?

And couldn't they build an organic-material AI with inorganic interfaces for
external manipulation?

------
owens99
Anyone know of a larger discussion related to this issue?

~~~
timClicks
The Less Wrong community, various subreddits (/r/singularity, etc). Good terms
to search for are "x-risk", "Jaan Tallinn" and "Nick Bostrom".

------
beejiu
I think the biggest problem as far as AI and automation are concerned is: what
is left for us humans to do? At the moment we work 42 hours per week (UK
full-time average), with many millions of people unemployed. Ultimately, our
goal as a species should be to build our infrastructure to be as efficient as
possible. In an ideal world, we wouldn't have to go to work for 42 hours per
week and many wouldn't need to work at all. We'd just enjoy ourselves 24/7. I
think this shift from the 'work hard' mantra to an efficient world where there
is much less 'manual' work to do (and that includes many office jobs) is one
that society, government and business underestimate.

------
tim333
Assuming strong AI arises, which I think is likely, and given human nature, I
imagine people will build all sorts of AIs, 'evil' and 'good', though those
terms are a bit subjective. Probably the likes of Kim Jong Un would love to have
an army of self-reproducing killer robots, Google will try to build nice ones,
and the US defence department will continue with drone-like things. I agree with
Hawking that it could be a problem, and I hope the nice ones win out, as they
probably will. Fingers crossed.

------
gitaarik
Bill Joy's article Why The Future Doesn't Need Us comes to mind:
[http://archive.wired.com/wired/archive/8.04/joy.html](http://archive.wired.com/wired/archive/8.04/joy.html)

------
nanomage
So, Mr. Hawking has seen 'The Animatrix,' I gather?

------
tmikaeld
Simplistically:

If Win = Continue

If Lose = Change

Basic evolutionary principle.

Giving it the tools to act would then be required for it to carry out that formula.

I argue that an AI with limitations would not work.
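
A minimal sketch of that win/continue, lose/change loop (the win test and the
"change" step here are placeholder toys, not anything from the comment above):

    import random

    def evolve(candidate, wins, change, steps=1000):
        for _ in range(steps):
            if wins(candidate):
                continue                   # win -> keep going unchanged
            candidate = change(candidate)  # lose -> change and try again
        return candidate

    # Toy usage: "winning" means matching a hidden target, "changing" means re-guessing.
    target = 42
    result = evolve(candidate=0,
                    wins=lambda c: c == target,
                    change=lambda c: random.randint(0, 100))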

