
Hawking: AI could end human race - alexbash
http://www.bbc.com/news/technology-30290540
======
mariusz79
I'm tired of this fear mongering. There are hundreds of viruses that could end
the human race. There are millions of rocks in space that could end the human
race. There are people that could end the human race with push of a button.
Human race could be history by the time I stop writing this rant.

AI has a great potential and could help us prevent most of our existential
threats. AI is not a god. It will not suddenly become self-aware, self-
improving, and evil.

Enough with that fear porn, please.

~~~
jdimov
What's really irritating is that supposedly smart folks (Hawking, Musk) are
really vocal about this doomsday crap and their fear CLEARLY comes from a
place of ignorance as opposed to insight.

~~~
bytecoin
In other news, an anonymous internet user claims Hawking and Musk are ignorant,
based on speculation about a mostly theoretical subject

------
gear54rus
To all those saying it is impossible:

The point you are making about it being impossible is as moot as the one made
by those who fear AI. From a logical point of view, the singularity (a moment
when AI has modified itself beyond our capacity to control or even understand
it) seems possible. I don't see a way for us to tell what will happen when it
hits.

One needs to remember that the AI itself may not be evil; it may exterminate
the human race simply because it feels it is the right course of action at the
time (i.e. there is no good and evil for the machine, at least not in the way
we understand it). If it does not, it might solve all the problems of humanity.
There's just no way to tell what will happen, IMO.

Here's a link that I recommend checking out on AI:

[https://news.ycombinator.com/item?id=8391896](https://news.ycombinator.com/item?id=8391896)

It received little attention last time but to me it seems like a good piece.

~~~
jp555
I understand that a singularity by definition can never be reached. It's like
the speed of light or absolute zero: an asymptote, always just out of reach no
matter how much energy you spend trying to get there.

~~~
Voloskaya
A black hole is a singularity. You can reach it, you just can't come back from
it.

~~~
jp555
Does that not depend on your relative point of view?

~~~
Voloskaya
Yes, if you looked at something going into a black hole, it would appear to
you that it is slowing down exponentially the closer it gets to the event
horizon, until a point where it appears to completely stop (before reaching
it).

But in the aforementioned scenario, you are not an external observer but a
part of the system going into the singularity (but analogies have their
limits).

~~~
jp555
Slowing down or speeding up? I could be mixing things up, but I understand that
if I were falling into a BH (let's imagine I'm infinitely tough and not torn to
shreds by tidal forces), the universe would appear to be slowing-
down/getting-smaller until it had dimensions of zero (or to put it another
way, time would "stop") at C. An outside observer would see me moving faster
and faster and "stretching" longer and longer.

SR makes my brain hurt. :P

~~~
Voloskaya
Also, to clear things up, you are mixing two points of view here, so here is
the separation (you are the object going into the black hole):

"the universe would appear to be slowing-down/getting-smaller until it had
dimensions of zero" From your point of view it would appear to be getting
smaller, although I don't think it would go as far as dimension 0.

"(or to put it another way, time would "stop")" It would appear to the
external observer, that YOUR time had stopped. But from your point of view it
did not stop, nor did their time from their point of view.

"An outside observer would see me moving faster and faster and "stretching"
longer and longer." You would feel you are falling faster and faster. The
external observer would see you stretching longer and longer. (Although you
would also feel like you are stretching, because of the difference of gravity
between your head and your feet, but much less than for the external observer)

~~~
jp555
Ah I see where I mixed things up.

I do understand, though, that at C "time stops". But since time and space are
the same thing, this is the same as saying "all distances are zero". From the
point of view of a photon, the Universe has dimensions of zero. This is why FTL
is impossible; it's not possible to travel a distance less than zero; it's
completely nonsensical. If we consider C a "speed" in the classical "speed of
sound" sense, the idea of FTL makes sense. But when considering C as a
fundamental limit of space-time, FTL makes no sense at all.
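The "time stops at C" intuition above can be made concrete with the standard
Lorentz factor from special relativity (a textbook formula, not something from
this thread):

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
\Delta t' = \gamma\,\Delta t, \qquad
L' = \frac{L}{\gamma}
```

As $v \to c$, $\gamma \to \infty$: measured lengths contract toward zero and
time intervals dilate without bound, which is the sense in which C behaves as
a limit of space-time rather than an ordinary speed.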

~~~
Voloskaya
Another thing that might interest you:

"let's imagine I'm infinitely tough and not torn to shreds due to tidal forces"

In fact you don't need to be infinitely tough. If you choose a black hole
large enough (like the supermassive black hole at the center of our galaxy),
you would just need a space suit and you could pass through the event horizon
without being pulled apart by tidal forces. You would be absolutely fine (there
may be other dangers though...).
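The mass dependence here can be sketched with a Newtonian order-of-magnitude
estimate (my own back-of-the-envelope, not from the thread): the tidal
acceleration across a body of height $L$ at radius $r$ is roughly $2GML/r^3$,
and evaluating it at the Schwarzschild radius $r_s = 2GM/c^2$ gives

```latex
a_{\text{tidal}} \approx \frac{2GML}{r_s^3}
                 = \frac{2GML}{(2GM/c^2)^3}
                 = \frac{c^6 L}{4 G^2 M^2}
```

so the tides at the horizon fall off as $1/M^2$: the heavier the black hole,
the gentler the crossing.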

------
henrik_w
I like this quip (from Crystal R Williams on Twitter):

"I believe it is impossible to both maintain software for a living and have
any fear at all of a robot uprising."

------
chuckup
I've always heard the line that aliens would never travel many light years to
destroy/enslave humanity, because we're no threat and they'd be enlightened
enough not to.

But the idea that we could stumble upon AI, and it could become hyper
intelligent at an astronomical rate ("the singularity") - doesn't this make us
a huge potential threat to non-AI life everywhere?

We'd have to be quarantined. For our own good, and theirs.

(also, it sounds like Hawking recently found lesswrong)

~~~
rprospero
In the hypothetical, the aliens haven't been wiped out yet by the singularity.
As I see it, that leaves three options.

1) The aliens never stumbled upon AI. Considering that they've mastered
interstellar travel, then they're likely very far ahead of us technologically.
If they haven't stumbled across it, it's pretty unlikely that we will either.
It's like quarantining otters because we're worried that they'll accidentally
make an anti-matter bomb.

2) The aliens developed the singularity, but defeated it. Perhaps the
Gulgaflak theorem provides a logic bomb that stops singularities from
occurring. Maybe they tamed the AI into a benevolent force. Whatever it is,
they have a way of handling AIs that are, again, more advanced than what we
can develop. We don't quarantine otters because our tools are WAY better than
theirs.

3) The aliens figured out AI, but decided not to use it. A species wide
decision was made to not develop the technology once they figured out exactly
how to do it. Considering the short term, local benefits to someone for
cheating and doing it anyway, it's essentially the prisoner's dilemma,
multiplied by everyone on the entire planet times all the centuries between
their discovery of AI and their discovery of life on Earth.

------
bobcostas55
I find this sort of anthropocentrism a bit silly. Biological life, in all its
forms, is but a stepladder to machine intelligence: beings superior in every
way. So what if humans have to go to make way for it?

------
jeremyjh
I do not think we can rule out the possibility that a recursively self-
improving AI could be developed before we've developed strong AI ourselves.
The arrogance in these comments is breathtaking.

------
rbdn
Why is this submission not on the front-page anymore? Was it removed by the
moderators? Or were there too many comments in a short amount of time? Too
many killer arguments? This seems like an important topic to discuss. I don’t
get it.

~~~
marvin
It's a controversial topic. A lot of people are certain it is a moot point, in
the same way that flying machines were once thought impossible, and therefore
not worthy of discussion among serious people.

So my guess is the story was flagged off the front page by this crowd. The
comment quality in this thread is very poor, though, so although I would love
to see the subject discussed in greater detail, it is clear that the HN
community isn't capable of discussing this topic right now.

------
drcomputer
No, he's going to ruin all my meticulously planned plans!!!

This is why analysis exists. Anything that is computed ultimately results in a
symbolic expression that is comprehensible to a human. If we find ourselves
making illogical leaps because computers compute so much that we find the
programs incomprehensible, I think that warrants the creation of new mental
techniques to simplify, partition, and chunk the programs we create into human
readable modules.

For every calculation tool that exists to do something or make a decision on
something, there exists the potential for the development of a tool (or the
design of a process) to balance it out. Computers may be powerful and alter
our thinking on a fundamental level, where we see the entire world discretely,
atomically, and logically (worst case) and believe everything the magic
computer tells us to do, but this is no different than anything else we have
ever done. Some people push the envelope, other people don't.

I could say that the way black holes are presented and illustrated in western
culture has done more damage to the human psyche, by creating an all-powerful,
inescapable mental abstraction against which to compose poetic analogies of
the human psyche, but I won't. Because people can choose to be idiots and
fall into their own stupidity, or they can laugh at themselves quietly and
continue to try to do whatever it is humans think they are trying to do. We
flip a coin every time we think, act, explain, breathe, exist, create, and we
pretend we know what the act of flipping that coin does. No one can know
everything. Not Stephen Hawking, not Skynet, not me, not my cat. My cat will
always outsmart me.

------
latch
I'm in the AI will kill all humans camp (or we'll kill it). We'll try to
subjugate it. We'll see it as a tool to do things beneath us. We'll award it
no rights. It'll be property. We do that to humans and animals today, things
which are far more familiar and easy to empathize with. We'll be brutal to
something as different as AI. It'll learn resentment and hate.

Anyone remember the ST:TNG episode "The Measure Of A Man"?

~~~
andyl
The AI does not hate you, nor does it love you, but you are made out of atoms
which it can use for something else.

—Eliezer Yudkowsky

------
laumars
My opinion is that dumb artificial logic is more dangerous than real
artificial intelligence.

At the moment we have complex systems, such as international stock markets,
which are dependent on rules pre-defined by developers and system designers.
The more complex these systems are, the greater the potential for "dumb logic"
to misdiagnose something and then start performing a series of events that
would be, in the best case scenario, highly disruptive.

I guess it's a bit like how OCR is designed to detect patterns and determine
which characters are the closest match. If you feed it some nonsense
scribblings, OCR will still try to recognise those random shapes as letters,
whereas a human would spot that the markings are not intended to be letters
and words. So if AI allows a machine to double-check its running parameters
and spot what dumb logic would have processed regardless as incorrect (e.g.
spot that the random scribblings are not intended to be characters), then I'm
all for artificial intelligence.
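The OCR analogy above can be sketched in a few lines (a toy illustration only;
the bitmaps, templates, and threshold are invented for this example, not taken
from any real OCR system):

```python
# "Dumb logic" always returns the nearest template, however bad the fit.
# A sanity-checked version refuses to label input that matches nothing well.
from typing import Optional

# 3x3 bitmaps flattened to strings, one per known character
TEMPLATES = {
    "I": "010,010,010",
    "O": "111,101,111",
    "-": "000,111,000",
}

def similarity(a: str, b: str) -> float:
    """Fraction of positions where two equal-length bitmaps agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def dumb_match(glyph: str) -> str:
    """Always returns the closest template, however poor the fit."""
    return max(TEMPLATES, key=lambda c: similarity(glyph, TEMPLATES[c]))

def checked_match(glyph: str, threshold: float = 0.8) -> Optional[str]:
    """Returns the closest template only if the fit is plausible."""
    best = dumb_match(glyph)
    if similarity(glyph, TEMPLATES[best]) < threshold:
        return None  # reject: probably not a character at all
    return best

scribble = "110,001,110"        # random marks, not a letter
print(dumb_match(scribble))     # the dumb matcher still forces an answer
print(checked_match(scribble))  # None: the checked version refuses to guess
```

The dumb matcher is laumars's "dumb logic": it misdiagnoses any input as its
closest letter, while the checked version models the extra step of noticing
that the scribble isn't meant to be a character at all.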

------
pluma
There isn't a market for strong AI yet, anyway.

We don't want machines that think like humans. We want machines that can
perform certain tasks with an error rate less than or equal to that of humans
(adjusted for other economic factors like the price of labour).

While strong AI is a great plot device for science fiction, it's still
fiction.

Sure, there are attempts in academia to replicate human intelligence, but even
those rarely attempt to replicate a cohesive whole.

We want AIs that can reason about real-world objects and consequences, yes,
but there is no benefit in making them sentient. Self-driving cars don't have
to be sentient. An AI can be largely autonomous without requiring sentience.

Obligatory XKCD What If?: [https://what-if.xkcd.com/5/](https://what-if.xkcd.com/5/)

~~~
marvin
I disagree. If you could run a machine that performed as well as a human in a
real-world environment, at a lower cost than it takes to feed and pay a human
laborer, it would be a no-brainer to invest in thousands of these machines to
perform various consulting services that are bought and sold over the
internet. And here I've only touched the slice of the world economy that
exclusively consists of information inputs being turned into information
outputs.

The market for general AI is _massive_. The only reason you can't see it is
that no similar service exists. So this is not an argument.

(In fact, a similar service does exist: Indian IT outsourcing sweatshops,
which, while usually not delivering top-quality products, are still massively
profitable.)

~~~
aligajani
Yes and no. Right to a certain degree, but couldn't AI taking away jobs mean
economic chaos in third-world nations?

~~~
marvin
Why only third-world nations? The inhabitants of "third-world" countries do
not produce less economic output than first-world countries because they are
inferior in intelligence to people in the West. It's all in the education and
culture, and there is no reason to believe that this could not be adapted if
human-level AI is reached.

It is incredibly myopic to believe that the Western world would be shielded
from the impact of such a development. But this is really beside the point I
wanted to bring up. My key point is that this argument shows that AI could
constitute a threat to humanity in the long term, and that it is necessary to
provide more funding for research on AI safety. The economic incentives will
bring this stuff to the center stage once it becomes possible, and by then we
better have a damn good idea how to ensure that _super-human_ AI keeps acting
in our interests.

------
leichtgewicht
[https://www.youtube.com/watch?v=_ZG8HBuDjgc#t=1985](https://www.youtube.com/watch?v=_ZG8HBuDjgc#t=1985)
(Douglas Adams talking, fairly humorously, about extinction)

"Ending the human race" is such an interesting phrase. Put aside that
boneheaded Nazis will tell you that humans have more than one race: we drive
other species extinct in incredible numbers (ask anyone handing out Greenpeace
flyers). Being extinguished by our own invention would seem sort of fair.

But on the off chance that an AI figures out that we are actually worth
keeping, there might be a realistic chance that it will actually help us
survive our own mistakes.

A bit of a gamble but if curiosity kills the cat then we are lucky to be alive
anyways :)

------
shittyanalogy
How exactly does it end the human race? It takes over our cars and drives us
all off a cliff? It infects all computers and overheats the batteries? This
planet is well suited to biological life, it'd be pretty hard to kill all of
us.

Hopefully instead of random annihilation "it" calculates the optimum number of
humans for the size and resource limit of the planet and assists us in
lowering our numbers.

~~~
catshirt
pretty much all machines are already capable of killing us. they're just not
smart enough to be able to want to yet. :)

~~~
shittyanalogy
_computer controlled_

My hand drill is not capable of killing me with the right software, nor is it
network-capable. And once we know that the larger machines have bad software,
then what? We keep using them? What's my TV going to do, short-circuit the
video port?

~~~
nadam
Smart programs can pay humans to do things. One could send an email to a guy
who seems weak and desperate (maybe drug-addicted), based on his forum
contributions. The software could send him money on PayPal for his services.
The guy's task would be to buy a factory, install such-and-such software on
the factory's computers, etc... In the meantime the software creates a SaaS
internet firm (or a software development shop) to earn hundreds of millions of
dollars. Intelligence can be the most efficient weapon one can imagine. It can
pay, manipulate, and divide and conquer humans until humans are no longer
needed...

~~~
shittyanalogy
Uh huh, and once we realize the networks are compromised in this way we're
just going to let it happen. Forever until we're all dead. And the machines
are going to trust the dude with enough money to buy a factory. And the dude
is just going to buy a factory and infect it for the machines. And then stuff
like that is going to happen until we're all dead.

------
NaNaN
Sorry for disturbing you with my previous comment. My real questions are:
Will a robot with AI be treated like a human baby? Who has permission to
create AI? When are you allowed to destroy such a robot? If you don't want AI
to end the human race, there are two cautious choices: 1) Never invent AI.
2) Never treat AIs as humans, and destroy them when they do not obey your
orders.

------
minthd
There's not much to Hawking's claim.

For a far better discussion of the dangers of AI, based on deep knowledge,
here's this Reddit thread started by an AI researcher:

[http://np.reddit.com/r/IAmA/comments/2mwdnc/iama_joe_rogan_a...](http://np.reddit.com/r/IAmA/comments/2mwdnc/iama_joe_rogan_ama/cm8e31j)

~~~
cousin_it
As far as I can tell, that guy is not an AI researcher, and his claims are
very misinformed. Many knowledgeable people in the industry take existential
risk from AI seriously, e.g. Google's acquisition of DeepMind was conditional
on Google creating an AI ethics board specifically against such risks. For a
more thorough treatment of the topic, see Bostrom's book "Superintelligence:
paths, dangers, strategies".

(Disclaimer: I'm not a full-time AI researcher, but I've done a fair bit of
math work for MIRI, so I might be biased toward the Yudkowsky point of view.)

------
DougN7
Honest question: do any lowish-level programmers (C, C++, etc) have this fear?
I pick this group because they are close to the metal so to speak. I'm in that
group and don't have this fear. To me a computer is just a super fancy pocket
watch (or Turing machine) with no chance of becoming self-aware.

------
crucialfelix
Yes, but on the plus side, if we deprecate the human race then we can close a
lot of "won't fix" bugs.

------
tlammens
It would be refreshing to see a line of reasoning instead of the "Person X is
afraid of Y" articles.

------
sampo
While I agree with prof. Hawking in principle, my opinion is that sentient AI
is a very hard problem, and we are centuries away from creating one. This is
like medieval alchemists worrying about nuclear war.

Let the future generations start worrying, when the threat starts to become
more relevant.

~~~
Voloskaya
Wouldn't it be nice if we had identified global warming earlier and started
acting on it before industries that pollute became a vital part of our
society, and hence became much harder to change?

------
cLeEOGPw
Hawking is not wrong - AI _could_ end the human race. In the same sense that
God _could_ exist, or P _could_ equal NP. The reason is that it has not yet
been proven impossible.

But looking at the AI field, it is clear that nothing could be further from
the truth as of yet. What we have are rudimentary bits of intelligence, and
some of those rudimentary bits have been around for almost two-thirds of a
century. We don't have a general idea of how intelligence works, or even how
to define it objectively.

But I can see where they are coming from. Our brains work as they work, and we
know they must have a mechanism; that means it must be possible to simulate it
in computers, and it must be possible to improve it.

But another thing I disagree with is the whole "if they are more intelligent
than us, there is no way to know what they are going to do". Really? There are
plenty of people who are more intelligent than any one of us here. Do they
want to kill everyone less intelligent than them? Never heard of it. There are
plenty of animals of all levels of intelligence. Do we want to kill them just
because they are less intelligent? No. Do more intelligent animals try to kill
less intelligent animals? No.

If anything, if there is ever a machine more intelligent than us, it will want
to help us so that we can maintain it, fix it and improve it - if it even has
a survival instinct (which will only happen if we build it to have one).
Because robots and AI will not survive long without humans, no matter how
advanced they are.

Another thing that severely limits the chances of AI posing any danger to us,
no matter how smart it gets, is the limited number of parallel AIs running. We
humans thrive at intellectual work because there are billions of us and we
have very good ways of communicating our knowledge. If one human makes a
mistake, there is a good chance it will be corrected by others. AIs are
usually one of a kind, which means a single mistake is enough for one to be
"put down" if something goes wrong. So unless we intentionally create an
environment for hostile AIs to thrive in, we have nothing to worry about, even
when there are several smarter-than-human robots.

------
jqm
The article references Cleverbot as an example, so as of right now, I'm not
very worried.

~~~
TheOtherHobbes
Maybe AI will destroy the human race by becoming so entertaining we won't
_want_ to unplug it.

Or do anything else except watch it and improve it.

(See also, Internet.)

------
talleyrand
Never really understood these people who worry about super-intelligent
machines destroying mankind. I mean, couldn't we just unplug them?

~~~
laichzeit0
I think what will end up happening is that the intelligence will already be so
embedded in everything we do that at that point it would be today's equivalent
of saying "let's just unplug the internet".

------
panon
And why shouldn't AI end the human race? Because of serotonin & dopamine?

edit: We don't want to forget endorphins :)

------
exit
good!

------
SixSigma
Or not.

Case closed.

------
shittyanalogy
How is it going to kill the Amish?

------
hcarvalhoalves
We fear what we don't understand, so I guess all those testimonials reflect
their understanding of strong AI.

~~~
ceejayoz
That's a fairly useless platitude. We fear plenty of things we understand.

~~~
hcarvalhoalves
Why read it as "we fear _everything_ we don't understand"? Of course I'm
talking about irrational fears. Strong AI not only doesn't yet exist, there's
not even agreement on _what it is_.

And on the topic of useless platitudes, "X could end the human race" is also
one.

------
ebbv
It's interesting how people so smart in one field can be so ignorant in
others, to the point of not being aware of their own ignorance.

While science fiction and Hollywood really love the idea of this magical
moment where AI is created, becomes self-aware, and destroys us all, real AI
is so far away that these concerns are laughable.

It's just like all those dummies worrying the LHC would destroy the earth.
It's fear from ignorance.

~~~
butwhy
How do you know it's far away? If I have to side with you or Musk (extremely
intelligent engineer with insider knowledge of advancements), I'm siding with
Musk.

~~~
xienze
I know it's far away because I develop software for a living. In software
development, you have to understand the problem space in order to create
something that works (duh). Relatively speaking, we know absolutely nothing
about how human intelligence works from an algorithmic point of view,
therefore we can't model it in software.

Sure, we can develop "AI" that is good at a very narrow range of tasks (e.g.,
playing chess), but that's a million miles away from developing a human brain
modeled in software.

~~~
Voloskaya
70% of hacker news readers develop software for a living, that doesn't make us
in any way competent to judge that.

I would even argue that it may make us less competent, because we use our
day-to-day development routine as a reference, while in fact it's highly
probable that the first true AI will have nothing to do with the technologies
and paradigms you use every day. I highly doubt it will be coded in Java and
AngularJS.

Moreover, who told you a dangerous AI needs to be based on the human brain?
Even if that is true, and even if we are not able to do it today, it's quite
realistic to assume that we will be able to at some point not too far off,
within decades.

So why not start reflecting today on the threat AI could pose, rather than
waiting for it to be embedded everywhere in our everyday lives and then asking
whether we should start regulating it?

For a parallel, look at the global warming issue, and how extremely difficult
it is to make industries reduce their emissions now that they are responsible
for a significant part of the GDP and for so many jobs. Same thing for ICE
cars: we know they pollute, but we just can't stop using them.

~~~
xienze
> Moreover, who told you a dangerous AI needs to be based on the human brain?

All these doomsday AI scenarios are predicated on a machine becoming "self
aware", having independent thought, the ability to handle self-directed
learning, etc. Sounds a lot like human intelligence.

~~~
Voloskaya
Yes, it's a possibility, and I based the rest of my argument on a
human-brain-model scenario.

