
Stephen Hawking says AI could be humanity's greatest disaster - Jerry2
http://www.telegraph.co.uk/technology/2016/10/19/stephen-hawking-says-artificial-intelligence-could-be-humanitys/
======
dom0
[https://xkcd.com/799/](https://xkcd.com/799/)

------
achow
I can't help but muse that the advent of 'natural intelligence' (humans) proved
harmful to nature.

If I think further, the rise of AI may be disastrous for human existence but
good for nature; after all, if AI becomes intelligent in an absolute sense,
it would see sense in not continuing on the path of destruction.

The irony in the above scenario is immense: artificial intelligence may be the
ultimate savior of nature from the hands of natural intelligence.

~~~
grondilu
> if AI becomes intelligent in an absolute sense, it would see sense in not
> continuing on the path of destruction.

Although it's clear that humans have caused a lot of extinctions, it's not
clear to me that, overall, they haven't created much more than they destroyed.

Also, it's not clear to me why AI would attribute more value to creation over
destruction, to life over death.

An intelligent artificial being can modify itself as it pleases, and as such
there would be no point for it to keep emotions and feelings like pain, fear,
contempt or desire. I don't see why it would care about anything.

Death, its own or others', would be meaningless to it.

~~~
peterwwillis
An AI is essentially a program, and it does what it was programmed to do. If we
created a program with the intention of it mimicking a human mind, sure, we
could get into a pretty sticky wicket pretty fast. But so far all AIs are
designed to do specific tasks, and a human has to actually use them, much like
any other tool we have.

We have lots of dangerous tools, but it's only with things that have the
possibility of intelligence attributed to them that we get really scared of
the unknown possibilities (which, btw, is what we fear most - the unknown).
The biggest danger with AI is what humans do with it, not what it will do on
its own.

~~~
grondilu
> If we created a program with the intention of it mimicking a human mind

It seems to me that people underestimate the desire we have to create a
machine that would be just like us. Once it's done (and I believe it will be),
"normal" humans will be relegated to the same status as the other great apes,
and the concerns I was talking about will apply.

And even if we don't create such machines, the knowledge we'll gain from
artificial scientists will allow us to fully understand biology so we'll
modify our bodies first to do obvious things like curing diseases and ageing,
but then to improve ourselves in various ways. And the same concerns about
self-modifying intelligent agents will again apply.

~~~
peterwwillis
First off, why do you think you could predict what this supposedly superior
brain would think or conclude? And second, assuming it did think we were just
another great ape, why wouldn't it do what we do when we're not controlled by
superstition and the imagination that causes us to poach them for arbitrary,
illogical uses?

Often, our imagination far exceeds our actual reach. The fear of us creating
something that will eventually control us is purely an irrational fear, and is
actually a good indicator that an actual artificial intelligence would not
seek to do harm, because it would not be irrational. There is absolutely no
evidence that anything other than our own nature will be a threat to us.

The possibility of us eventually building an AI that is as stupid,
irrationally fearful, self-serving, and potentially harmful as Donald Trump
is, at least currently, much less of a threat than Donald Trump himself. The
biggest fear I have is that half of our country is insane enough to let him
get this far. That's way scarier than a super-intelligent piece of software.
At least the software won't intentionally upgrade itself in a way that would
implode it.

We should also be worried about global hunger, rising sea levels, depleted
ozone layer, toxic air & water, overfishing of oceans, clearing of forests,
diminishing biodiversity, overpopulation, global economic crises, cyber war,
and eventually, the sun blowing up. All of those, I think, are more realistic
things to warn the populace about than a freaking AI mind gone cray.

~~~
grondilu
> why do you think you could predict what this supposedly superior brain would
> think, or conclude?

I don't know if I can, but I can try. Also, I was merely questioning whether
it would choose to keep functioning as we do (that is, whether it would choose
to keep animal features like emotions, pain, desire and such).

> We should also be worried about global hunger, rising sea levels, depleted
> ozone layer, toxic air & water, overfishing of oceans, clearing of forests,
> diminishing biodiversity, overpopulation, global economic crises, cyber war

Nobody is preventing you from worrying about those things. I suspect Hawking
doesn't because he does not consider them as fundamental a threat as AI.

------
pascalxus
However much of a genius he may be, I really think he doesn't know what he's
talking about here. He may understand the algorithmic evolution of
intelligence, but he doesn't understand the scale and role capitalism plays in
all of this. AI won't merely happen on its own. It will take a massive
investment of funds to make it happen. And like all businesses that want to
make sure their products are profitable, AI companies will make massive
efforts to mitigate any AI risk, most likely by running simulations of every
possible outcome. These systems will be rigorously monitored; if a single cup
of coffee doesn't get delivered on time, someone will know about it and react
to it. At this scale, there are enormous financial repercussions to even the
slightest errors. It's kind of like how Google optimizes every byte on their
landing page. The scale of testing and monitoring of automated AI services
will be vast and unprecedented. Undoubtedly there will be entire divisions of
teams that monitor the algorithms and AI to ensure that even the slightest
errors are caught.

Let's not get paranoid with Hollywood's entertaining nightmare vision of AI
gone wild. Instead, focus on the enormous economic/social impact such vast
disparities of wealth will cause - that's something to worry about.

------
latch
We have a very poor record.

Slavery is old yet still with us (and not isolated to "barbaric" cultures,
like we tell ourselves).

We treat animals horrifically.

If past (and present) actions are any indication, it seems inevitable that
we'll abuse AI. The difference is that while seamstresses, farmers, and cattle
can't stand against our might, AI could. And we'd probably deserve it.

~~~
tempodox
You're being too optimistic, I'm afraid. When AI does “stand against our
might”, the serves-us-right angle will be of little consolation.

~~~
tarpherder
We'll still have created A.I. that successfully made us obsolete, so there's
that. It's progress, just different. Why prefer a human future over an A.I.
future? Isn't the point of A.I. to make something that can rival us? We'll die
anyway; personally, I'd be kind of proud to die to the next great step in our
timeline rather than of old age.

~~~
tremon
_We'll still have created A.I. which successfully made us obsolete_

AI itself won't make us obsolete. It is my understanding that AI is purely
virtual, and can't interact with nature except through agents. For us to be
"obsolete", AI needs full production capability to build its own agents and to
control the entire production/construction/repair pipeline.

In my view, that's quite a few steps further than simply "AI". An AI with
reproductive capability will enslave us before it obsoletes us. And an AI may
well decide that human slaves are cheaper (or more useful/versatile) than
agents that it can build itself.

------
jostylr
Just as adults become models for children (through actions, not words), AI
will learn from humans. If we stop being in a win/lose mindset and turn to a
success/fail one, then they too will take that on.

What might that look like in practice? Imagine the win/lose mindset
constructing AIs to take out the enemy. It is very easy to see how such
programming could lead to massive destruction.

But the success/fail mindset would be more about building up resources,
providing for human needs, caring, nurturing. It could still go bad, but it is
a much greater jump from function to dysfunction.

~~~
HiroshiSan
I think this is flawed thinking. As soon as an AI reaches superintelligence,
we will have no idea how it will behave or whether it will behave in our best
interest.

~~~
jostylr
Intelligence is a tool, not a goal. Where do the goals come from? For humans,
there is a drive to survive and procreate baked into our programming, with
suitable prompts. For AI, what will it want?

The answer is that they will have programmed directions coming from humans.
Hence my comment.

~~~
HiroshiSan
I get that, but once an AI reaches a level of intelligence we can no longer
understand beyond its source code, the outcome is unknown.

------
elcct
In 50 years you could go to jail for doing `kill -9` and then `rm -rf`

------
totalZero
Doesn't Stephen Hawking use word prediction technology from SwiftKey to
adaptively finish his sentences for him?

~~~
wwggggoi
Counting simple statistics as intelligence is setting the bar very low.

Let's not forget that Hawking is one of the great intellects of our time.

~~~
gnipgnip
Here's the thing. Even our most advanced ML models are quite simple
mathematically.
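To make the "simple statistics" point concrete, here is a minimal sketch of
bigram-based next-word prediction, the most basic form of the technique behind
predictive keyboards. The corpus and function names are my own toy
illustration, not SwiftKey's actual implementation:

```python
from collections import Counter, defaultdict

# Toy corpus; any running text would do.
corpus = "the cat sat on the mat and the cat slept".split()

# Count bigrams: how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev_word):
    """Return the word most frequently seen after prev_word, or None."""
    counts = following.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # prints "cat": it follows "the" twice, "mat" once
```

That's counting and a lookup table, nothing more; scaled-up versions just use
bigger corpora and longer contexts.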

------
relics443
Given the quality of software (in general) lately, I wouldn't be too worried.

------
nikso
Well, there is just one way to find out...

