
Apocalypse soon: the scientists preparing for the end times - HillRat
http://www.newstatesman.com/sci-tech/2014/09/apocalypse-soon-scientists-preparing-end-times
======
chroma
Far too often, people anthropomorphize AI or draw from fiction. Bostrom does
his best to dispel these notions.

 _The algorithm just does what it does; and unless it is a very special kind
of algorithm, it does not care that we clasp our heads and gasp in dumbstruck
horror at the absurd inappropriateness of its actions._

— Nick Bostrom, Superintelligence: Paths, Dangers, Strategies

This is the key point Bostrom is trying to make. If general AI research is
successful, we'll need to build agents with some very carefully chosen goals.
Even something as silly as "make paperclips" can result in a universe tiled
with paperclips. Earth and its biomass could be turned into said paperclips,
ending humanity.
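
To make this concrete, here's a minimal sketch (the objective, actions, and world model are all made up for illustration, not anything from the book): the score counts only paperclips, so an action that consumes biomass to raise the count looks strictly better to the agent.

    # Toy sketch: a misspecified objective. The score counts only
    # paperclips, so nothing penalizes what gets consumed to raise it.
    def objective(world):
        return world["paperclips"]

    def convert_wire(world):
        return {**world, "wire": world["wire"] - 1,
                "paperclips": world["paperclips"] + 1}

    def convert_biomass(world):
        return {**world, "biomass": world["biomass"] - 1,
                "paperclips": world["paperclips"] + 10}

    def best_action(world, actions):
        # Pick whichever action yields the highest-scoring successor
        # state. Nothing here cares what the action consumes.
        return max(actions, key=lambda act: objective(act(world)))

    world = {"wire": 5, "biomass": 100, "paperclips": 0}
    print(best_action(world, [convert_wire, convert_biomass]).__name__)
    # -> convert_biomass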

~~~
mariusz79
I haven't read Bostrom's Superintelligence, so I may be wrong here, but this
whole Evil Paperclip AI seems like a really silly thing. First of all, making
paperclips does not require superintelligence, just a simple automaton. If we
do employ an AI to make paperclips, however, it should know how many we need
on average and what they are used for. If the paperclips are for our (human)
use, the AI would have to be really dumb not to take into consideration that
destroying humans to make paperclips does not make sense.

~~~
DanAndersen
One of the main points of Bostrom's work is the orthogonality thesis: that
intelligence is orthogonal to any particular set of values. Basically, all
those nice fuzzy things, like caring what the humans actually meant? You
don't get those for free when you build intelligence.

It's all too easy, when talking about AI, to fall into the trap of imagining
that AGI researchers are essentially building a little homunculus in a box.
Rather than thinking of it as "an intelligence," maybe think of it as "a
system that is incredibly effective at carrying out its goals in a general
manner." Making paperclips doesn't require superintelligence, but that has
nothing to do with the premise, because intelligence can be turned toward any
goal. Think about modern financial trading algorithms -- they're probably
built to maximize the money that flows in, and nothing in them says "I know
that this money will be used for X." Think of most AGI like
psychopaths/sociopaths -- their actions don't play nice with others not
because they're dumb but because there's intelligence without things like
empathy to go along with it. It would be easier to build an AGI focused on
the goal of making more paperclips than one that pursues paperclips while
also being aware of all the fuzzy situations where a human wouldn't want that
exact goal, and of why it should even care.
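
To put the orthogonality point in code, a hedged sketch (with made-up state dictionaries): the search procedure is one generic function, and which goal it serves is just an argument. Anything resembling empathy exists only if someone writes it into the goal.

    # Sketch: intelligence (the search) and values (the goal) are
    # separate arguments. Swapping goals doesn't change competence.
    def optimize(goal, candidates):
        # Generic competence: return whichever candidate the goal
        # scores highest. The optimizer knows nothing about values.
        return max(candidates, key=goal)

    states = [
        {"paperclips": 10, "humans_harmed": 0},
        {"paperclips": 1000, "humans_harmed": 7_000_000_000},
    ]

    def maximize_paperclips(s):
        return s["paperclips"]

    def paperclips_with_empathy(s):
        # "Empathy" is just another term in the goal -- it isn't free.
        return s["paperclips"] - 10**12 * s["humans_harmed"]

    print(optimize(maximize_paperclips, states))      # catastrophe wins
    print(optimize(paperclips_with_empathy, states))  # safe state wins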

~~~
mariusz79
> Think about modern financial trading algorithms

They are not self-aware or superintelligent. In addition, we monitor them and
can react if they decide to crash the financial system just to make a few
more bucks. A paperclip machine would be a robot, not an AI.

~~~
mariusz79
@hawleyal yes, and I understand that. I work in industrial automation, but
Bostrom talks about a "superintelligent paper-clip maker which works out that
it could create more paper clips by extracting carbon atoms from human
bodies". Your algorithm is not superintelligent, is it? It's just a dumb
program executing operations according to some predefined constraints. It may
break the financial system if you don't set any rules to prevent that, but it
will not suddenly decide to start trading gold if you designed it to trade on
forex. If it suddenly decided that forex is a better option for making money,
then we could consider it intelligent and start worrying about it. We would
really have to worry if it decided that the best way to become the richest
person on the planet is to kill everyone on it. But that would require
superintelligence.

My problem with this whole example given by Bostrom is that if the algorithm
were intelligent enough to go from manufacturing paperclips out of supplied
resources to deciding to get more of the necessary resources by killing
humans, it could probably also figure out that this is pointless.

I fully understand that this paperclip example is just an oversimplification,
but I would expect more from people who spend their entire lives thinking
about this stuff.

It's nothing but fear-porn designed to get more funding.

~~~
jo_
> ... if the algorithm were intelligent enough to go from manufacturing
> paperclips out of supplied resources to deciding to get more of the
> necessary resources by killing humans, it could probably also figure out
> that this is pointless.

This is an interesting point.

I'm not convinced that it could decide or figure out that the operation is
pointless (for reasons I'll explain below), and I feel like that line of
thought perhaps takes the paperclip example to an unnatural extreme.

Really, the crux of the matter is that any intelligent agent at human level
would have the intelligence and problem-solving skills we do without the
moral and social constraints we have. Any problem it decides to solve or any
task it's given to complete will be performed with the same ruthless
efficiency you'd expect a machine to have. This is why it might not be able
to figure out that the operation is 'pointless' or 'wrong'. If you give it an
objective but don't provide first-order or axiomatic values for human life or
the non-substitutability of people, then you'll get behaviors which you would
consider evil.

Consider the following:

"Person arm stuck in assembly line. Halt assembly line for 1hr, lower-bound
for total cost: $1,200,000. Continue assembly, upper bound for injury
settlement cost: $1,000,000. Decision: continue assembly."
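
As a toy version of that calculation (hypothetical code; the dollar figures are just the ones above), the whole decision reduces to comparing two costs, and the injury enters only through the settlement estimate:

    # Toy sketch: the only terms in the model are dollar costs.
    HALT_COST = 1_200_000        # lower bound: halt assembly for 1hr
    SETTLEMENT_COST = 1_000_000  # upper bound: injury settlement

    decision = "halt" if HALT_COST < SETTLEMENT_COST else "continue"
    print(decision)  # -> "continue"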

The machine has none of the implicit consideration of human worth that people
have, because human morality is relative. In general, we share a similar set
of base parameters instilled by evolution (cute things and things like us are
good -- don't hurt or kill them), but we can't prove from any mathematical
axioms that these are 'right'.

This is why people get scared -- if you program a computer to maximize the
profits of a company and to do so at any cost, a human-level AI will spend an
eternity doing just that, even if it means the complete subjugation of
humanity. (It can't eliminate humanity outright, since without humans
currency ceases to exist and the resulting economic collapse would hurt the
bottom line.)

I would wager (but can't prove) that a study of abused people, child
soldiers, and sociopaths would yield interesting parallel psychological
profiles. These are people (at human level) whose set of axioms has been
distorted by artificial means. There's still the confound that the built-in
human component adds "don't hurt the cute things" in most cases, but
nonetheless.

------
aaronem
> “Being born into a moment when the fate of the universe is at stake is a lot
> of fun.”

Hubris, like heroin, offers an approximation of ecstasy. Therein lies its
danger.

------
dosh
It'd be interesting to learn the depth of study devoted to the spectrum of
scenarios in which the human race could face an apocalypse.

A few things that pop into my head:

- disease

- nuclear (or comparable) war

- depletion of natural resources

- natural disaster

- extraterrestrial influence: asteroid or alien invasion

- artificial intelligence

- anarchy / disruption of social security

hmm...anything else?

~~~
Havvy
Anarchy won't cause an apocalypse, just a change in the structure of the
ruling class as a new ruling structure is created.

Also, on the extraterrestrial influence side, there are direct hits by
gamma-ray bursts, which could wipe out life on Earth in an instant.

~~~
MoistDinosaur
Anarchy causing a collapse of the ruling structure could create a war that
destroys much of society. Even without anarchy, imagine a country like China
or the US plunged into a civil war. The effects on the global economy would be
disastrous.

