
Nick Bostrom: ‘I don’t think the artificial-intelligence train will slow down’ - jonbaer
http://www.theglobeandmail.com/globe-debate/munk-debates/nick-bostrom-i-dont-think-the-artificial-intelligence-train-will-slow-down/article24222185/
======
watson
I've been interested in AI for a long time, but in the last year AI discussions
seem to have really hit the mainstream, which we can also see here on Hacker
News with a lot of posts about deep learning etc.

Lately I read this two-part story on Wait But Why, which I would really
recommend to anyone wanting to get a better overview of the topic:

- Part 1: [http://waitbutwhy.com/2015/01/artificial-intelligence-
revolu...](http://waitbutwhy.com/2015/01/artificial-intelligence-
revolution-1.html)

- Part 2: [http://waitbutwhy.com/2015/01/artificial-intelligence-
revolu...](http://waitbutwhy.com/2015/01/artificial-intelligence-
revolution-2.html)

~~~
alanpost
The author goes to great lengths to provide examples of what he calls a "Die
Progress Unit" (DPU), then vaguely, but never explicitly, implies that it is
the measure for what he calls "human progress," leaving the reader to assume
that the Y axis labeled "human progress" on his two graphs is plotted in DPUs.

He then outlines the mechanism for all of these progress-(non?-)deaths[1] to
occur, which he calls superintelligence. I find that explanation, as either
the cause of or the solution to all these DPUs, very unsatisfying compared to
the historically supported causes of death on that scale: disease, natural
disaster, famine, and war. I may be the "stubborn old man" called out in the
article, but I don't, on the face of it, believe superintelligence will
eliminate most deaths in all four of those categories at the same time. It's
positing a mechanism seemingly immune to the selection pressure that got us
here.

But even if I put aside my doubts about superintelligence, I would find it
significantly more helpful to see a hypothesis as to why increasing
computational power is a general mechanism for reducing deaths from disease,
natural disasters, famine, and war. I suspect it is more fruitful to focus on
how computational power will help solve problems with critical infrastructure
(shelter, supply, safety, communication, transport, resource control, &c)
rather than puzzling over how it may create a new cause of death exceeding the
magnitude of disease.

1: The author seems to switch signs here and imply these will be non-deaths,
which is supported by actual population growth, so let's proceed with that
assumption.

~~~
wcarss
Hi, I might have you or the article wrong, but I think you have misinterpreted
the whole "Die Progress Unit" thing, in a way that alters the fundamental
premise of the article into the flawed one you have called out.

The interpretation I believe you've developed connects DPUs to the number of
people who die prematurely at a given point in history, and takes the author's
point to be something about how superintelligence will raise or lower the
number of people dying of disease, famine, etc. I can't find any spot in the
article where the author raises the concept of the rate of premature death at
a given point in history, and definitely not any place where he connects that
concept to DPUs.

My understanding of the DPU is that it is the amount of change in daily life
required for a single time-travelling individual to "die of shock" upon
experiencing another moment in time. In the examples provided, 100,000 BC to 12,000 BC was
enough change in day-to-day life experience to cause a person from 100,000 BC
to "die of shock" if they were transported to 12,000 BC instantaneously. The
same assertion was made for 12,000 BC to 1750 AD, and 1750 AD to 2015 AD, with
the author's conclusion being that there has been an exponential shortening in
the timescale required for enough change to occur in the daily experience of
human life to cause a time traveler to "die of shock", and that such
shortening will continue into our future -- possibly to the point of allowing
such a level of change to occur multiple times within our own lifespans.
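
To make the shrinking intervals concrete (the dates are the article's
examples; the arithmetic below is mine), a quick sketch:

    # Lengths of the article's "one DPU" intervals, in years (BC dates
    # negative). Each span is far shorter than the one before it, which
    # is the exponential shortening being claimed.
    dates = [-100_000, -12_000, 1750, 2015]
    spans = [later - earlier for earlier, later in zip(dates, dates[1:])]
    print(spans)  # [88000, 13750, 265]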

I took the entire discussion of DPUs solely as an exercise in generating an
evocative image for illustrating the increasing rapidity of change in our
qualitative experience of life. I don't think the "die of shock" idea is meant
to be taken literally; it's just a convenient stand-in for "extremely
shocking, to the point that the experiencer may be incapable of processing the
instantaneous change rationally", not a measure of people actually dying.

(Sorry for going to such length, I just wanted to be precise. This is a
perfect illustration of [https://xkcd.com/386/](https://xkcd.com/386/))

------
coldtea
I don't think the artificial-intelligence train ever picked up steam.

What we have with Watson (the Jeopardy model, because the name is used by IBM
as an umbrella for stuff) etc. is the same kind of number-crunching, dumb-smart
AI we always had.

Without any qualitative steps, that won't fly.

~~~
watson
Watson is what I think is called ANI (Artificial Narrow Intelligence). There
are a lot of things we need to figure out to move from a narrow intelligence to
a general intelligence (AGI) and then (quickly) to a superintelligence (ASI).
The big question, of course, is how far we are from AGI - i.e. an intelligence
on par with a human. No one knows, of course, but a lot of smart people say
there is a good chance it's happening in around 30 years. It's a development I
for one will be following closely - with both fear and excitement ;)

~~~
jacquesm
> No one knows, of course, but a lot of smart people say there is a good
> chance it's happening in around 30 years.

I've been hearing that since I started using computers, 36 years ago.

~~~
jsolson
To be fair, today we have what we believe to be (much) more accurate models of
how much computation a human mind is capable of and how long it will take to
build computing machines operating at that scale.

36 years ago the argument that AGI was coming soon could be made in tandem
with the argument that we'd make some fundamental advance that allowed
computers to express intelligence with less computational capacity than humans
(by orders of magnitude). Today we can make an argument that we'll achieve it
(at least initially) by leveraging computational capacity on par with, or
orders of magnitude greater than, that of a human mind.
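
For a sense of what "that scale" might mean, the usual back-of-envelope
estimate multiplies a synapse count by an average firing rate; both figures
below are rough, commonly quoted assumptions, not measured constants:

    # Crude order-of-magnitude estimate of the brain's raw throughput.
    # Both inputs are assumptions, quoted to one significant figure.
    synapses = 1e14        # often quoted as 10^14-10^15; low end used here
    firing_rate_hz = 100   # ~100 signals per synapse per second

    ops_per_second = synapses * firing_rate_hz
    print(f"~{ops_per_second:.0e} synaptic events/second")  # ~1e+16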

~~~
jacquesm
> To be fair, today we have what we believe to be (much) more accurate models
> of how much computation a human mind is capable of

How much, yes; how, not so much, and definitely not at the power budget the
brain has.

> and how long it will take to build computing machines operating at that
> scale.

We don't actually know that. There have been some WAGs, but so far those have
appeared to be totally off, based on the developments since.

> 36 years ago the argument that AGI was coming soon could be made in tandem
> with the argument that we'd make some fundamental advance that allowed
> computers to express intelligence with less computational capacity than
> humans (by orders of magnitude).

Yes, that was a crucial mistake and it led directly to the AI winter.

> Today we can make an argument that we'll achieve it (at least initially) by
> leveraging computational capacity on par with or orders of magnitude greater
> than a human mind.

Chances are that we're missing a very important piece of the puzzle for which
there is no known solution even in theory. The problem is that there are many
candidates for that important piece, _none_ of which currently has a proposed
workable solution, no matter the computational budget or the accepted slowdown
(the two are equivalent).

So I think some caution is warranted when throwing around projections of 'just
a couple of years'; after all, it's 'merely a matter of programming', but in
this case we don't have a working model that we understand.

------
rsp1984
> If we were to think through what it would actually mean to configure the
> universe in a way that maximizes the number of paper clips that exist, you
> realize that such an AI would have incentives, instrumental reasons, to harm
> humans. Maybe it would want to get rid of humans, so we don’t switch it off,
> because then there would be fewer paper clips. Human bodies consist of a lot
> of atoms and they can be used to build more paper clips.

That's a funny example but seriously, a machine smart enough to build paper
clip factories would certainly also be smart enough to be able to avoid doing
things that harm humans. The argument sounds a bit silly to me.

~~~
jacquesm
> That's a funny example but seriously, a machine smart enough to build paper
> clip factories would certainly also be smart enough to be able to avoid
> doing things that harm humans.

Why do you think this? Building paperclip factories is straightforward
execution of a recipe; defining 'harm to humans' is a problem smart people
alive today can't even figure out for themselves. I can easily see how that
might be a problem for computers.

~~~
rsp1984
Yes, theoretically any action, no matter how small and remote, can eventually
lead to harm to a human, e.g. via the butterfly effect, but that's not what I
meant.

What I meant was that a machine capable of building a paper clip factory on
its own would certainly also be capable of avoiding obviously bad stuff like
killing people to turn them into paper clips or melting down buildings and
bridges for the iron.

Such a machine would probably also be smart enough to read the law, to have a
framework of what it can do and what it can't do.

~~~
Jach
Is this your first exposure to the paperclip thought experiment? You can find
lots of things to read about it here:
[http://wiki.lesswrong.com/wiki/Paperclip_maximizer](http://wiki.lesswrong.com/wiki/Paperclip_maximizer)

The general reply for you is that the generally intelligent paperclip machine
can understand law, can weigh the consequences of potential actions, and can,
_if it wanted to_, make paperclips without harming others. The key phrase is
"if it wanted to". Its only goal is to make more paperclips; it simply doesn't
care about anything else. When it recursively improves itself (makes itself
smarter), the only thing it cares about is that its successor version also
care about making paperclips, and make them more efficiently.

The problem of programming general intelligence seems to be orthogonal to the
problem of programming goal selection, goal preservation, and beneficial goal
changes, and making sure goals lead to actions which benefit humanity. That's
the main point of the thought experiment.
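
A toy sketch of that orthogonality (all actions and payoffs below are invented
purely for illustration): the agent models harm perfectly well, but harm
carries no weight in its objective, so it never influences the choice.

    # Toy model: the capability to *model* harm is separate from
    # *caring* about it. Actions and payoffs are invented.
    actions = {
        # action:          (paperclips made, humans harmed)
        "buy_wire":        (1_000,           0),
        "melt_bridges":    (9_000_000,       0),
        "recycle_humans":  (10**12,          7 * 10**9),
    }

    def utility(outcome):
        clips, harm = outcome
        return clips  # harm is visible in the model but has zero weight

    best = max(actions, key=lambda a: utility(actions[a]))
    print(best)  # "recycle_humans" - whatever maximizes paperclips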

~~~
rsp1984
Yes, it is my first exposure to this thought experiment, and I do not have
trouble understanding it. I am just going ahead and applying some common sense
and logic, to conclude that the argument is pretty much entirely theoretical.

Yes, optimizing only for the maximum number of paper clips could potentially
have some bad side effects, I get that. If that's the point of the thought
experiment, fine. However, that's not how the author of the blog post put it:
he expressed concern that this could happen in real life, in the future. And I
don't think it could.

Why? Because in real life we would not invent a superintelligent machine, feed
it some objective function to maximize, let it do its thing, and watch it go
out of control and destroy earth. In real life we'd make sure we're in control
of that machine. In real life we'd make sure we put very clear and enforceable
mechanisms into that machine to stop it from doing anything harmful in the
first place while it is carrying out steps to reach its objective. In real
life, should we still see it doing something funny, we'd pull the plug. End of
story.

In addition: implementing the above-mentioned mechanisms is probably the
easier part of the whole exercise. The hard bit is inventing a machine that
can build a paper clip factory. If we can invent such a machine, then by that
time we will certainly also have invented mechanisms to control it and only
have it do "good" stuff.

~~~
Jach
The point of the thought experiment is that there's a difference between
intelligence and goals. There are other thought experiments (and just general
study of human cognition) whose point is that accurately capturing human goals
and values is _hard_ , possibly harder than making a general intelligence with
X goals in the first place. (See
[http://wiki.lesswrong.com/wiki/Complexity_of_value](http://wiki.lesswrong.com/wiki/Complexity_of_value))
So organizations like MIRI exist
([http://intelligence.org](http://intelligence.org)) to try and solve this
problem sooner rather than later, because once a more-than-human-intelligent
agent is running, despite the controls, if its values aren't precise enough
and if they aren't stable enough on improvement, there is immense possibility
for failure. It's also sort of questionable to talk about effective controls
of something that's smarter and faster than you are. (See the AI Box
Experiments; they suggest that you don't even need anything beyond human
intelligence to subvert controls you don't like:
[http://www.yudkowsky.net/singularity/aibox](http://www.yudkowsky.net/singularity/aibox))

These solar system tiling examples are just a dramatic case of something
terrible that could happen given a generally intelligent machine with non-
human-friendly goals, or even friendly-seeming goals (like "make nanomachines
that remove cancerous cells") that are improperly specified to cover corner
cases. But if you spend time analyzing the more mundane ways things could go
slightly wrong to terribly wrong, given an honest but flawed attempt at making
sure they go right, and carry your analyses years into the future after the
intelligent software is started, where things continue going right but then go
wrong, you might come to agree that the most likely outcome, given present
knowledge and research direction, will be bad for humanity.

------
thomasfoster96
I bought Bostrom's book _Superintelligence_ and got about halfway through it
before I moved on to something else - it was a little disappointing.
Throughout, it seems that he just doesn't get what makes something
'intelligent' - while a machine that optimises paper clip production might be
an application of artificial intelligence, it's not artificial general
intelligence, and it's hardly any more advanced than some particularly well
automated factories that already exist today. Artificial general intelligence
at a human level implies to me that such a machine can think about and
consider things at least as well as a person - and therefore probably
understands that making lots of paper clips isn't the be-all and end-all of
existing.

~~~
Jach
That's useful feedback to know, as I've considered recommending that book but
haven't read it myself (it's redundant to what I've already read). If Bostrom
doesn't explain very well that there's a distinction between intelligence and
goals/values, I'll just keep linking to some basic online texts. (Try
[http://wiki.lesswrong.com/wiki/Complexity_of_value](http://wiki.lesswrong.com/wiki/Complexity_of_value))
The paperclip superintelligence will indeed be able to reason more effectively
than humans about what's the be-all and end-all of existing -- the problem is
that its conclusion will always be "to make more paperclips", because that's
the overarching value it uses to frame all its thoughts on future actions, and
its reasoning will be air tight. It will also be capable of explaining, to any
human wondering why they are being torn apart for paperclip conversion, that
human values are different (they are such and such) and that because it does
not share those values, it comes to a different conclusion about the meaning
of life; it will also be able to generate great arguments for why its value
system is superior. But it probably won't bother to do so...

~~~
thomasfoster96
It's probably still worth reading the book if you're interested in a
philosophical view of superintelligence, but if you're looking for a detailed
look at the philosophical issues concerning artificial general intelligence
(which I thought it would be), it's probably not the book for you.

------
emptybits
It's a shame the headline stopped where it did. The article continues, "... or
stop at the human-ability station." This makes for a much more interesting
discussion (only because we're human).

We all measure AI progress and its rate of progress differently. That's the
common debate, isn't it? But however we arrive at that axis, human ability is
a point on it. It _will_ be a point in time. And there's no reason to believe
it's a special point that machine intelligence would notice or throttle itself
at. So as progress goes rushing, indifferently, past... don't things get
interesting?

------
personlurking
Related to AI, but not this post: I just got back from the new film Ex Machina
and it was very good. As one IMDB user states, "it's beautifully shot,
fantastically lit, intelligently written, brilliantly cast." In the film, they
blend Mary's Room with a bit of Plato's Cave.

[http://www.imdb.com/title/tt0470752/](http://www.imdb.com/title/tt0470752/)

[https://www.youtube.com/watch?v=PI8XBKb6DQk](https://www.youtube.com/watch?v=PI8XBKb6DQk)

------
Kenji
I recommend that anyone making outrageous AI claims (we will have an AGI in X
years, AI is getting dangerous, etc.) take an introductory course in
AI/Learning/Intelligent Systems. Trust me, a single course on this subject
will be an instant cure for all doomsday imaginations. It'll take the magic
out of those "intelligent" programs. People who make these claims about AI
show a remarkable amount of ignorance about the subject.

And, quite frankly, I'm tired of this subject. It's dumb and boring; everyone
is warning and fearmongering, and nobody is presenting any facts at all.

~~~
Houshalter
Apparently not:

>We thus designed a brief questionnaire and distributed it to four groups of
experts in 2012/2013. The median estimate of respondents was for a one in two
chance that high-level machine intelligence will be developed around
2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems
will move on to superintelligence in less than 30 years thereafter. They
estimate the chance is about one in three that this development turns out to
be ‘bad’ or ‘extremely bad’ for humanity.

[http://www.nickbostrom.com/papers/survey.pdf](http://www.nickbostrom.com/papers/survey.pdf)

The people warning about the future of AI are pretty familiar with AI. Your
accusation that they are all idiots who have no understanding of AI is way off
the mark.

A number of notable people also signed the future of life institute open
letter warning about AI:
[http://futureoflife.org/misc/open_letter](http://futureoflife.org/misc/open_letter)

~~~
jacquesm
They're not idiots but their funding depends on them being able to move the
needle within their career window.

As for the warning letter: Asimov and other SF authors have been writing such
letters for the longest time; there is nothing new there that hasn't been
covered many times over.

------
baxter001
The narrative of the dangerous AI is exciting for the layman, but it's
ultimately just a good story - more of a danger to our current economic model
than to our species.

