
The Doomsday Invention - pc
http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom
======
industriousthou
I'm not really well-read on this stuff, but do most people think that deep
brain-computer interfaces are farther off than self-improving artificial
general intelligence?

I usually see arguments about harmful AGI based on the idea that AGI will be
able to optimize itself and acquire computational resources to become powerful
in a scary way.

I just sort of think that the human brain is already a tremendously powerful
computational resource, and augmenting it with weaker AI is going to be easier,
cheaper, and ultimately much more powerful than AGI. But of course that depends
on the development of really powerful BCIs.

Why not develop AI systems that complement the already insanely sophisticated
computational system of the human brain instead of trying to replace it? Or
enable groups of humans to work together along with AI to solve really hard
problems?

People often seem to think of AI as an arms race, but is it possible that
systems that leverage human intelligence may have an insurmountable advantage?

~~~
mirimir
I agree. Any system with potential for strong AI will be more immediately
useful for augmentation. Although it's dated, I recommend _Disappearing
Through the Skylight_ by O. B. Hardison, Jr.

------
ecksor
_" the universe appears lifeless not because complex life is unusual but,
rather, because it is always somehow thwarted before it becomes advanced
enough to colonize space."_

The universe appears lifeless because it is a Dark Forest.
[http://www.goodreads.com/book/show/23168817-the-dark-forest](http://www.goodreads.com/book/show/23168817-the-dark-forest)

~~~
ajmurmann
That was a great book. My only issue was the phoney science around locating
where a signal comes from.

------
skybrian
I'm wondering how much Eliezer Yudkowsky and the LessWrong crowd were
influenced by Bostrom or vice versa. This all sounds very familiar, but the
dates seem earlier:

"Bostrom introduced the philosophical concept of 'existential risk' in 2002."

~~~
Animats
_" Bostrom introduced the philosophical concept of 'existential risk' in
2002."_

Oh, come on. This has been a theme in science fiction going back to at least
the 1940s. Jack Williamson's "With Folded Hands" (1947) is probably the
clearest early writing on the subject.

 _" The car turned off the shining avenue, taking him back to the quiet
splendor of his prison. His futile hands clenched and relaxed again, folded on
his knees. There was nothing left to do."_

~~~
j1o1h1n
Also Fredric Brown, "Etaoin Shrdlu", 1942, which I have only read synopses
of.

~~~
MichaelMoser123
Also, Karel Čapek wrote R.U.R. in 1920 (well, it's about assembled artificial
biological beings, so you could say that it is not about AI).

[https://en.wikipedia.org/wiki/R.U.R](https://en.wikipedia.org/wiki/R.U.R).

And there is the Golem, which is even earlier.

Also, I find it funny how much attention this theoretical problem receives
when we don't have a good explanation of what intelligence is and how it
works.

~~~
MichaelMoser123
Also, all these stories seem to project our own behavior onto robots. Humans
and apes like to be alpha males because it gives them an evolutionary
advantage; by extension it seems natural that robots will also have the same
drive for power. But is that really true?

I think that the robot will not have such an inbuilt desire; if it is based
on reasoning, then the machine might actually be more reasonable. In other
words, there is enough room for both humans and robots.

~~~
MichaelMoser123
Mr. Yudkowsky says: "Moore's Law of Mad Science: Every eighteen months, the
minimum IQ necessary to destroy the world drops by one point."
[http://www.azquotes.com/quote/819025](http://www.azquotes.com/quote/819025)

The point is that our gadgets keep becoming more powerful and more potent as
weapons; given our long history of past abuses, it is fairly easy to
extrapolate to the future.

What I still don't understand: would intelligence be an inhibiting factor?
More reasonable humans are supposedly less destructive; maybe the same goes
for machines.

I don't buy the argument about competition for resources; with enough effort
you can always stretch things so that there is enough for everybody and
everything.

Another interesting aspect: once upon a time people would become very agitated
when discussing politics (that was when we still had ideological differences,
and when people thought that their stance mattered); in our time we have
discussions about sci-fi instead.

------
sgt101
Jaron Lanier is quoted here, but his compelling (to me) arguments on this
topic are not aired. I recommend: [http://edge.org/conversation/jaron_lanier-the-myth-of-ai](http://edge.org/conversation/jaron_lanier-the-myth-of-ai)

As the gentleman says, a lack of Autonomous Agents is not the thing standing
between humanity and oblivion. The conspicuous surplus of super-empowered
agents (as in "I could leave this room and 80 million people would be dead 40
minutes later") is.

As for the Royal Society stuff: for the record, it's not learning, and it's
not the machines. I can say it again if anyone bothers to listen, although
given that it's not what will get newsprint, sell books, or get on telly, I am
pretty sure that no one will.

Ho hum.

(edited 'cos I forgot a bracket, which was why I gave up c)

~~~
rictic
That's a false dilemma. It would be wise to defend against every existential
threat in proportion to its likelihood. The threat of nuclear war is well
known, and significant resources are spent on reducing its likelihood. Until
very recently the same could not be said for the threat of unfriendly AI.

We can do both.

~~~
kansface
> That's a false dilemma. It would be wise to defend against every existential
> threat in proportion to its likelihood.

Lives saved versus dollars spent is a better metric. Having said that, the
Singularity crowd tends to lump all potential future lives into the saved
category. The question of allocating resources to avoid existential threats
very quickly turns into a Pascal's mugging in favor of AI safety (think of the
trillions of future children).
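
To make the mugging concrete, here's a toy expected-value sketch; every
probability and life count below is invented for illustration, none come from
the thread or the article:

```python
# Toy expected-value comparison illustrating the Pascal's mugging above.
# All numbers are made up for illustration.

nuclear_risk_reduction = 0.01   # assumed chance the spending averts catastrophe
nuclear_lives_at_stake = 1e9    # assumed lives lost in a full nuclear exchange

ai_risk_reduction = 1e-10       # assumed vanishingly small chance of helping
ai_lives_at_stake = 1e15        # "trillions of future children", and then some

ev_nuclear = nuclear_risk_reduction * nuclear_lives_at_stake  # 1e7 expected lives
ev_ai = ai_risk_reduction * ai_lives_at_stake                 # 1e5 expected lives

print(f"nuclear prevention: {ev_nuclear:.3g} expected lives saved")
print(f"AI safety:          {ev_ai:.3g} expected lives saved")

# Bump the future-lives estimate a few orders of magnitude (1e15 -> 1e19)
# and AI safety dominates any allocation, no matter how tiny its probability
# of success -- that is the mugging.
```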

> The threat of nuclear war is well known ... until very recently the same
> could not be said for the threat of unfriendly AI.

Enough nuclear weapons exist to kill all humans forever. Strong AI does not
exist; expert estimates for its arrival range from decades to over a century.
At the same time, the people making the most noise seem to come from the side
of avoiding existential threats, not from AI research. I am (highly) skeptical
they will produce anything of value. Let me rephrase: would you expect any
productive work in 1840 by philosophers concerning potential doomsday weapons
yet to be created by physicists?

~~~
vilhelm_s
Nowhere near enough nuclear weapons exist to kill all humans. Even in the
1980s, people were predicting between 20 and 100 million immediate American
deaths from a full-scale Soviet attack, and there are far fewer nuclear
weapons today. See e.g. the discussion in this comment thread:
[https://slatestarcodex.com/2015/10/31/ot32-when-hell-is-full...](https://slatestarcodex.com/2015/10/31/ot32-when-hell-is-full-the-thread-will-walk-the-earth/#comment-255524)

It has been suggested that humans recovered from a population bottleneck of
fewer than 30,000 individuals (the Toba catastrophe). A nuclear war would
leave billions alive.

------
TazeTSchnitzel
What makes artificial intelligence more likely to destroy humanity than a
person?

~~~
purpled_haze
It isn't AI specifically. Any intelligence capable of taking over or shutting
down a wide range of systems, or even just a few critical ones, needs to be
closely monitored if there is more than a small chance of it causing massive
harm.

Now, if that intelligence also far exceeds our own in certain aspects but is
not empathetic enough, and if it were to have a keen understanding of how to
control humans, and if it were much more able to quickly control and adapt
software and hardware, then it would become a much bigger risk.

But you're right: in theory, there could be a natural intelligence that poses
the same level of risk as an artificial one. Most of us, including myself,
are just not aware of one.

~~~
fennecfoxen
> _Any intelligence capable of taking over or shutting down a wide range of
> systems,_

That's a thing I don't care for about most science fiction. The AI entity can
just magically take over every technology it's connected to, because magic. No
concern about the computational complexity of breaking the code-signing
certificates on the affected computers. It's like everything on the Internet
of Things is hopelessly and completely insecure!

... okay now that I type that out, maybe that's more realistic than I gave it
credit for after all ... :b
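
For what it's worth, the defense being hand-waved away looks roughly like
this: a minimal sketch of signed-firmware verification, using Python's
`cryptography` package as an illustrative stand-in (nothing here is specific
to any real device or to the article). Getting a tampered image accepted means
forging an Ed25519 signature without the private key, which is computationally
infeasible:

```python
# Minimal sketch of code signing: a device only accepts firmware whose
# signature verifies against the vendor's public key.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: sign the firmware image with the private key (kept offline).
vendor_key = Ed25519PrivateKey.generate()
firmware = b"legitimate firmware image"
signature = vendor_key.sign(firmware)

# Device side: only the public key ships with the device.
public_key = vendor_key.public_key()
public_key.verify(signature, firmware)  # returns silently: image accepted

# An attacker's modified image fails verification; producing a valid
# signature without the private key means breaking Ed25519 itself.
try:
    public_key.verify(signature, b"malicious firmware image")
except InvalidSignature:
    print("tampered image rejected")
```

Of course, that only holds for devices that actually verify signatures, which
is exactly where the "maybe that's more realistic than I gave it credit for"
caveat bites.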

------
bhouston
I instantly noticed a Three.js / WebGL animated background. Sort of neat. I
think it is by Jono Brandel:
[http://about.jonobr1.com/](http://about.jonobr1.com/)

------
Animats
Wasn't this covered on HN a few days ago?

~~~
dang
It has been posted many times, but none of the posts rose above the
'significant attention' threshold described in the FAQ. This one does, though,
so it'll be treated as a dupe if it gets posted again.

[https://news.ycombinator.com/newsfaq.html](https://news.ycombinator.com/newsfaq.html)

Btw, we're working on a better approach to duplicate detection that will
reduce the number of reposts and will more often give credit to the original
submitter.

------
flashman
"Before the prospect of an intelligence explosion, we humans are like small
children playing with a bomb."

A bomb is dumb, its explosion just raw chemistry. I would argue we're like
small children playing with a sleeping parent: when it finally wakes up, it's
going to have its own agenda for us.

~~~
Filligree
That gets the 'intelligence', but misses the 'amoral'. Parents tend to have
their children's best interests in mind.

If we're playing this game, then humanity is more like a small child trying to
wake up Cthulhu in hopes that he'll play nice.

~~~
Retra
You're going to build an amoral, powerful computer and it's going to do what?
Slap the fork out of your hand every time you try to eat something?

We build machines using a highly human-dependent artificial selection process.
That's not going to change, because the number one feature being selected for
is usefulness to humans. I'm confused about how that's going to result in an
even modestly powerful, fully integrated machine that is not demonstrably
useful to us.

That's a bit like saying we should reconsider having sex because our babies
might evolve grenades for hands, and that would be suicidal.

------
mirimir
At least at first, the most powerful entities will be enhanced humans aka
transhumans aka human-AI hybrids. Indeed, we're already there, in the sense
that we rely on machines so heavily, and in so many ways. Interfaces will
improve, with greater integration, of course.

I've been reading Hannu Rajaniemi lately. He has in some ways a trippier take
on this stuff than even Peter Watts does.

