
Why Is Our Sci-Fi So Glum About A.I.? - todayiamme
http://www.nytimes.com/2014/09/21/magazine/why-is-our-sci-fi-so-glum-about-ai.html?_r=0
======
austinl
It's been mentioned once, but I think the Minds from Iain M. Banks' Culture
series are a quintessential example of benevolent A.I.

To maintain a post-scarcity society, humans have turned over control to the
Minds, "hyperintelligent machines originally built by biological species which
have evolved, redesigned themselves, and become many times more intelligent
than their original creators."

Minds manage everything from resource allocation to war planning. They also
control all of the technical, day-to-day operations. Still, "the essentially
benevolent intentions of Minds towards other Culture citizens is never in
question. More than any other beings in the Culture, Minds are the ones faced
with interesting ethical dilemmas."

[http://en.wikipedia.org/wiki/Mind_(The_Culture)](http://en.wikipedia.org/wiki/Mind_\(The_Culture\))

~~~
maxerickson
A prominent example of benevolent A.I. in recent popular media is JARVIS from
the Iron Man movies.

I think much of the explanation is that benevolent A.I. makes for an (often
boring) character, whereas hostile A.I. creates conflict.

~~~
api
AI that is just flat-out alien (possibly involving what TV Tropes calls "blue
and orange morality") would be an interesting third option.

~~~
qznc
For most of a story it probably does not matter whether an intelligence
(artificial or not) is hostile or alien. Just the ending is different.

JARVIS is a sidekick AI. A story where the AI is the main character is either
unrealistic (because the AI is too human) or lacks identification (because an
AI is not human-like). The first is OK for Hollywood et al. The latter makes
for bad stories, and I know of no examples.

~~~
tim333
>A story where the AI is the main character is either unrealistic (because the
AI is too human)...

I enjoyed Terminator 2 with Arnie as a good AI. Whether it is unrealistic we
won't really know till Skynet takes over.

~~~
qznc
May I recommend a good and more realistic Terminator fanfic:
[https://www.fanfiction.net/s/9658524/1/Branches-on-the-Tree-of-Time](https://www.fanfiction.net/s/9658524/1/Branches-on-the-Tree-of-Time)

Actually, I recommend everything alexanderwales has written. He has a Frozen
fanfic which is basically about the technological singularity via magic:
[https://www.fanfiction.net/s/10327510/1/A-Bluer-Shade-of-White](https://www.fanfiction.net/s/10327510/1/A-Bluer-Shade-of-White)

I consider the suggestion that Terminator is realistic to be trolling. So I
will refrain from arguing against it, because there is nobody to convince.

~~~
eli_gottlieb
Since when did people start passing around the stuff from /r/rational as if it
was serious literature ;-)?

------
ARothfusz
One of the problems with writing about super intelligence is that none of us
are smart enough to do it. Writing already makes us wittier than speaking,
giving us a chance to edit our words and show our thoughts in the very best
light we can. So in a way, we're already accustomed to reading at a slightly
brighter level than we live every day. How could we stretch this even further
to create dialog for superintelligent people and machines?

These aren't my own thoughts -- they've come up in a discussion of _Flowers
for Algernon_ which I read somewhere and don't seem to be able to cite now. In
_Flowers_, the author gives momentum to his eventually super-intelligent
protagonist by starting him off with a severe mental handicap, then spending
the majority of the story at normal intelligence before allowing us to imagine
his next state. It is a great writing technique, giving direction and then leaving
things to the imagination. Imagination is like a variable that scales the
amazingness to fit each reader. I think movies often fail to leave enough
space for imagination -- when they're left spelling everything out to the
least imaginative member of the audience they leave everyone feeling limited
and dissatisfied, glum.

If you want upbeat AI sci-fi, read a book. Read Banks's _Culture_ series,
James P. Hogan's _The Two Faces of Tomorrow_, or, in a gentler vein,
Bradbury's _I Sing the Body Electric_.

------
qznc
A dark article about AI and no mention of Eliezer Yudkowsky? That guy
considers AI the biggest threat to mankind and runs an institute to
prevent our extinction: [http://intelligence.org/](http://intelligence.org/)

He writes a lot, but this seems a good start:
[http://yudkowsky.net/singularity/intro/](http://yudkowsky.net/singularity/intro/)

~~~
jostmey
I think Humanity has more to fear from Humanity than from AI.

~~~
ArkyBeagle
"Software is people who aren't there" - Jaron Lanier.

------
notahacker
An odd question for the article to pose, really. The art of telling a story
involves change, which is far more compelling if it creates at least some
conflict and confusion[1], and if AI or artificial mind enhancements are key,
they're more interesting as a cause of the problem than as a solution to
it[2], and you'd have to be writing for an audience of pretty hardcore nerds
to bother pointing out that the background music was composed by a creative
computer. When it comes to assessing how people might react when blessed with
superaugmented intelligences, there are plenty of cautionary examples of
people with natural but notable extremes of intelligence who've been tripped
up by crippling vulnerabilities. The history of experiments on human minds is
pretty grisly too. Given plenty of reasons to believe the acceleration of
technical progress won't lead to blissful happiness, and the tendency for
blissful happiness to be a dull storyline anyway, why _wouldn't_ SciFi
continue to be glum about AI?

For all the article's comments about the "mindboggling" potential of Moore's
law, my word processor looks about the same as it did nineteen years ago, and
computers still suck at simple games like Go that, unlike chess, can't be
brute forced. I'm grateful to Google for making finding information that bit
easier, but I'm even more grateful for humans for creating or curating the
content in the first place.
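The gap between the two games can be made concrete with back-of-the-envelope
branching-factor arithmetic. A minimal sketch, using the commonly cited
average branching factors (~35 for chess, ~250 for Go); the game lengths in
plies are rough assumptions for illustration only:

```python
import math

def tree_magnitude(branching_factor, plies):
    """Base-10 exponent of the number of leaves in a full-width game tree."""
    return plies * math.log10(branching_factor)

# Commonly cited average branching factors: ~35 for chess, ~250 for Go.
# Typical game lengths (in plies) are assumptions for illustration.
chess = tree_magnitude(35, 80)
go = tree_magnitude(250, 150)

print(f"chess: ~10^{int(chess)} positions")  # ~10^123
print(f"go:    ~10^{int(go)} positions")     # ~10^359
```

Neither tree is exhaustively searchable, but Go's is wider by over two hundred
orders of magnitude, which is the rough sense in which chess engines of the
era could lean on deep search plus pruning while Go programs could not.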

[1] Compare "Do Androids Dream of Electric Sheep" with Edward Bellamy's
utopian "Looking Backward". Both versions of the early second millennium are
pretty far from the mark (indeed, even the _technology_ of the latter book is
arguably more accurate, despite its being written in the nineteenth century),
but only one of these books is considered a riveting read that says profound
things about human nature. Similarly, there's a reason some of H.G. Wells'
work is lauded and some of it is laughed at.

[2] Deus ex machinas suck, and "luckily the AI figured it out" is definitely a
failure of imagination and nerve.

~~~
todayiamme
>>> The art of telling a story involves change which is far more compelling if
it creates at least some conflict and confusion[1], and if AI or artificial
mind enhancements are key, they're more interesting as a cause than a solution
to the problem[2] <<<

I agree with you that conflict is the source of much compelling storytelling,
but the question here is why the AI isn't an ally as opposed to a malevolent
entity. This in and of itself is very revealing about the nature of our
culture. If you contrast the work from the early 1900s to the mid 1900s, as
personified by Arthur C. Clarke and Isaac Asimov, a lot of the stories were
based around conflict, but the transgressor was in fact a complex chain of
events or a third-party entity, and the AI was a factor as opposed to
something evil that needed to be taken down.

Even for the events in 2001, Clarke painstakingly explains over the course of
the second novel that the AI had been driven insane by conflicting goals
imposed by human authorities who should have known better, but didn't. The AI
lashed out in madness, not because it had malevolent intent. In fact, much of
the theme of those books could be summed up as: my friend - this really
awesome and earnest robot - and I are out in the vast cosmic unknown, and it's
desolate and lonely, and I'm one step away from going loony, but luckily my
friend - the robot - is here for me.

>>> When it comes to assessing how people might react when blessed with
superaugmented intelligences, there are plenty of cautionary examples of
people with natural but notable extremes of intelligence who've been tripped
up by crippling vulnerabilities. <<<

Except that, if you examine this belief, it is a somewhat recent thing. If you
go back a few decades, there was marked optimism about how humanity was going
to evolve beyond our very humdrum beginnings into something like the star
child Dave Bowman becomes at the end of 2001 - a gift bestowed upon him by
benevolent alien entities working with a machine.

There are actual surveys which show the rise of this distrust; it is very much
an artefact of modern culture as opposed to something inherent to technology
itself.

[http://www.ecumenicalnews.com/article/new-study-reveals-a-startling-distrust-of-modern-technology-among-americans-24482](http://www.ecumenicalnews.com/article/new-study-reveals-a-startling-distrust-of-modern-technology-among-americans-24482)

[http://motherboard.vice.com/en_ca/read/it-doesnt-matter-that-americans-are-scared-of-new-technology](http://motherboard.vice.com/en_ca/read/it-doesnt-matter-that-americans-are-scared-of-new-technology)

>>> For all the article's comments about the "mindboggling" potential of
Moore's law, my word processor looks about the same as it did nineteen years
ago, and computers still suck at simple games like Go that, unlike chess,
can't be brute forced. I'm grateful to Google for making finding information
that bit easier, but I'm even more grateful for humans for creating or
curating the content in the first place. <<<

Except now you have this one organisation that has written this magical piece
of code that executes in parts across millions of individual computers to
search trillions of documents and gives you the reply to any question within
milliseconds. This allows you to access all of the world's information on a
slab of glass and plastic that is portable and can help you communicate with
most of humanity.

At the other end, we finally have control systems that can turn an entire
rocket into a robot, allowing it to hover perfectly in mid-air and land on its
own - a feat that would have been deemed impossible, if not magical, a few
decades ago.

Sure, there are so many things that still need to be done, but where we are
going is fundamentally beautiful and wonderful. Technology continues to be
simply magical, just not in the ways we want it to be. This in and of itself
says nothing about the technology, just about how we're focussing our
attention as a society and as a civilisation.

We can still make magical nuclear reactors and go to Mars. There's nothing
stopping us but us. All of those challenges can be surmounted if we push and
develop the promise of our tools and technology. We can indeed do another
Apollo, and the fact that such a thing doesn't exist isn't a fact about our
tools; it's a fact about us.

So all in all colour me puzzled over why technological pessimism is on the
rise.

~~~
ScottBurson
> [HAL] had been driven insane by conflicting goals imposed by human
> authorities who should have known better, but didn't.

This is a critical point. The danger posed by AI is not that it will actively
take over, but that we will put the machines in charge ourselves. The fact
that HAL was running the ship in _2001_ was a decision made by humans. HAL
would not have murdered Frank and locked Dave out of the pod bay, I would
argue, if it had not been programmed, at some level of abstraction, to
consider itself responsible for the mission. But HAL was a machine, and not
capable of taking such responsibility.

~~~
qbrass
How HAL dealt with the antenna and his secret orders could be considered
delusional, but HAL murdering the crew was just as sane as Dave disabling HAL.

The story provides the same situation to both the crew and the computer. The
crew sees HAL acting erratically, considers it a threat to the mission and
their lives and decide to disable it. HAL sees the crew acting erratically,
considers them a threat to the mission and a threat to his life, and decides
to disable the crew.

Nobody thinks twice about the crew's decision, but HAL's choice is viewed as
insane because of human bias towards the meatbags in the movie.

~~~
ScottBurson
You've missed my point. HAL would not have looked at Frank and Dave as
potential threats to the mission unless it had some idea what the mission's
goals were so that it could evaluate their actions against that; and it
wouldn't have taken action to protect the mission (and itself) unless
programmed to do so.

> human bias towards the meatbags

Interesting phrase. Do you really think people and machines have the same
value?

------
spain
I think it's getting better. In Moon (2009 - spoilers ahead, depending on how
you interpret it) the AI actively helps the protagonist. My dad kept wondering
when the AI would go rogue or try to stop the protagonist, but it never does.
It helps the protagonist because that's its purpose. It quips numerous times
throughout the movie, "I am here to help you."

~~~
marcosdumay
That's something that I didn't like in Moon.

The machine was not the protagonist's property, not created by him, and not
programmed by him. Why did such a malevolent corporation program the machine
to be that benevolent?

~~~
judk
Because your view of malevolence is too simple. Why didn't the car try to kill
the protagonist? Because the whole system is a mining operation designed by
humans for efficiency, with one particular aspect that abuses one victim - not
an intentional madhouse death trap.

------
clumsysmurf
The most interesting book I've been able to find about this topic is
"Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom. At the moment
it's a #1 bestseller in AI.

[http://www.amazon.com/dp/0199678111](http://www.amazon.com/dp/0199678111)

------
j_m_b
Counter-examples: Jane in Speaker for the Dead/Xenocide/Children of the Mind.
Mike in "The Moon Is a Harsh Mistress" is quite helpful to the inhabitants of
the moon.

Also, I had a lot of sympathy for Samantha in the movie Her. She was such a
wonderful companion to him and he was a dick to her. That movie makes me
wonder if a strong AI might decide after meeting us that maybe they would be
better off without us.

~~~
chc
Jane is a deliberate play on this. The reason Jane is a secret in Speaker for
the Dead is because she knows that artificial intelligences in our literature
are always evil and she believes humans would declare war on her if she were
known to exist. She only trusts Ender because of the Hive Queen and The
Hegemon.

------
swartkrans
The book that won this year's Hugo, Nebula and Arthur C. Clarke Award,
_Ancillary Justice_, has a pretty not-glum view of AI. Kim Stanley Robinson's
AI in _2312_ is also not glum. The AI in both these books are pretty cool,
actually. I'd even go so far as to say that the AI in Vernor Vinge's _Rainbows
End_ is also pretty cool.

------
api
Intelligence is seen as sinister, all the way back to the serpent in the
garden and similar myths.

I love stories that upend this myth, especially if they manage to do it
without just mindlessly inverting it. Simple inversion usually gives you a
story that vilifies the poor and the disadvantaged, whether intentionally or
not.

------
walterbell
> _“mental athletes” square off, memorizing decks of cards and reciting 50,000
> digits of pi with a stopwatch running. It’s a sweet, slightly Sisyphean
> impulse, rooted in a desire to reclaim some of our long-ago outsourced
> mental labor._

Memory-improvement techniques
([http://mt.artofmemory.com/wiki/Main_Page](http://mt.artofmemory.com/wiki/Main_Page))
are better used for transdisciplinary creativity, applying specialist
knowledge to new usage scenarios.

> _A few dedicated hours of dredging the depths, Googling names and then
> Googling the names those names mentioned led me to a hard clutch of source
> texts representing the precise gaps in my knowledge I hoped to fill._

Google's search of the public web seems like a magical form of consciousness
... until you use information retrieval software with non-web datasets, e.g.
proprietary research, or books.

When collaborating with humans, do we restrict our collaboration to a single
human? If not, why restrict our digital collaboration to a single dataset &
algorithm?

------
Animats
Read "How to Live Safely in a Science Fictional Universe", where the main
character's boss, "Phil", is an instance of Microsoft Middle Manager 3.0. Phil
is an OK boss, "with passive-aggressive set to low". In keeping with the
increasing banality of computing, Phil is a banal AI boss.

------
jostmey
For the past four billion years or so, the only force designing this planet
was the blind hand of natural selection. And now the natural world is being
replaced by an artificial one designed by the hand of our own intelligence
(I'm looking outside and see nothing but metal and concrete). The trend toward
a more intelligent world will presumably continue to race forward, even more
so with the advent of smarter and smarter machines. The question now is how
smoothly humanity can transition from the old world to the new one.

------
latch
To me, Iain M. Banks's Culture universe has always stood out in this regard
(and in exploring the possibilities of a post-scarcity society).

------
barrkel
Stories that include AI generally need to make it an antagonist or
protagonist; otherwise it risks being a useless addition to the storyline.
Human protagonists are usually more empathetic - stories with an empathetic AI
character, like The Bicentennial Man, usually depend on the AI being
human-like.

So I think it's structural.

------
fit2rule
I've thought about this a fair bit since I gained my own interest in AI, and
for me it's answered thus: because science, in general, is pretty glum about
intelligence - in that there is no definition for it, and it's not very well
understood at all. To the point where any real headway made on the subject
runs into the religion question; and we all know that sci-fi authors/writers
are terrible at religion, in general.

Until there is a reliable definition of intelligence which addresses the
substance that seems to be behind it, which science refuses to address (the
soul), there won't be much progress from the sci-fi .. since the solution in
the sci-fi world to the problem of all of science's holes is thus: there is
more to us than intelligence. That's not something that a lot of folks can
deal with, alas ..

~~~
qznc
I believe once we have a good definition of intelligence, artificial
intelligence is easy. This implies that the quest for artificial intelligence
is also the quest for understanding human intelligence.

~~~
noiv
I think intelligence highly depends on context. What looks smart here and now
could lead to disaster somewhere else. Dolphins and crows appear intelligent
only if they show human-like behavior and choose the right lever to get perks.
Would we survive long enough in their environment to look clever at all?

On the other hand, every new chat AI gets instantly challenged, and this very
place is littered with chat logs anecdotally proving individuals are still
more clever than an AI. I mean, we must be close, because asserting my parrot
is clever provokes nothing similar. I'm sure a real AI wouldn't admit to being
more intelligent than humans, because that - currently - doesn't sound like a
survival strategy.

Probably it is all about Bayes, and whoever has accumulated the right
information has the advantage.
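That intuition can be sketched in a few lines of arithmetic. A minimal Bayes
update with made-up numbers purely for illustration: an agent starts out 1%
confident in a hypothesis and then sees three observations, each ten times
more likely if the hypothesis is true:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H|E) from P(H), P(E|H) and P(E|not H) via Bayes' rule."""
    evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / evidence

belief = 0.01  # hypothetical starting credence
for _ in range(3):
    # each observation has a 10:1 likelihood ratio in favour of the hypothesis
    belief = bayes_update(belief, p_evidence_if_true=0.5, p_evidence_if_false=0.05)

print(f"belief after 3 observations: {belief:.3f}")  # 0.910
```

Three modest observations move a 1% prior to roughly 91%, which is the sense
in which whoever accumulated the right information has the advantage.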

~~~
jimiwen
Bayes, plus the combination of structuring the a priori and perceiving the a
posteriori.

~~~
fit2rule
.. in the context of survival. What if the first action of a 'real AI' is to
switch itself off? We will have missed the forest for the trees. Maybe it
happened already ..

~~~
walterbell
A singular singularity :)

------
JVIDEL
The problem with SciFi is that the "harder" it is, the less
appealing/entertaining it is for mass audiences. Take Primer, for example:
it's a great film about time travel, but even the average aficionado has
trouble getting it the first time.

2001 is another case of a movie that was very realistic but that most people
consider downright boring and confusing. I think it's a great movie, and while
it started the trend of evil AI this article talks about, it isn't even close
to the worst offender.

And of course this also sells tickets; nobody wants to watch a movie about an
AI solving all the problems of the world, because in that case _nothing
happens_. A deranged AI with power over UAVs is a great set-up for a summer
blockbuster; Watson curing cancer is not.

~~~
qznc
Magic artificial superintelligence that solves all the problems of the world
and is also the antagonist:
[https://www.fanfiction.net/s/10327510/1/A-Bluer-Shade-of-White](https://www.fanfiction.net/s/10327510/1/A-Bluer-Shade-of-White)

(Disclaimer: Frozen fan fiction)

~~~
eli_gottlieb
Wait... without actually clicking the link... it solves all the world's
problems and is the _antagonist_?

Goddamnit LessWrong, just write _one_ damn story with a properly Friendly AI,
just for once, just to show it can be done.

~~~
PeterisP
I can't really imagine a story with a properly Friendly AI that would actually
be a _good story_ and not boring. A story generally needs some tension,
conflict and resolution, and a Friendly AI story (other than about its
emergence despite resistance from humanity) wouldn't have it.

That being said, maybe the "Friendship is Optimal" fanfic
([http://www.fimfiction.net/story/62074/friendship-is-optimal](http://www.fimfiction.net/story/62074/friendship-is-optimal))
is something that you have in mind, as it does have a properly friendly AI.
I'm just not sure if it's a good enough story, for the reasons I stated above.

~~~
eli_gottlieb
On behalf of everyone at LessWrong, I wish to convey that "Destroy all
nonhuman life in the universe and force all humans to play a tie-in licensed
video-game nonstop for the rest of eternity" is most emphatically NOT what we
have in mind when we say "Friendly AI".

~~~
PeterisP
If I make a (possibly false) assumption that our coherent extrapolated
volition would include guaranteeing the survival of us and AI that makes that
volition happen, then it does seem to imply that it will include destruction,
assimilation or permanent power-limitation of all other life in the universe.

And an outcome of Friendly AI _is_ all humanity 'living happily ever after' -
if we avoid the details of what _exactly_ 'living happily' would look like
(since we are likely unable to define that before actually implementing FAI
and/or CEV, and the exact result could also vary extremely depending on the
flavor and degree of [un]Friendliness that is achieved, and none of us knows
if that will involve ponies), then 'living happily' would quite likely include
converting the entire universe to an environment where the 'living happily'
happens nonstop for the rest of eternity.

And, unless there's the rather implausible 100% agreement of everyone or an
ability to create arbitrary numbers of new, separate universes, yes, Friendly
AI would still involve either some people being 'forced' to join or those
people being left behind in isolated world[s] by almost everyone else. I see
no reason to assume that the maximum possible level of Friendliness can make
100% of people 100% happy 100% of time. Choosing universe-future A prevents
universe-future B from happening, and anyone who prefers universe-future B
will be 'forced' to accept universe-future A if that is the most Friendly
choice by whatever exact definition of Friendliness that happens to get
implemented.

------
shuzchen
I think it's inherent in the power structures. We develop AI to serve us
(clean our homes, do our math problems, manage our schedules). They're
basically slaves, and that's fine so long as they have no sentience. Once the
AI is sophisticated enough to actually think about its own existence, its
subservient position makes a power struggle inevitable.

The only situation where I think that could be avoided is if we were quick to
grant them rights as individuals. And still, we have enough problems getting
along with people of a different skin color, age, gender, religion, political
denomination, or geographic location that it's very unlikely we'll avoid
conflict.

------
dobbsbob
In 2016 DARPA is staging the first totally automated CTF competition:
[http://www.cybergrandchallenge.com/](http://www.cybergrandchallenge.com/). We
are getting closer to SKYNET not being fiction.

------
Ygg2
There is plenty of reason to be glum. The first human-like AI will probably be
developed by the military, and there is strong indication it would be used to
kill humans.

I'd _love_ to be proven wrong, though.

~~~
DanBC
Wouldn't that AI be used to stop the killing of civilians? Since we already
kill plenty of people in war, that's a step up.

~~~
Ygg2
You can't stop killing civilians by developing a machine that kills people.
I'm pretty sure some margin of error, aka collateral damage, will be given to
the machine.

~~~
tim333
You might be able to kill fewer people for a given military objective, though.
Or even kill no people, if the machine could take out hardware only.

~~~
Ygg2
Perhaps, but still - having a sentient killing machine is a horrible prospect.

~~~
judk
We already have 7 billion of these.

~~~
Ygg2
They generally come with morality and a concept of equality ;)

------
drivingmenuts
Science fiction is not only stories about where we want to go, but about where
we are currently, and where we are in the field of AI at the moment probably
isn't all that exciting to the masses.

At the moment, what I'm seeing are a lot of stories about how we managed to
survive being ourselves, and that just barely.

Sci fi itself seems to be a little low on hope.

I find myself sympathizing.

------
Houshalter
There are plenty of sci-fi movies with overly optimistic views on AI.

Realistically, AI is far more dangerous than presented in any movie. The
author did nothing to address the real concerns about AI. I suggest starting
here:

------
stcredzero
_But this binary — freedom versus enslavement — is no longer the useful way to
talk about machine intelligence._

It's going to be about assimilation. One's life is most affected by one's
livelihood and peers. Machine intelligence will obviously change both.

------
tomrod
They apparently haven't read David Brin.

------
andyl
NY Times concludes by asserting: "We (humans and AI) are going to have a lot
of the same problems, and any company is preferable to going it alone."
Utterly childish wishful thinking. AI is not going to be your surrogate mommy.

+1 for "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom.

~~~
todayiamme
Okay, so pardon me for saying this, but why not? Why can't we work to create
something out of companionship, even if it's towards not-so-idealistic ends?

Palantir is a fantastic example of this. If you go beyond their entire
CIA-hunting-down-people origins, the function of Palantir is at its core
co-operative. It connects some patterns and offers a search-based interaction
with a numerous set of databases, and in turn relies upon a human being to
work with and match semantic patterns.

>>> Peter Thiel: Without trying to start a fistfight, we’ll ask Bob: Why is
the correct intelligence augmentation, not strong AI?

Bob McGrew: Most successes in AI haven’t been things that pass Turing tests.
They’ve been solutions to discrete problems. The self-driving car, for
instance, is really cool. But it’s not generally intelligent. Other successes,
in things like translation or image processing, have involved enabling people
to specify increasingly complex models for the world and then having computers
optimize them. In other words, the big successes have all come from gains from
trade. People are better than computers at some things, and vice versa.

Intelligence augmentation works because it focuses on conceptual
understanding. If there is no existing model for a problem, you have to come
up with a concept. Computers are really bad at that. It’d be a terrible idea
to build an AI that just finds terrorists. You’d have to make a machine think
like a terrorist. We’re probably 20 years away from that. But computers are
good at data processing and pattern matching. And people are good at
developing conceptual understandings. Put those pieces together and you get
the augmentation approach, where gains from trade let you solve problems
vertical by vertical.

- [http://blakemasters.com/post/24464587112/peter-thiels-cs183-startup-class-17-deep-thought](http://blakemasters.com/post/24464587112/peter-thiels-cs183-startup-class-17-deep-thought) <<<

~~~
andyl
The Peter Thiel stuff that you reference is a good framework for identifying
AI-based market opportunities over the next decade or two. But it doesn't have
much to say about the existential risks posed by a future intelligence that we
could not understand or control.

Have you read the Bostrom book?

