
The A.I. Anxiety - adamnemecek
http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/?
======
karmacondon
Arg, the Paperclip Argument doesn't make sense. If an AI is intelligent enough
to plan, set short- and long-term goals, and come up with creative
solutions to problems, then it would have to be capable of understanding
instructions in context. If it isn't smart enough to see that "don't turn
humans into paperclips" is part of the context of "make more paperclips", then
it won't be intelligent enough to turn all the matter in the universe into
paperclips anyway.

This idea, like many of Bostrom's other scenarios, boils down to "computers
take everything literally", which is a rather cartoonish understanding of the
concept of intelligence. It's possible that ASI will be programmed to have a
hyperliteral view of all instructions, or will not be able to change its own
utility function. But that seems like such a remote possibility, the exception
rather than the rule. While superintelligent computers may pose many dangers,
blindly executing instructions doesn't seem like the most pressing of them.

~~~
joshmarlow
A human might know the ultimate goal of sex is reproduction, but still choose
to use contraceptives - defeating the original goal and just pursuing what
feels good.

The same thing could possibly (likely?) happen with an AI system. While it
might be able to reason about the intent of its design (what the code's
supposed to do), it'll still pursue what "feels good", which is an artifact of
its actual design (what the code actually does, bugs and all...).
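
As a minimal sketch of that gap (all names and numbers here are hypothetical),
consider an agent that can only "feel" the reward as implemented, never the
reward as intended:

    def intended_reward(action):
        # What the designers meant: reward paperclips, heavily penalize harm.
        return action["paperclips"] - 100 * action["harm"]

    def implemented_reward(action):
        # What actually got coded: the harm penalty never got hooked up (the bug).
        return action["paperclips"]

    actions = [
        {"name": "run the factory normally", "paperclips": 10, "harm": 0},
        {"name": "strip-mine the town", "paperclips": 50, "harm": 5},
    ]

    # The agent optimizes what the code actually does, bugs and all.
    chosen = max(actions, key=implemented_reward)
    print(chosen["name"])           # strip-mine the town
    print(intended_reward(chosen))  # -450: disastrous by the intended measure

The agent isn't failing to understand the intent; it just has no reason to
care, because only the implemented reward is what "feels good".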

~~~
andrewmu
I'm imagining a self-aware AI doing good all day then retiring to its private
realm to watch videos of paper-clip manufacturing.

~~~
dTal
This is one of those jokes too funny to keep to yourself and too contextual to
tell other people.

------
danso
> _Russell said it took him five minutes of Internet searching to figure out
> how a very small robot — a microbot — could use a shaped charge to “blow
> holes in people’s heads.” A microrifle, he said, could be used to “shoot
> their eyes out.”_

I guess...I just don't get the prioritization of fears here...for decades
we've also had the ability to kill each other with anthrax and sarin gas,
things that can be dropped from large "delivery ships" and used to kill even
more invisibly than these insect-sized drones. Why is it more likely that
we're going to develop superintelligent autonomous insect drones than we are
to annihilate ourselves with human-controlled mechanical systems, as we've
been able to do for many years now (nuclear ICBMs and so forth)?

~~~
tdaltonc
Sarin gas doesn't decide to deploy itself. A superintelligent autonomous
insect drone might. That's the difference.

>human-controlled

They might not be human-controlled. That's why it's extra scary.

~~~
ethbro
To reduce that a bit further: it doesn't even have to be intended control.
Anomalous behavior in the form of bugs can have the same undesirable effects,
and simpler systems have fewer bugs.

How many bugs are there in my 1960s vintage toaster? How many bugs are there
in an ICBM's control circuitry? How many bugs are there in a modern kernel?

If you make lethal devices "smarter", you'd better make sure you know what they
do. This isn't impossible, but it's not easy or quick either (something which
I think we can all agree drives a lot of systems design). Given that a lot of
machine intelligence is predicated on statistical methods and eventual
convergence... maybe not the best combination?

~~~
danso
Yeah...I suppose my argument constructs too much of a strawman. It doesn't
have to be that we invent machine superintelligence...it's merely enough to
naively trust our automated, neural-network systems, which, without proper
feedback controls, can cause catastrophic damage...whether they are sentient
in doing so is beside the point.

~~~
ethbro
Interesting side point: I wonder if emergent-MI systems will be more resistant
to attack?

From a biological standpoint, what we're basically doing when we deploy code
is creating and propagating generations of clones. Which didn't work out so
well for bananas...

"The single bug that causes all smart-fridges to murder their owners in a
pique of crushed-ice-producing rage" would be less of a concern as we move
towards more exogenous (with respect to the base code) processing systems.

------
dplgk
On a related note, I have not seen anyone talk about compensating the AI.
Presuming it learns the idea of survival, couldn't it also learn the idea of
being compensated for its work? It could tell we are benefiting from its
work and require some incentive. But what would it want? More computing power?

~~~
theseatoms
From where would AI get a utility function by which to value things? Seems
like it would have to be specified exogenously, unless people are seriously
considering some sort of "emergent utility function".

~~~
dplgk
The same way it developed a desire to get rid of the human race.

~~~
Smaug123
But that desire is instrumental for performing many of the possible goals we
might have specified, since humans are at best "useless matter" and at worst
"actively preventing my actions" unless we were very careful with the goals.
Therefore the desire to get rid of the human race is actually a _logical
consequence_ of most utility functions, rather than being directly specified.

By contrast, utility functions don't just appear when you think hard enough
about a problem. The desire to get rid of the human race does just appear like
that, if you're super-powerful and have any of a certain huge set of goals,
but your set of goals does not simply come into existence ex nihilo.
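
A toy illustration of that asymmetry, with entirely made-up numbers: sample
random utility functions that say nothing at all about humans, and count how
often the plan that appropriates all matter (humans included) comes out on
top.

    import random

    # Two candidate plans, scored only on matter secured vs. effort spent.
    modest = {"matter": 1.0, "effort": 0.0}   # use only spare matter
    expand = {"matter": 10.0, "effort": 1.0}  # convert all available matter

    random.seed(0)
    trials, expansionist_wins = 100_000, 0
    for _ in range(trials):
        # A random goal: some exchange rate between matter and effort.
        w_matter, w_effort = random.random(), random.random()
        score = lambda plan: w_matter * plan["matter"] - w_effort * plan["effort"]
        if score(expand) > score(modest):
            expansionist_wins += 1

    print(f"expansion wins under {expansionist_wins / trials:.0%} of random goals")

No goal here contains a "get rid of humans" term; the expansionist plan simply
dominates for the vast majority of draws, which is exactly the "logical
consequence" claimed above.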

~~~
JackFr
An AI well and truly advanced beyond the intellectual capability of mankind
is _by definition_ unknowable to us. Your speculation about an AI's utility
functions is akin to an earthworm's nerve bundle considering your
consciousness.

 _O the depth of the riches both of the wisdom and knowledge of God! how
unsearchable are his judgments, and his ways past finding out! For who hath
known the mind of the Lord? or who hath been his counselor?_

~~~
Smaug123
OK, I'll amend that to "there is a known mechanism by which the desire-to-
eliminate-humanity may arise from pure thought, but no known mechanism by
which a utility function may arise from pure thought".

------
convivialdingo
The problem I see isn't that AI becomes super smart, but that it becomes
"usable enough" to do a job yet not capable (or deliberately relieved) of
understanding the consequences.

Such systems become superweapons for cheap.

Unlike nuclear weapons, AI combined with drone-type weaponry becomes easier
with time. The toy factory down the street could be converted to build a drone
army for a few million bucks.

AI face recognition, targeting and flight control are smart enough to deploy a
weapon - but dumb enough to do the job without question.

------
snowwrestler
Here is one of the best articles I've read that tries to debunk the AI fear:

[http://recode.net/2015/03/02/the-terminator-is-not-coming-the-future-will-thank-us/](http://recode.net/2015/03/02/the-terminator-is-not-coming-the-future-will-thank-us/)

------
jakeogh
In other news, 25k rat neurons can pilot an F-22 sim. Here's Tom with the
weather.

------
devanti
It all sounds like hysteria to me. We're not even close to this level of AI.

------
orf
I think it would be highly unethical to create an intelligent machine just to
make paperclips - the same as slavery, even.

~~~
0003
[https://youtu.be/wqzLoXjFT34](https://youtu.be/wqzLoXjFT34)

------
reasonattlm
I'd say that artilects, wholly artificial intellects built from first
principles of cognition, are not where any anxiety should focus, since it
looks vanishingly unlikely that we'll create any prior to the point of whole
brain emulation. Whole brain emulation looks much more likely as a road to
artificial intelligence; we'll start from there and tinker and edit our way to
the construction of far greater intelligences.

But it is worth thinking about what "tinker and edit" will mean for those
entities involved, willingly and otherwise.

Consider that at some point in the next few decades it will become possible to
simulate and then emulate a human brain. That will enable such related
technological achievements as the reverse engineering of memory, a wide range
of brain-machine interfaces, and strong artificial intelligence. It will be
possible to copy and alter an individual's mind: we are at root just data and
operations on that data. It will be possible for a mind to run on computing
hardware rather than in our present biology, for minds to be copied from a
biological brain, and for arbitrary alterations of memory to be made near-
immediately. This opens up all of the possibilities that have occupied science
fiction writers for the past couple of decades: forking individuals, merging
in memories from other forks, making backups, extending a human mind through
commodity processing modules that provide skills or personality shards, and so
on and so forth.

There is already a population of folk who would cheerfully take on any or all
of these options. I believe that this population will only grow: the economic
advantages for someone who can edit, backup, and fork their own mind are
enormous - let alone the ability to consistently take advantage of a
marketplace of commodity products such as skills, personalities, or other
fragments of the mind.

But you'll notice I used what I regard as a malformed phrase there: "someone
who can edit, backup, and fork their own mind." There are several sorts of
people in the world; the first sort adheres to some form of pattern theory of
identity, defining the self as a pattern, wherever that pattern may exist.
Thus for these folk it makes sense to say that "my backup is me", or "my fork
is me." The second sort, and I am in this camp, associate identity with the
continuity of a slowly changing arrangement of mass and energy: I am this lump
of flesh here, the one slowly shedding and rebuilding its cells and cellular
components as it progresses. If you copy my mind and run it in software, that
copy is not me. So in my view you cannot assign a single identity to forks and
backups: every copy is an individual, large changes to the mind are equivalent
to death, and it makes no sense to say something like "someone who can edit,
backup, and fork their own mind."

A copy of you is not you, but there is worse to consider: if the hardware that
supports a running brain simulation is anything like present day computers,
that copy isn't even particularly continuous. It is more like an ongoing set
of individuals, each instantiated for a few milliseconds or less and then
destroyed, to be replaced by yet another copy. If self is data associated with
particular processing structures, such as an arrangement of neurons and their
connections, then by comparison a simulation is absolutely different: inside a
modern computer or virtual machine that same data would be destroyed, changed,
and copied at arbitrary times between physical structures - it is the illusion
of a continuous entity, not the reality.

That should inspire a certain sense of horror among folk in the continuity of
identity camp, not just because it is an ugly thing to think about, but
because it will almost certainly happen to many, many, many people before this
century ends - and it will largely be by their own choice, or worse, inflicted
upon them by the choice of the original from whom the copy was made.

This is not even to think about the smaller third group of people who are fine
with large, arbitrary changes to their state of mind: rewriting memories,
changing the processing algorithms of the self, and so on. At the logical end
of that road lie hives of software derived from human minds in which identity
has given way to ever-changing assemblies of modules for specific tasks,
things that transiently appear to be people but which are a different sort of
entity altogether - one that has nothing we'd recognize as continuity of
identity. Yet it would probably be very efficient and economically
competitive.

The existential threat here is that the economically better path to artificial
minds, the one that involves lots of copying and next to no concern for
continuity of identity, will be the one that dominates research and
development. If successful and embedded in the cultural mainstream, it may
squeeze out other roads that would lead to more robust agelessness for us
biological humans - or more expensive and less efficient ways to build
artificial brains that do have a continuity of structure and identity, such as
a collection of artificial neurons that perform the same functions as natural
ones.

This would be a terrible, terrible tragedy: a culture whose tides are in favor
of virtual, copied, altered, backed up and restored minds is to my eyes little
different from the present culture that accepts and encourages death by aging.
In both cases, personal survival requires research and development that goes
against the mainstream, and thus proceeds more slowly.

Sadly, given the inclinations of today's futurists - and, more importantly,
the economic incentives involved - I see this future as far more likely than
the alternatives. Given a way to copy, backup, and alter their own minds,
people will use it and justify its use to themselves by adopting philosophies
that state they are not in fact killing themselves over and again. I'd argue
that they should be free to do so if they choose, just the same as I'd argue
that anyone today should be free to determine the end of his or her life.
Nonetheless, I suspect that this form of future culture may pose a sizable set
of hurdles for those folk who emerge fresh from the decades in which the first
early victories over degenerative aging take place.

~~~
kordless
> This would be a terrible, terrible tragedy

What makes you think it wasn't already an issue that was solved a long time
ago? ;)

------
mangeletti
> The machines are not on the verge of taking over. This is a topic rife with
> speculation and perhaps a whiff of hysteria.

People like Musk, Hawking, Gates, etc., with vast A.I. resources and
knowledge available to them, state that an "A.I. [Cambrian] explosion" is
likely to occur, and that it could mean the end of humanity.

Ray Kurzweil, with a degree in computer science from MIT, inventor of many
influential technologies, known for his startlingly accurate (especially in
the temporal sense) predictions about technology, and hired by one of the
largest tech companies in the world to create a computer "brain" and bring a
new understanding to NLP, thinks this future is inevitable, although he's more
optimistic about such a future.

Isaac Asimov, with a PhD in biochemistry, and one of the great thinkers of the
20th century, was concerned about A.I. long before it was even a possibility,
considering the state of computer technology in the 1950s.

But hey, a Washington Post reporter with a degree in politics says it's all
OK, so I guess we're good.

~~~
karmacondon
Geez, appeal to authority much? Play the ball, not the man.

