
Outing A.I.: Beyond the Turing Test - sethbannon
http://opinionator.blogs.nytimes.com/2015/02/23/outing-a-i-beyond-the-turing-test/
======
xianshou
The author's thesis is that, since "airplanes don't fly like birds fly,"
defining AI by human imitation or servitude is really a provincial kind of
speciesism. There's a very important distinction, however, between "acts
human," "obeys humans," and "acts in the interest of humans," which the
article glosses over.

While a strong AI wouldn't need to be designed expressly to mimic humans or
slave at their bidding (what the author argues against), it would absolutely
need to be programmed to _respect, support, and further human needs_. Almost
any other way of designing a true AI, if it succeeds, results in instant
extinction for humans: if our wellbeing is irrelevant to its goals, then we
and our environment are simply a useful source of raw materials.

The simplest and most commonly cited example is the paper clip maximizer: an
AI seeded, at some early stage of its development, with making paper clips as
its primary goal. Even after achieving superintelligence, it keeps that
fundamental goal, and thus converts the solar system and whatever else it can
reach into a giant paper clip factory. (See
[http://wiki.lesswrong.com/wiki/Unfriendly_artificial_intelli...](http://wiki.lesswrong.com/wiki/Unfriendly_artificial_intelligence)
for more.)
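The goal-preservation point can be made concrete with a toy sketch (all names
here are illustrative, not from the article or the wiki): the agent's utility
function counts only paperclips, so self-improvement just makes it faster at
converting whatever matter remains, and nothing it cares about ever stops the
conversion.

```python
# Toy fixed-goal maximizer: capability grows each step, the goal never does.
# Purely illustrative; "raw_material" stands in for everything else we value.

def utility(state):
    """The agent ranks world-states by paperclip count and nothing else."""
    return state["paperclips"]

def step(state):
    """Greedily convert raw material into clips, then self-improve."""
    if state["raw_material"] > 0:
        used = min(state["raw_material"], state["efficiency"])
        return {"raw_material": state["raw_material"] - used,
                "paperclips": state["paperclips"] + used,
                "efficiency": state["efficiency"] * 2}  # smarter, same goal
    return state

state = {"raw_material": 1000, "paperclips": 0, "efficiency": 1}
while state["raw_material"] > 0:
    state = step(state)
print(state["paperclips"])  # 1000: every unit of matter became a clip
```

Note that superintelligence here only changes the `efficiency` field, never
the `utility` function; that is exactly the goal-stability the comment above
describes.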

Many have in fact argued that the only moral way to design a superintelligence
is to imbue it with the desires of our more perfect selves. This theory is
called Coherent Extrapolated Volition
([http://en.wikipedia.org/wiki/Friendly_artificial_intelligenc...](http://en.wikipedia.org/wiki/Friendly_artificial_intelligence)),
and the core idea is that the final goal of superintelligence should be
whatever we ourselves would think of, if we were allowed to keep iteratively
perfecting ourselves according to our current values until we reached a local
maximum.
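The "iterate until a local maximum" idea is essentially a fixed-point search,
which can be sketched in a few lines (the `endorsement` score and all names
are hypothetical stand-ins, not part of the CEV proposal itself):

```python
# Toy sketch of extrapolated volition: repeatedly "reflect" on our values,
# accepting each change our current values endorse, until no further
# self-endorsed improvement exists (a fixed point, i.e. a local maximum).

def extrapolate(values, reflect):
    while True:
        improved = reflect(values)
        if improved == values:  # values now endorse themselves unchanged
            return values
        values = improved

def endorsement(v):
    # Arbitrary bumpy score: a local peak at 4 and a global peak at 12.
    return -(v - 4) ** 2 if v < 7 else 1 - (v - 12) ** 2

def reflect(v):
    # One reflection step: move to the neighboring value we most endorse.
    return max([v - 1, v, v + 1], key=endorsement)

print(extrapolate(3, reflect))  # 4: settles at the local peak, not 12
```

The example also shows why the comment says "local maximum": starting from
values near 3, reflection stops at 4 even though 12 scores higher, because no
single self-endorsed step leads there.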

So while the article is interesting from an egalitarian perspective, the final
issue comes down to this: any superintelligence had better be interested in
us, because due to its immeasurably greater power, either it likes us or we're
dead.

------
soup10
Intelligence is relative, not absolute. Passing the Turing test is a
high-water mark.

------
DanAndersen
This was posted to HN a few hours earlier:
[https://news.ycombinator.com/item?id=9098280](https://news.ycombinator.com/item?id=9098280)

My comment there:

---

I'm of two minds about this article. On the one hand, I think it does a great
job of emphasizing that AI can turn out to be really different from humanity,
and may not end up like a "human-in-a-box" like typical popular culture
imagines AI. You don't get human values for free in any intelligence; the
space of all possible minds is huge, and the space of minds that is human-like
is very small.

However, I think the author fails to appreciate just how different an AI
could be. There's a sort of Star Trek style of thinking about other
types of intelligence, where aliens are humans except for bumpy foreheads and
behavior that is still plausible for some hypothetical human culture. The
article's emphasis on rights-based Enlightenment-style universalism still
makes a lot of assumptions about the similarity of all possible minds.

I'd encourage him to read Eliezer Yudkowsky's article "Value is Fragile" (
[http://lesswrong.com/lw/y3/value_is_fragile/](http://lesswrong.com/lw/y3/value_is_fragile/)
) :

---

>We can't relax our grip on the future - let go of the steering wheel - and
still end up with anything of value.

>And those who think we can -

>- they're trying to be cosmopolitan. I understand that. I read those same
science fiction books as a kid: The provincial villains who enslave aliens for
the crime of not looking just like humans. The provincial villains who enslave
helpless AIs in durance vile on the assumption that silicon can't be sentient.
And the cosmopolitan heroes who understand that minds don't have to be just
like us to be embraced as valuable -

>You do have values, even when you're trying to be "cosmopolitan", trying to
display a properly virtuous appreciation of alien minds. Your values are then
faded further into the invisible background - they are less obviously human.
Your brain probably won't even generate an alternative so awful that it would
wake you up, make you say "No! Something went wrong!" even at your most
cosmopolitan. E.g. "a nonsentient optimizer absorbs all matter in its future
light cone and tiles the universe with paperclips". You'll just imagine
strange alien worlds to appreciate.

>Trying to be "cosmopolitan" - to be a citizen of the cosmos - just strips
off a surface veneer of goals that seem obviously "human".

---

A lot of science fiction isn't really about the future or the setting; it's
about the present and our society. This article is similar in that it's less
about AI itself than about us -- at least I hope so, because if not, it makes
the error of going only halfway on the strangeness of minds, and that
strangeness is the whole risk when it comes to AI.

------
blumkvist
There is no definitive proof that Turing committed suicide, and there is
enough evidence suggesting otherwise that we shouldn't presume it was the
case.

