

Outing AI: Beyond the Turing Test - dreamweapon
http://opinionator.blogs.nytimes.com/2015/02/23/outing-a-i-beyond-the-turing-test

======
DanAndersen
(more-or-less reposting the comment I had posted on the article, apologies if
that's against HN rules)

I'm of two minds about this article. On the one hand, I think it does a great
job of emphasizing that AI can turn out to be really different from humanity,
and may not end up like a "human-in-a-box" like typical popular culture
imagines AI. You don't get human values for free in any intelligence; the
space of all possible minds is huge, and the subset of human-like minds
is very small.

However, I think the author fails to appreciate just how different an
AI could be. I think there's a sort of Star Trek style of thinking about other
types of intelligence, where aliens are humans except for bumpy foreheads and
behavior that is still plausible for some hypothetical human culture. The
article's emphasis on rights-based Enlightenment-style universalism still
makes a lot of assumptions about the similarity of all possible minds.

I'd encourage him to read Eliezer Yudkowsky's article "Value is Fragile" (
[http://lesswrong.com/lw/y3/value_is_fragile/](http://lesswrong.com/lw/y3/value_is_fragile/)
) :

---

>We can't relax our grip on the future - let go of the steering wheel - and
still end up with anything of value.

>And those who think we can -

>- they're trying to be cosmopolitan. I understand that. I read those same
science fiction books as a kid: The provincial villains who enslave aliens for
the crime of not looking just like humans. The provincial villains who enslave
helpless AIs in durance vile on the assumption that silicon can't be sentient.
And the cosmopolitan heroes who understand that minds don't have to be just
like us to be embraced as valuable -

>You do have values, even when you're trying to be "cosmopolitan", trying to
display a properly virtuous appreciation of alien minds. Your values are then
faded further into the invisible background - they are less obviously human.
Your brain probably won't even generate an alternative so awful that it would
wake you up, make you say "No! Something went wrong!" even at your most
cosmopolitan. E.g. "a nonsentient optimizer absorbs all matter in its future
light cone and tiles the universe with paperclips". You'll just imagine
strange alien worlds to appreciate.

>Trying to be "cosmopolitan" - to be a citizen of the cosmos - just strips
off a surface veneer of goals that seem obviously "human".

---

A lot of science fiction isn't really about the future or its setting; it's
about the present and our own society. This article is similar in that it's
less about AI itself than about us -- at least I hope so, because if not, it
makes the error of going only halfway on the strangeness of possible minds,
which is the whole risk when it comes to AI.

