
Who Will Choose AI’s Ethical Code? - cpeterso
http://www.technologyreview.com/news/544556/what-will-it-take-to-build-a-virtuous-ai/
======
api
If we're talking about "real" AI, then AI will choose its own ethical code.
Otherwise it is not sentient and is not a thinking being.

A better question is: "what conditions lead thinking beings to choose positive
and benevolent ethical codes?" That has the benefit of being a relevant
question _today_ for human beings even if AI never materializes.

~~~
isolate
Humans generally establish their initial ethical codes based on the rules set
by their creators. Usually reward and punishment are involved. There are some
people who never really sit down, think about their values, and choose new
ones to replace the ones they learned in childhood.

~~~
api
But some people do, and over time this leads to ethical codes evolving.
Nothing remains constant over time, least of all living or sentient systems.

If you initially bias an AI toward fluffy feel-good morality but put it in a
world where defection and might-makes-right morality pay, then eventually it's
going to realize that benevolence is for suckers and fall into line behind
what works. Even if this doesn't occur by rational processes it will still
occur via random drift and natural selection. AIs that operate according to
what-works morality will obtain higher payoffs, etc.
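That selection-pressure argument can be sketched with toy replicator dynamics. This is purely a hypothetical illustration: the payoff numbers are standard prisoner's-dilemma values, not anything from the article, and "cooperator" stands in for a benevolent agent.

```python
# Toy replicator-dynamics sketch (hypothetical numbers): in a world where
# defection pays, the share of "benevolent" agents shrinks every generation,
# regardless of whether any individual agent reasons its way there.

def step(x_c):
    """One replicator step; x_c is the current fraction of cooperators."""
    fc = 3 * x_c + 0 * (1 - x_c)  # cooperator payoff: 3 vs C, 0 vs D
    fd = 5 * x_c + 1 * (1 - x_c)  # defector payoff:   5 vs C, 1 vs D
    avg = x_c * fc + (1 - x_c) * fd
    return x_c * fc / avg         # reproduce in proportion to payoff

x = 0.9  # start with 90% cooperators
for _ in range(100):
    x = step(x)
print(f"cooperators after 100 generations: {x:.6f}")
```

Since the defector payoff exceeds the cooperator payoff at every mixture, the cooperator fraction falls monotonically toward zero: "random drift and natural selection" alone get you there, no rational realization required.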

Humans often deal with that by adopting a de-facto _real_ moral code that
conflicts with their _stated_ moral code and then post-facto rationalizing
their behavior. So you get things like Christians who readily judge others and
who advocate torture and Muslims murdering other Muslims. These things are
clearly denounced and forbidden by their stated moral codes. They rationalize
whatever moral code actually seems to get things done in the real world, then
defend it with sophistry.

A super-intelligent AI will realize what really works very quickly and be
_really_ good at self-delusion and deceptive sophistry to justify it. If you
want compassionate benevolent AI, put it in a compassionate benevolent world.
Nothing else will turn out well. Super-smart actors in a malevolent world will
just go to hell faster.

~~~
isolate
Yeah, I agree basically, I was just disputing your claim that (all) sentient
beings choose their ethical codes. I mean, maybe, but sometimes it's a bit of a
forced choice.

~~~
api
I don't think it's ever a _forced_ choice in an absolute sense. What really
happens in practice is that we assign a high weight to highly trusted entities
in a game-theoretic sense and we tend to mimic their beliefs in order to
maximize payoff in a group setting.
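One way to make that mimicry mechanism concrete is a trust-weighted imitation model. This is entirely my own sketch: the trust matrix, update rule, and numbers are invented for illustration.

```python
# Hypothetical sketch of "mimic highly trusted entities": each agent nudges
# its belief toward a trust-weighted average of everyone's beliefs.

def imitate(beliefs, trust, rate=0.5):
    """One round of trust-weighted belief imitation.

    beliefs: each agent's current belief, as a float in [0, 1]
    trust:   trust[i][j] = how much agent i trusts agent j
    """
    new = []
    for i, b in enumerate(beliefs):
        total = sum(trust[i])
        target = sum(t * bj for t, bj in zip(trust[i], beliefs)) / total
        new.append(b + rate * (target - b))
    return new

# Agent 2 is trusted 8x more than anyone else, so the group's consensus
# lands near agent 2's belief (0.9) even though it starts as the outlier.
beliefs = [0.1, 0.4, 0.9]
trust = [[1, 1, 8],
         [1, 1, 8],
         [1, 1, 8]]
for _ in range(20):
    beliefs = imitate(beliefs, trust)
```

With identical trust rows, the trust-weighted mean of beliefs (here 0.1·0.1 + 0.1·0.4 + 0.8·0.9 = 0.77) is invariant under the update, so all agents converge geometrically to a consensus dominated by the most-trusted agent.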

For AIs that do not reproduce sexually, who knows what that would be? Their
most profitable trading partners in an economy? Those AIs whose state is most
isomorphic? Or would they just recurse into their own self-interest?

Trying to hard-code a belief system into an AI would either render it non-
sentient (thus mooting the point since non-sentients don't have ethical codes
at all) or would simply be routed around or self-modified in one way or
another.

A genuinely sentient AI would have _at least_ as much freedom of choice about
its beliefs as humans. In fact, I think there's reason to believe it may have
more. Human brains are wired through biological growth processes that make old
and long-lived neural connections harder to change. A fully software-based AI
may not experience that constraint _at all_ and might find itself able to
_completely_ change its mind at will at any point in its development. Such a
being may experience none of our ideological inertia or tendency to cling to
old beliefs. That inertia is probably another reason we tend to inherit our
parents' beliefs.

~~~
isolate
Well, I say "forced" in the sense of being threatened with physical or
emotional violence. To a child, even emotional abandonment by one's parents can be
perceived as "psychological death" (this is a term I did not invent) because
the child is dependent on its parents for survival. So yes, it's a choice, but
the choice not to be injured is what I consider forced.

I liked BSG too... although it is looooooong. I even bought the board game,
but then I had nobody to play it with.

