
Would an intelligent computer have a "right to life"? - rbanffy
http://www.atarimagazines.com/creative/v9n8/149_Would_an_intelligent_comp.php
======
tfb
I think it will take quite some time for humans to accept robots as entities
with rights. There's a lot of fear surrounding the idea of intelligent
machines, and most of the deeply religious people I know (family included)
consider robots/A.I. to be (somewhat) evil. We mostly have Hollywood to thank
for that.

Once the majority of the human population understands technology and its
ever-changing limits, the fear of and stigma toward artificial intelligence
comparable to our own will subside, and only then will sentient robots be
accepted.

It's human nature to fear the unknown. It's gotten us to where we are today.
But I think there comes a point in our timeline where we overcome all fear
with our ability to understand and predict. Ironically enough, this will
probably require the advancement of (and potentially cooperation with) A.I.

------
wukix
Rather than answer this right now, I would suggest that everyone go watch a
season or two of Battlestar Galactica, which probes this question deeply. The
Cylons (the "skin job" kind) were subjected to a severe kind of anti-machine
racism, even though they were basically indistinguishable from humans.

[http://www.amazon.com/gp/product/B002HR17ZG/ref=as_li_ss_tl?...](http://www.amazon.com/gp/product/B002HR17ZG/ref=as_li_ss_tl?ie=UTF8&tag=wukixcom-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B002HR17ZG)

~~~
nextparadigms
There will probably be at least 4 stages, assuming their intelligence is not
capped and keeps growing at least until it equals ours:

1) No rights. We can do whatever we want with them, as if they were our
property, and treat them as pure objects

2) Some rights. Think of it like animal rights, but we still own them

3) "Official" human-like rights. They are free, but there would be "racism"
against them at first

4) The exact same rights as a human, officially and unofficially.

Then things might start to change again once they become drastically more
intelligent than us; at that point it gets much harder to predict. At first
glance, we could say it will be bad for us, but I choose to believe the
optimistic side: the smarter they become, the more tolerant they will be,
too.

------
jarin
I think it's pretty likely that people will say a sufficiently complete,
high-level AI (if one is ever invented) would have a right to life. Actually,
there's a fantastic exploration of this from an AI's point of view in Life
Artificial by David A. Eubanks:

<http://lifeartificial.com/>

I think a far more interesting question is: will a human ever be put to death
for "killing" an artificial intelligence?

~~~
nextparadigms
Even if the robots themselves won't have many rights at first, humans will
try to get them more rights by electing politicians who will grant them. Why
would humans do that? Because I believe we will become emotionally attached
to them. We easily become emotionally attached to our pets, so it should be
even easier with a robot that is smarter than a pet. Heck, some people are
even somewhat emotionally attached to their iPhones.

The first step towards that emotional attachment will be naming them. I
remember reading a couple of years ago about the Roomba cleaning robot:
people started naming them, and when they sent them in for repair, they
demanded the same robot back, not a replacement.

[http://www.msnbc.msn.com/id/21102202/ns/technology_and_scien...](http://www.msnbc.msn.com/id/21102202/ns/technology_and_science-tech_and_gadgets/t/roombas-fill-emotional-vacuum-owners/)

[http://gizmodo.com/5483750/peoples-emotional-attachment-to-r...](http://gizmodo.com/5483750/peoples-emotional-attachment-to-roombas-bodes-well-for-inevitable-sex-robot-industry)

~~~
eipipuz
I kinda agree with you, but then there's the uncanny valley. I think slavery
ended more because slaves fought for their rights than because the 'masters'
were the force behind the movement. I've read that many 'masters' were
emotionally and even romantically involved, but they didn't fight the system
so much as cope with it.

~~~
nextparadigms
You raise an interesting point with slavery. I'm not sure it's such a good
analogy, but assuming it is, Abraham Lincoln did try to free the slaves.
There will be people, probably a lot of people, who will feel threatened by
them, just as there are now people afraid of robots taking their jobs (which
is another interesting debate), but things will still change gradually as
robots become smart enough, "lovable" enough, and human-like enough, and some
people will even want to marry them.

A few years ago I read a half-joking prediction that by 2050, Massachusetts
will be the first state to allow human-robot marriages.

There's already this:

<http://www.youtube.com/watch?v=9q4qwLknKag>

------
egypturnash
You know the dating site OKCupid? It matches people by their answers to user-
submitted questions.

One of about a dozen questions on there I've marked as "important to me" is
one on AI rights. And interestingly enough, I'd say the site's suggestions
have become markedly better since I did that; I think that question is a
barometer for a lot of things about one's morals and interests.

------
pica
Awesome! We haven't even settled that question for humans, and now comes the
next batch.

------
duncan_bayne
Yes, definitely. Unless it bore a passing resemblance to an Atari game, in
which case Atari and Apple would drag it outside and put a bullet in its head.

------
rbanffy
I'd like to add a second thought: what if the intelligence is undeniably
sophisticated _and_ very non-human?

~~~
PotatoEngineer
Isn't that the most likely result? There are a _lot_ of possible things we
could call intelligent, and only a few of them could be "like us".

(Granted, anyone trying to _make_ an AI will aim towards something like us,
but considering that we still haven't developed AI, I imagine the first
successful one will be more of a lucky accident-on-purpose than anything
strictly designed; even with humans making it, the first success will be
quite non-human.)

~~~
rbanffy
Yes, but how would we recognize it for what it is?

------
hastur
Isn't it obvious?

A truly intelligent and self-aware computer will be created by simply
simulating a human brain on a powerful computer. I can't see how that mind
would have any fewer rights than its biological counterparts.

So the answer is: yes, it'll have the exact same rights as we do. Morally,
that is, because the law might take time to catch up.

~~~
extension
If the simulation decides to copy itself, and allows the copy to run
independently, are there now two autonomous people?

If the copy is allowed to advance one nanosecond and is then deleted, leaving
the original running, has a murder been committed? What if the copy never runs
at all? What if the original is deleted instead of the copy?

What if the copy receives identical input, and thus has identical state as the
original, up to the point it is deleted? What if some aspects of the copy are
"optimized" by reusing the results of the original? What if the entire copy
just mirrors the state of the original, without recomputing anything? How can
we say exactly how many copies actually exist?
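
As a toy sketch of that ambiguity (Python; the MindState class here is
invented for illustration, not anything from real mind-simulation work),
even today's programs make "how many copies exist" a bookkeeping question:

    import copy

    class MindState:
        # A hypothetical stand-in for a simulated mind's complete state.
        def __init__(self, memories):
            self.memories = memories

    original = MindState(["first boot", "a sunrise"])

    mirror = original                    # no new bits: one state, two names
    duplicate = copy.deepcopy(original)  # new bits, identical content

    print(mirror is original)     # True  -- one copy or two?
    print(duplicate is original)  # False -- separate storage...
    print(duplicate.memories == original.memories)  # ...zero information difference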

What if the original simulation modifies the copy so that it wants to kill
itself, which it does, once it starts running? What if it's modified to just
not care whether it lives or dies? What if the copy is modified in various
other ways? Are some kinds of modifications ethical, while others are not?

Point being, this course of AI development does not spare us having to
completely rethink our models of ethics.

~~~
hastur
Very interesting questions; I'll be happy to try to answer them:

1) yes

2a) yes

2b) It never lived; it was a "frozen corpse", and since you can make copies
at will, just having a copy doesn't constitute a new life until it's run.

2c) If the original has been deleted at the same state as the copy is started,
i.e. there's no interruption from the subjective point of view of the copy,
then it's just a transfer.

3a) Since there's zero information difference, I'd say it is OK.

3b) They're two separate people with equal rights.

3c,d) No computing = no thinking = no life to speak of.

4a,b) The same as if you kidnapped someone from the street and convinced them
to kill themselves (say, by drugging them).

4c,d) Very interesting. On the one hand, in the case of many modifications
you can't really foresee the consequences, so all modifications should be
considered unethical, to prevent unintended suffering.

On the other hand, what is human parenting and upbringing if not shaping (or
"modifying") a not-yet-sovereign mind? And we have no problem with that. But
then, what if some psychotic parent brings up a kid with the implicit
conviction that he should kill himself upon reaching adulthood (i.e. legal
sovereignty)? We'd put them in jail for life. But what if a parent has no bad
intentions but is just a very crappy parent, and his kid grows up to be very
miserable and in his 20s commits suicide because he can't deal with life?
Well, we could pass moral judgment on the parent (if we somehow had the
knowledge, and thus the certainty, that it was a result of bad upbringing),
but we couldn't really convict them in a court of law.

So yeah, we have a serious split here between moral judgment and what's
practical. (Much like the pro- and anti-abortion debate.)

So I'd say playing with the brains of sentient beings is unethical in
general. Practically, though, there's no way to stop a "virtual" person from
running all kinds of crazy stuff on their private machines. Unless you want
to police the internals of all such brain-simulating computers, but then
you're conducting mind control of people, and that's totally unacceptable.

5) Agreed, but I think that if we engage with it now, we can reach some very
interesting conclusions that we might find useful even in today's world.

~~~
extension
_2b) It never lived; it was a "frozen corpse", and since you can make copies
at will, just having a copy doesn't constitute a new life until it's run._

How long does it have to run before it's alive? If the interval is short
enough then there will be no change in state whatsoever. So in what way does
the mind state have to change in order for it to have lived? What if that
change is so simple that you can calculate it in your head? Have you created a
new life just by thinking about what will happen next in the simulation? By
writing it down?

_3a) Since there's zero information difference, I'd say it is OK._

So you would have no objection to being killed if there was an identical copy
of you in existence somewhere? Why does that matter? Why do you care more
about this copy of you than you do about anybody else? Why should a stranger
care that there is at least one copy of you?

_4c,d) Very interesting. On the one hand, in the case of many modifications
you can't really foresee the consequences, so all modifications should be
considered unethical, to prevent unintended suffering._

But you are modifying your own mind all the time, by forming memories and
learning skills. To some degree, you choose what you think and experience.
Imagine that you have ultimate control over every facet of your mind. You live
in a computer, so there is no "natural" mental life. You _must_ choose exactly
how your mind will progress.

If you make a frozen copy of yourself, it hasn't diverged yet, so you can
modify it however you like, since it's _you_.

And if new, unique people are to be created at all, someone will have to
choose how their mind works. There is no longer a "natural" reproductive
process to decide this. How can choices be made in the new person's interests
when it is those very interests that are being chosen?

~~~
hastur
2b) I would assume that the machine the AI runs on is digital (i.e.
discrete), so yes, a single cycle (a tick of the processor) could very well
mean change. And since you've got to draw the line somewhere, I'd say that's
enough.

3a) By killing me and starting an identical copy of me somewhere else,
you're creating a break that infringes upon my identity (or same-ness),
because the different location alone would produce a very strong
discontinuity in inputs.

Moreover, a third party exercising control over any of my copies (and
especially the one that is _running_) is a violation of my sovereignty.
[That's also why a stranger should care about my copies.]

4c,d) That's precisely it: I am doing it! I can do with myself whatever I
want. And yes, I do imagine having control over every facet of my mind. I
actually dream about it. :)

Now, when I manipulate a copy of me, that's a little different, because as
I've stated earlier, a copy spun off to run independently is sovereign from
its "original". That's why manipulating one's own copy should morally be
considered equivalent to manipulating a third party, assuming that copy will
be allowed to run separately.

To your last point, I agree, that's the most delicate part - creating new
beings. As far as copies are concerned, my immediate instinct is that only
identical copies should be considered ethical, since any modifications can
have unpredictable consequences. But I'm not sure this should be enforced by
the AI community / data center, because I value personal sovereignty and
habeas corpus virtualis above all else. ;)

I won't speak of creating completely new, original beings, because that topic
deserves a ton of thought and a philosophical treatise of its own. But
theoretically, you could try to mimic natural processes of child brain
development. Of course, as you've pointed out, someone has to _choose_ the
parameters of that developmental algorithm, just as a search engine is never
really objective, because _people_ wrote it and made some choices in the
process. But that last example also shows we can have some trust in such
choices for practical reasons.

On a side note: I would highly recommend the book "Diaspora" by Greg Egan.
The first chapter (Orphanogenesis) describes the development of a virtual
mind and is, to me, among the best works of science fiction of all time.

------
majmun
If the machine is capable of surviving on its own or in groups, then it has
a right to live. It doesn't matter how "intelligent" or "self-aware" it is.

~~~
geoffschmidt
Does malaria have a right to live? Are we wrong to try to eradicate it?

~~~
majmun
Well, it's hard to tell without predicting the long-term future. I mean, if
we eradicate all malaria, something else may happen as a consequence that
we're not aware of now but that is bad for us.

~~~
kd0amg
That's a rather different standard of right and wrong than you seem to support
in your previous post.

