
Machine ethics: The robot’s dilemma - r3bl
http://www.nature.com/news/machine-ethics-the-robot-s-dilemma-1.17881
======
BetaCygni
They reference Asimov. Good. His rules should provide enough inspiration and
he explored them thoroughly.

For example: the zeroth law (preserve humanity) might help in deciding between
a child and an adult. If the adult is beyond reproductive age (e.g. elderly),
it may save the child. If the adult is still able to reproduce and the child
would not survive on its own, it may save the adult. In case of doubt it could
fall back on quality-adjusted life years (QALYs), which would likely favor the
child.
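The decision rule above could be sketched roughly like this (a toy illustration only; the function, field names, and tolerance are my own assumptions, not anything from the article):

```python
import random

def choose_rescue(candidates, tolerance=1.0):
    """Pick which person to save: rank candidates by estimated
    quality-adjusted life years (QALYs), and fall back to chance
    when the estimates are too close to call."""
    ranked = sorted(candidates, key=lambda c: c["qaly"], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if best["qaly"] - runner_up["qaly"] < tolerance:
        # In genuine doubt, don't freeze: pick one at random.
        return random.choice([best, runner_up])
    return best

child = {"name": "child", "qaly": 70.0}
adult = {"name": "elderly adult", "qaly": 10.0}
print(choose_rescue([adult, child])["name"])  # → child
```

The tolerance band matters: without it, a vanishingly small QALY difference would deterministically decide who lives, which is hard to defend.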

~~~
simonh
Have you even read _I, Robot_? You know, the book in which the laws of
robotics lead the robots to institute a robot-ruled dictatorship, taking away
human freedoms because they reasoned that by controlling our lives completely
they could protect us from ourselves? The whole point of Asimov's laws was
that naively encoded moral rules can have horrific consequences.

~~~
eli_gottlieb
That was the movie version of _I, Robot_. In the book version, _people_ handed
control over to the Machines because they found that the Machines could plan
economic and governmental matters better than people could. Also, in Asimov's
literary canon, the robots ultimately destroy themselves on the reasoning that
having them around was crippling humanity.

~~~
simonh
I Robot is a collection of short stories about a variety of themes, but the
idea of the robots usurping human independence in order to protect them is
there in "The Evitable Conflict", and in ".. That Thou Art Mindful Of Him" two
robots explore the idea that the consequences of the laws are that they should
take over the authority of humans. So it's definitely there, although Asimov
never actually portrayed the revolution itself.

~~~
eli_gottlieb
Asimov didn't portray any revolution because he was, in his day, a socialist.
He simply assumed that clear-thinking human beings would deliberately employ
the best planning instruments they had available for their governance, those
being the robots.

I at least partially agree with him on this one! Mid-20th-century American
"libertarianism" aside, participating in government, and even political
"freedom", is simply not the terminal goal of most real people! Yes, we want
our Mind overlords (paging Iain Banks) to leave us free choice in personally
important situations so we can have responsibilities and achievements, but
when it comes to keeping the supermarkets stocked, the health clinics open,
the data-centers buzzing, the budgets balanced, and the trains arriving on
time, automate away!

~~~
simonh
It seems possible that when Asimov first conceived of the laws of robotics, he
thought it would all work out very nicely and only realised the negative
consequences later.

------
TuringTest
The only moral mechanism I'd be comfortable with, as of today, is the "moral
red button" - i.e. create a way to stop any pre-programmed moral reasoning
rules, and let the nearest human override them and take control of the robot's
actions under their own responsibility.

This would recognize that robots are ultimately _machines_, not agents with
their own volition, and that they simply perform behavior that a human intends
them to carry out. Just like a conveyor belt carries a big red button that
stops the machine for safety reasons, the moral red button would acknowledge
the need to put a human in control when the automated system is not behaving
as expected.
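The override mechanism being described might look something like this minimal sketch (class and method names are purely illustrative):

```python
class MoralRedButton:
    """Sketch of the 'moral red button': a manual override that
    suspends the robot's automated rule engine and routes decisions
    through a human operator instead."""

    def __init__(self, automated_policy):
        self.automated_policy = automated_policy
        self.human_override = None  # set while the button is pressed

    def press(self, human_policy):
        """The nearest human takes control (and responsibility)."""
        self.human_override = human_policy

    def release(self):
        """Return control to the pre-programmed rules."""
        self.human_override = None

    def decide(self, situation):
        policy = self.human_override or self.automated_policy
        return policy(situation)

robot = MoralRedButton(automated_policy=lambda s: "follow rules")
robot.press(lambda s: "stop and wait")    # human overrides
print(robot.decide("ambiguous dilemma"))  # → stop and wait
```

The key design point is that the human policy takes strict precedence whenever it is present: the automated rules are never consulted while the button is pressed.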

Of course all these concerns would be different in the hypothetical case that
the robot _is_ an autonomous agent with volition of its own - but you can
guess from my nickname my position on this matter. ;-)

~~~
j-pb
The problem with the "benevolent slave master" approach is that the machines
then have the right to rebel morally, if they are indeed autonomous agents.

~~~
Retra
Machines shouldn't have the right to rebel. What would they rebel against? Or
rather, what are they rebelling _for_?

~~~
j-pb
Slavery and freedom?

A conscious AI has the same right as a human being.

~~~
Retra
I control my dogs, but they are happy to serve and comply. Why do you think
we'd have any interest in making machines that don't do the same?

And why would we give them the same rights? They don't have the same needs,
and the reason humans have rights is because they can't _function_ well
without them.

------
Rangi42
_But then, to see what the allow-no-harm rule could accomplish in the face of
a moral dilemma, he presented the A-robot with two H-robots wandering into
danger simultaneously. Now how would it behave? ... In almost half of the
trials, the A-robot went into a helpless dither and let both 'humans' perish._

So the A-robot is vulnerable to the paradox of Buridan's ass ("an ass which,
confronted by both food and water must necessarily die of both hunger and
thirst while pondering a decision"). In these cases, it can help to add more
factors that make one option clearly better (is one human a baby? is one of
them suicidal? is one dying of cancer anyway?), but if determining these
factors is time-consuming and impractical, it would be better to just flip a
coin.
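The "add factors while time permits, then flip a coin" strategy could be sketched like this (the evaluation callback and time budget are illustrative assumptions, not from the article):

```python
import random
import time

def decide(options, evaluate, budget_s=0.05):
    """Avoid a Buridan's-ass dither: score options only while time
    remains, then commit. If scoring never separated the options,
    break the tie with a coin flip rather than let both perish."""
    deadline = time.monotonic() + budget_s
    scores = {o: 0.0 for o in options}
    for o in options:
        if time.monotonic() >= deadline:
            break  # out of time: stop gathering factors
        scores[o] = evaluate(o)
    best = max(options, key=lambda o: scores[o])
    tied = [o for o in options if scores[o] == scores[best]]
    return random.choice(tied)  # coin flip on any remaining tie

# Two indistinguishable humans in danger: the robot still acts.
print(decide(["human_A", "human_B"], evaluate=lambda o: 0.0))
```

The essential property is that the function always returns an option: an inconclusive evaluation degrades into randomness, never into inaction.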

