
Robo-Ethicists Want to Revamp Asimov’s 3 Laws - naish
http://www.wired.com/gadgetlab/2009/07/robo-ethics/
======
growt
This is pure bullshit! Robots and AIs don't have consciousness (and probably
never will). As long as we haven't created a robot/AI with a real conscious
mind, we don't need to worry about 'punishing' robots or giving them ethical
rules. It won't work! I think people who write something like this, or
theorize about this topic, have time to kill or other issues, since they're
solving a non-existent problem.

~~~
endtime
>AIs don't have consciousness (and probably never will)

"Computers can't beat men at chess (and probably never will)"

You're making a very strong (and in my opinion, unjustifiable) claim. You
really shouldn't state things like that without backing them up.

That said, it is very silly to suggest that robots today need a code of
ethics, because today they are certainly not conscious.

~~~
growt
Well, the first half of my statement is obviously true (today's computers are
not self-aware). The second half contains the word "probably". My point was
that it's silly to talk about robot ethics or anything like that, since it
just distracts from the real problem: the people building and using these
robots/machines need ethics!

------
jknupp
_"If you build artificial intelligence but don’t think about its moral sense
or create a conscious sense that feels regret for doing something wrong, then
technically it is a psychopath," says Josh Hall, a scientist who wrote the
book Beyond AI: Creating the Conscience of a Machine._

What!? How does this possibly work? Will the AI gun say "you still didn't fix
that null pointer dereference that caused me to go haywire and kill people,
but at least this time when I do it I'll feel bad?" This is one of the most
ridiculous quotes I've ever seen.

~~~
Periodic
Dammit! I should have built that moral and conscious sense into my Roomba! Is
it a sociopath for not feeling regret when it repeatedly runs into the dog
while she's trying to sleep?

For that matter, I should probably build this into my car too, so that it can
feel regret if the anti-lock brakes malfunction in a crash.

These people really have no idea what robots are like these days. They're just
complex tools. We set them in motion with rules we have described for them.
They don't have a consciousness, they don't have morals, they don't have
regret, and they don't even have a psyche to be psychopathic.

------
dpark
> Already iRobot’s Roomba robotic vacuum cleaner and Scooba floor cleaner are
> a part of more than 3 million American households. The next generation
> robots will be more sophisticated and are expected to provide services such
> as nursing, security, housework and education.

Riiiiight. Next-generation automobiles are also expected to fly.

------
jacoblyles
Can anyone come up with a situation involving robots that isn't already
adequately covered by current tort law?

~~~
rue
I hope this is tongue-in-cheek :) Just in case it is not, they are not
referring to that kind of law.

~~~
jacoblyles
They cite examples where a robot malfunctioned and hurt someone. I fail to see
how that is any different from other machine malfunctions, except
semantically.

------
mgenzel
The article has little to do with Asimov's laws. Asimov's laws address "three
fundamental Rules of Robotics, the three rules that are built most deeply into
a robot's positronic brain", i.e., an engineering construct. The article
mostly addresses a legal issue: "Accordingly, robo-ethicists want to develop a
set of guidelines that could outline how to punish a robot, decide who
regulates them and even create a 'legal machine language' that could help
police the next generation of intelligent automated devices."

I also love how the article ends with "Morality is impossible to write in
formal terms" and yet expects robots complex enough to warrant the article in
the first place.

------
philwelch
I posted this before but it also seems appropriate here. It's David Langford's
Three Laws for military robots:

1\. A robot will not harm authorized Government personnel but will terminate
intruders with extreme prejudice.

2\. A robot will obey the orders of authorized personnel except where such
orders conflict with the Third Law.

3\. A robot will guard its own existence with lethal antipersonnel weaponry,
because a robot is bloody expensive.

Langford is an SF writer. You might know him from his "basilisk" stories--if
not, go read them sometime.

------
leecho0
wow, this is seriously a bunch of bull. It doesn't say anything interesting
about the problem at all; it's just a pile of speculation and improbable
comparisons.

Like, "punishing a robot"...? No AI scheme I've seen has a sense of
self-worth. You can give it a negative reward, but the AI doesn't care much
about its _current_ reward; it only performs actions to maximize the
_expected_ reward. That means you can tell it in advance to avoid doing bad
things, but punishing it after the fact would just leave it confused about why
its reward signal changed, and it would then go right back to trying to
maximize its rewards.

I, for one, am all for the 3 laws of robotics, but they probably won't work
for a much simpler reason: the AI can't identify the terms. How would an AI
recognize when a human is being harmed? Would it show the drowning patient a
picture of distorted letters and ask what the word is? Or would it jump in to
save posters from being damaged? And that's the easy part... how would you
even define harm? These questions need to be answered before anyone tries to
figure out the ethics guidelines for robots to follow.

You would seriously learn more about robot ethics from I, Robot than from this
poorly written article.

------
dantheman
Asimov's 3 Laws were there to provide constraints for a story to exist --
they are the equivalent of a dead body found in a locked room.

~~~
roundsquare
Yeah, I find it hard to believe that people are considering elements of
fiction as a starting point for (what they perceive to be, anyway) a large
ethical question.

------
BRadmin
I'm far removed from the AI industry, but even I was under the impression that
Asimov's 3 Laws had been considered outdated for decades.

~~~
aplusbi
I too am far removed from the AI industry, but I was always under the
impression that Asimov's 3 laws have never been considered by the AI
community.

Aside from the absurdity of implementing the laws, the whole point of the 3
laws in Asimov's stories was that they didn't work.

~~~
Periodic
Asimov's stories also pointed out that when we really do create conscious AI,
we can't expect it to obey simple laws as if it were a tool. It will be just
the same as we are, and we'll either have to treat it as such or destroy it.

Of course, being closer to the AI community I know that we aren't anywhere
near having to treat AI as conscious creatures.

Personally, I think we first have to realize that people are just complex
biological machines.

------
ars
Is a dog a biological robot?

If I program a robot to act like a dog, and express pain when kicked - does
the robot actually feel pain? Does the biological dog actually feel pain?

~~~
321abc
How do you know dogs (or other humans for that matter) actually feel pain?

------
Periodic
Robots--for the foreseeable future--don't have a psyche, and thus cannot
become psychopathic.

