This is pure bullshit!
Robots and AIs don't have consciousness (and probably never will). And as long as we haven't created a robot or AI with a real conscious mind, we don't need to worry about 'punishing' robots or giving them ethical rules. It won't work!
I think people who write something like this, or theorize around the topic, have time to kill or other issues, since they're solving a non-existent problem.
Well, the first half of my statement is obviously true (today's computers are not self-aware). The second half contains the word "probably".
My point was that it is silly to talk about robot ethics or something like that, since it just distracts from the real problem: the men building and using these robots/machines need ethics!
... and if we ever create an artificial being that is self-aware, we would have to solve many other issues before thinking about proper punishment. For example: how do we justify expecting this self-aware, intelligent being to clean our rooms or take out our trash? Wouldn't that be a new era of slavery?
I think implying a hard-wired "want" means it's not true consciousness, or at least no better than making future humans who want to be garbage men (no disrespect intended).
"If you build artificial intelligence but don’t think about its moral sense or create a conscious sense that feels regret for doing something wrong, then technically it is a psychopath," says Josh Hall, a scientist who wrote the book Beyond AI: Creating the Conscience of a Machine.
What!? How does this possibly work? Will the AI gun say "you still didn't fix that null pointer dereference that caused me to go haywire and kill people, but at least this time when I do it I'll feel bad?" This is one of the most ridiculous quotes I've ever seen.
Dammit! I should have built that moral and conscious sense into my Roomba! Is it a sociopath for not feeling regret when it repeatedly runs into the dog while she is trying to sleep?
For that matter, I should probably build this into my car, so that it can feel regret if the anti-lock brakes malfunction in a crash.
These people really have no idea what robots are like these days. They're just complex tools: we set them in motion with rules we've written for them. They don't have consciousness, they don't have morals, they don't have regret, and they don't even have a psyche to be psychopathic.
I think you're conflating logic bugs with programming bugs, so to speak. I can see why Hall would say that a real AI that did bad things through its conscious decision procedures should feel remorse or be considered psychopathic. But if he's talking about "AI" today, then yes, he's clueless.
> Already iRobot’s Roomba robotic vacuum cleaner and Scooba floor cleaner are a part of more than 3 million American households. The next generation robots will be more sophisticated and are expected to provide services such as nursing, security, housework and education.
Riiiiight. Next-generation automobiles are also expected to fly.
They cite examples where a robot malfunctioned and hurt someone. I fail to see how that is any different from other machine malfunctions, except semantically.
The article has little to do with Asimov's laws. Asimov's laws address "three fundamental Rules of Robotics, the three rules that are built most deeply into a robot's positronic brain", i.e., an engineering construct. The article mostly addresses a legal issue: "Accordingly, robo-ethicists want to develop a set of guidelines that could outline how to punish a robot, decide who regulates them and even create a 'legal machine language' that could help police the next generation of intelligent automated devices."
I also love how the article ends with "Morality is impossible to write in formal terms", and yet it expects robots complex enough to warrant the article in the first place.
Wow, this is seriously a bunch of bull. It doesn't say anything interesting about the problem at all, and just makes a bunch of speculative claims and improbable comparisons.
Like, "punishing a robot" ...? No AI scheme I've seen ever has a sense of self worth. You can give it a negative reward, but the AI doesn't care much for its _current_ reward, it only performs actions to maximize the expected reward. Which means you can tell it to avoid doing these bad things, but punishing it after the fact would just leave it confused about why its reward system changed, and then go about trying to maximize its rewards again.
I, for one, am all for the three laws of robotics, but they probably won't work for a much simpler reason -- the robot can't identify the terms. How would an AI recognize that a human is in harm's way? Would it show a drowning person a picture of distorted letters and ask what the word is? Or would it jump in to save posters from water damage? And that's the easy part... how would you even define harm? These questions need to be answered before anyone tries to write ethical guidelines for robots to follow.
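To see why identifying the terms is the hard part, here's what a literal reading of the First Law might look like as code (everything here, including the predicate names, is hypothetical): the control flow is trivial, and every line that matters is an unsolved perception or definition problem.

```python
# Hypothetical sketch of what "Law 1" would require. The loop is the easy part;
# the predicates it depends on are the open problems.
def first_law_check(scene):
    for entity in scene:                 # assumes a complete, correct scene description
        if is_human(entity) and is_coming_to_harm(entity):
            return plan_rescue(entity)   # intervening safely is harder still
    return None

def is_human(entity):
    raise NotImplementedError("Robust, general human recognition is not solved.")

def is_coming_to_harm(entity):
    raise NotImplementedError("'Harm' has no agreed formal definition, let alone a detector.")

def plan_rescue(entity):
    raise NotImplementedError("Acting without causing further harm depends on the two predicates above.")
```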
You would seriously learn more about robot ethics from I, Robot than from this poorly written article.
Yeah, I find it hard to believe that people are treating elements of fiction as a starting point for what they perceive, anyway, to be a large ethical question.
Asimov's stories also pointed out that when we really do create conscious AIs, we can't expect them to obey simple laws as if they were tools. They will be just the same as we are, and we'll either have to treat them as such or destroy them.
Of course, being closer to the AI community, I know that we aren't anywhere near having to treat AIs as conscious creatures.
Personally, I think we first have to realize that people are just complex biological machines.