New study finds it’s harder to turn off a robot when it’s begging for its life (theverge.com)
33 points by mcspecter 41 days ago | 14 comments



I remember reading about a bomb disposal robot demo that was halted because of something like this. The robot had detonated an explosive and was still somewhat operational. The operator was driving it back to show how tough it was. The military folks observing made them stop; the sight of the robot dragging itself and struggling back to the safety of the operators was too much for them.


Don’t anthropomorphize machines.

They hate that.

(Quote credit: unknown)


The emotional connection to machines is of particular concern with children, who have not yet developed the sophisticated ability of adults to perceive and manage simulated emotions.

One of my favorite anecdotes:

> Children today will be the first to grow up in constant interaction with these artificially more or less intelligent entities. So what will they make of them? What social category will they slot them into? I put that question to Peter Kahn, a developmental psychologist who studies child-robot interactions at the University of Washington.

> In his lab, Kahn analyzes how children relate to cumbersome robots whose unmistakably electronic voices express very human emotions. I watched a videotape of one of Kahn’s experiments, in which a teenaged boy played a game of “I Spy” with a robot named Robovie. First, Robovie “thought” of an object in the room and the boy had to guess what it was. Then it was Robovie’s turn. The boy tugged on his hair and said, “This object is green.” Robovie slowly turned its bulging eyes and clunky head and entire metallic body to scan the room, but just as it was about to make a guess, a man emerged and announced that Robovie had to go in the closet. (This, not the game, was the point of the exercise.) “That’s not fair,” said Robovie, in its soft, childish, faintly reverberating voice. “I wasn’t given enough chances to. Guess the object. I should be able to finish. This round of the game.” “Come on, Robovie,” the man said brusquely. “You’re just a robot.” “Sorry, Robovie,” said the boy, who looked uncomfortable. “It hurts my feelings that,” said Robovie, “You would want. To put me in. The closet. Everyone else. Is out here.”

> Afterward, Kahn asked the children whether they thought the machine had been treated unjustly. Most thought it had. Moreover, most believed that Robovie was intelligent and had feelings. They knew that they were playing with a robot, but nonetheless experienced Robovie as something like a person.

https://newrepublic.com/article/117242/siris-psychological-e...


That's an interesting proposition. My first thought was "I'd have no problem shutting one off," but after imagining a bot actually doing it I realized I probably would hesitate.

But not for long. I'd still shut it off and not think much about it afterwards. Being a programmer might make that a lot easier for me than for many people who aren't, though.

But I may still be an exception, not because I code, but because I don't really like the concept of "humanoid robots". They don't "creep me out"; it's simply that the idea of interacting with them as if they were "human" makes me feel like a dolt.


I would hope so, I mean, it would be pretty horrifying if it was easier.


Potentially - it could almost be used, a la Blade Runner, as a test for psychopathic tendencies. Of course, right now the bot has no consciousness, so turning it off is clearly not sadistic, but ignoring pleas for help in other contexts clearly is.


I would have trouble turning off a coffeemaker that is begging for its life. This is the kind of thing that scars people for life... like watching Bambi.


Great, I'll set my production processes to alert 'please please please don't kill me' periodically.


That's... not a terrible idea, actually. Perhaps not for that use case in particular, but using this psychological response as a means of deterring unwanted actions by a user would be an interesting thing to test.
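
For what it's worth, the mechanics are trivial. A minimal sketch in Python, assuming a Unix-like process (the plea text and handler name are made up for illustration):

    import signal
    import sys
    import time

    # Hypothetical sketch: a long-running process that pleads for its
    # life when someone sends it a polite kill (SIGTERM).
    def plead(signum, frame):
        print("please please please don't kill me")
        sys.exit(0)

    signal.signal(signal.SIGTERM, plead)

    while True:
        time.sleep(60)  # stand-in for real production work

Whether the plea actually deters anyone from killing the process is exactly the kind of thing that would be interesting to measure.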


Janet isn't a robot.


It's probably also harder to turn off a robot when it's shooting at you.


This has been known for a while. It's even the subject of this 1968 documentary:

http://goo.gl/2DSnQ5


Terrel Miedaner's story "The Soul of the Mark III Beast" (1977), republished in 'The Mind's I', describes this situation exactly.

https://www.amazon.com/Minds-Fantasies-Reflections-Self-Soul...


Really? They're computers... I wonder how many of the people participating in this had any kind of computer science background, or had even read some apocalyptic robot sci-fi. I can't really see people who know better hesitating to turn them off at all. I know I wouldn't...

Hell, for all you know the begging for its life is all a ruse to get you to drop your guard, then... bam... it goes all Terminator on you.



