
New study finds it’s harder to turn off a robot when it’s begging for its life - pseudolus
https://www.theverge.com/2018/8/2/17642868/robots-turn-off-beg-not-to-empathy-media-equation
======
taylodl
From the article: _" In other words: get used to turning off machines, even if
they don’t appear to like it. They’re silicon and electricity, not flesh and
blood."_ So? I don't think that's the right way of looking at this scenario.
What do we mean when we say it's silicon and electricity? How does that factor
into the decision as to whether we "kill" it? To me the answer comes down to
whether the device is conscious and has agency. Consciousness changes
everything, for it's at that point that morals apply; it's at that point that
the robot is no longer "just a machine." We can argue about the possibility of
a robot having consciousness, as Penrose and Hofstadter have done, but should
there come a time when a robot has consciousness, I don't think we can argue
that the rules of engagement haven't changed.

------
pjctvnil
"Oh god no...please don't make me do it"

[https://www.youtube.com/watch?v=aPfci3bmcDE](https://www.youtube.com/watch?v=aPfci3bmcDE)

