
Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction - walterbell
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236
======
essayist
FAA and NTSB have had to deal with this for decades. Aviation safety has
steadily improved, because they were, generally, forward-looking. Not so much
"who do we blame?" but "what do we do to avoid this in the future?"

~~~
collectivized
That approach is fine for transportation, but not for nukes, and AI-enhanced
robotics is closer to nukes in many ways, even though robotics in general
runs the gamut all the way from replicants and Skynet down to toaster ovens
and Juiceros.

It's okay if some toaster ovens burn down a few houses, and then get recalled
due to a defective design, right?

But if you have a massive flotilla of self-driving cars that go apeshit,
flipping over, slamming themselves into trees and catching fire, and suddenly
kill 10 million people overnight, around the world, due to an automatic hot
fix maliciously kspliced into running kernels by an advanced actor, targeting
systems in motion in particular, that's more than a little terrible. Note
that the actor could be a synthetic entity or not.

So, if you think about the trouble with autonomous systems being Turing
complete, engaging in tasks of arbitrary complexity, it doesn't seem that we
can expect a bottom to the worst pitfall imaginable.

AI is unlike aviation, in the sense that we can't trust a simple axiom like: "
_what goes up, must come down._ "

We need to be pessimistic. Sometimes fear is healthy. Caveman logic sometimes
am _good_. Fire _hot!_

------
nullc
When a self-driving car company tells you a human was in control at the time
of the accident: Ask how long had they been in control.

~~~
pbhjpbhj
You might also ask "how much control?". Isn't one of the things about
autonomous operation that systems are specifically designed to function at
times when humans might fail?

Was the ABS operating? Oh, so the human wasn't in complete control. How about
collision avoidance: did the car steer or de-throttle? Minimal levels of
automation would probably be: did the wipers operate, did the headlights dip?

If the human was in complete control then that probably indicates some system
failures that may have contributed.

------
matt4077
It's creepy how well this anticipates the fallout after the Ethiopian Air
crash, only 10 days later.

~~~
walterbell
Looks like it was posted in 2016 and updated in 2019.

------
mcguire
Isn't this usually termed a "scapegoat"?

~~~
FakeComments
Yes — and organizations have been firing (and, historically, even killing)
middle managers to “crumple” diffused executive misconduct for a _long_ time.

But “moral crumple zone” sounds sexy, while “the human in an AI driven system
is just a scapegoat” is probably something the average person on the street
has mused about, extrapolating from purely human behavior or in the context of
“taking over” for self-driving cars.

~~~
joe_the_user
Well, very technically it's a _euphemism for a scapegoat_ , which matters here
only because whenever humans organize to create scapegoats, having a
euphemism for the project is important.

\-- Which is to say I basically agree.

------
nanomonkey
Robin Sloan, the author of "Sourdough" and "Mr. Penumbra’s 24-Hour Bookstore"
apparently came up with the term Moral Crumple Zones, off the cuff at a
WeRobot conference.

~~~
gnat
His newsletter is A+: [https://desert.glass](https://desert.glass)

------
msandford
An excellent turn of phrase to identify a new phenomenon. Glad to see scholarly
work on the subject instead of just internet arguments.

------
brianpgordon
Is there a direct link to the PDF somewhere?

~~~
unimpressive
At the top of the page, orange button. Took me a minute to spot it too.

~~~
brianpgordon
Ah, strange, for a while I was getting a signup page. (Free, but with the
Elsevier logo at the bottom so, you know...) It seems to go straight to the
PDF now.

