
In a fatal crash, Uber’s autonomous car detected a person, but chose to not stop - consumer451
https://www.technologyreview.com/the-download/611094/in-a-fatal-crash-ubers-autonomous-car-detected-a-pedestrian-but-chose-to-not/
======
privong
Lots of discussion here already:
[https://news.ycombinator.com/item?id=17014807](https://news.ycombinator.com/item?id=17014807)

~~~
consumer451
Sorry, I looked but didn’t see it. Thanks.

------
natch
A while back, researchers at MIT designed surveys to probe what tradeoffs
humans would make when forced to choose between bad outcomes in a fatal
vehicle incident.

[https://www.technologyreview.com/s/542626/why-self-driving-c...](https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/)

This crash shows that the menu of ethical dilemmas posed at that time was
incomplete.

In addition to "should I veer left, avoiding the grandmother and grandfather
couple, but killing the pregnant mother" the survey questions should have also
included some questions like "should I brake hard to avoid killing the
homeless person detected with 40% probability, or should I assume the
detection is a false positive, in order to maintain a smooth ride of luxurious
comfort for the occupants?"

------
consumer451
Many years ago, I was coding the UI for one of the first touchscreen
check-in systems at an airline. The guy in the next cube was writing
weight-and-balance software for the actual flights. I realized that I could
not handle the pressure of thinking that a bug in my code could lead to a
crash, no matter how remote the chances actually were.

I'm sure some of you work on code where lives are at stake. How do you deal
with the possibility of truly fatal bugs?

