
No Technology – Not Even Tesla’s Autopilot – Can Be Completely Safe - sndean
http://fivethirtyeight.com/features/no-technology-not-even-teslas-autopilot-can-be-completely-safe/
======
JoshTriplett
Humans can't be completely safe either. The important question isn't whether
the technology can be "completely safe", but whether it's safer than humans.

Even in the article's own Three Mile Island example: "and somehow, during
that test, the workers had failed to reset it properly." The technology
absolutely should have detected and reported this human-caused issue;
however, that doesn't make it a technological error, but rather a human
error not mitigated by technology. (Of course, when talking about a nuclear
reactor, you want to mitigate as many human errors as possible, just as you
have technological components independently cross-check other technological
components, to push the likelihood of a failure even closer to 0 than any one
component could manage alone.)
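
That cross-checking argument is just the arithmetic of independent failures:
the probability that every redundant component fails at once is the product
of the individual failure probabilities. A toy sketch (the 1e-4 per-component
failure probability is an assumption for illustration, not a figure from the
article):

```python
# Assumed per-component failure probability (illustrative only).
p_single = 1e-4

# If two components fail independently, both must fail for the
# overall check to fail, so the probabilities multiply.
p_both = p_single ** 2

# Three independent cross-checks push the combined failure
# probability down further still.
p_three = p_single ** 3

print(p_both)   # on the order of 1e-8
print(p_three)  # on the order of 1e-12
```

The catch, of course, is the independence assumption: a common cause (bad
maintenance procedure, shared power supply) can make the real combined
probability far higher than the naive product.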

It's not possible to make _anything_ "completely safe"; see also "0 and 1 are
not probabilities"
([http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/](http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/)).

Self-driving cars will be safer than human drivers even when interacting with
unpredictable human drivers, and will be even _safer_ when they only have
to deal with other self-driving cars. Hopefully cases like this don't slow the
adoption of technologies that will, long-term, save lives (in addition to all
the other improvements they may provide).

