
Why We Should Adopt Driverless Cars That Kill People - sonabinu
https://medium.com/@lux_capital/why-we-should-adopt-driverless-cars-that-kill-people-9284f325ced0#.1nzg12205
======
cornellwright
>> Regulators test new products until they feel certain that people will not
be put at risk. Inventions from elevators to aspirin were subject to fierce
scrutiny before they were broadly adopted. We can’t apply the same level of
scrutiny to driverless cars.

This article is clearly written by someone who has never worked on a safety-
critical system. There is a method for establishing risk and making risk-
benefit tradeoffs, not just "feeling certain". If we waited until we felt
certain, many more people would be killed.

If the author is arguing about public scrutiny, then I generally agree with
the argument, but existing safety engineering practices will work for
driverless cars.

Source: I work on hardware and software for surgical devices (among other
things).

~~~
terravion
I think he's arguing for virtual rather than real-world testing to verify
extremely low-probability behavior. At $0.55 a mile, it can be hard to test
out to five standard deviations, but if the question is whether the software
correctly handles these events, it is really just a test script.

~~~
AstralStorm
5 standard deviations? Who cares about handling lots of easy situations.

Just create a big old test suite, procedurally generated. It would make for a
nice game and driver training tool too. Verify against average and expert
drivers to set the grading. Additionally, gather statistics about the
prevalence of bad driving, which will let you check whether the driverless
system is safe enough using plain old cost-benefit analysis. Moreover, include
assisted driving systems (emergency takeover) in the evaluation.

No need to require perfection; failing gracefully is also acceptable. For
example, in a weird situation, the car could slow down and alert the driver.
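A minimal sketch of such a procedurally generated suite, graded against a human baseline (the scenario parameters, policies, and thresholds below are all made up for illustration, not from any real system):

```python
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    """One procedurally generated driving situation (parameters invented)."""
    visibility_m: float
    road_friction: float
    pedestrian_crossing: bool

def generate_scenarios(n, seed=0):
    """Deterministically generate n scenarios from a seed."""
    rng = random.Random(seed)
    return [
        Scenario(
            visibility_m=rng.uniform(10, 300),
            road_friction=rng.uniform(0.2, 1.0),
            pedestrian_crossing=rng.random() < 0.1,
        )
        for _ in range(n)
    ]

def grade(policy, scenarios):
    """Fraction of scenarios handled without failure; a policy returns True
    when the situation is handled (including a graceful stop)."""
    return sum(policy(s) for s in scenarios) / len(scenarios)

def autonomous(s):
    # Graceful failure: in a weird situation, slow down and alert the
    # driver, which counts as "handled" rather than as a crash.
    if s.visibility_m < 20 and s.pedestrian_crossing:
        return True
    return s.road_friction > 0.25

def human_baseline(s):
    # Stand-in for the average-driver grading reference.
    return s.road_friction > 0.3 and s.visibility_m > 15

suite = generate_scenarios(10_000)
print(grade(autonomous, suite) >= grade(human_baseline, suite))  # True
```

The cost-benefit check is then just a comparison of the two grades; the hard part, as other comments note, is whether generated scenarios resemble reality at all.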

------
iraphael
> I encourage our greatest minds to build the tools that will computationally
> create almost all possible scenarios a driverless vehicle can encounter

That's not as simple as the writer seems to think it is. Autonomous vehicles
can only interface with the world via sensors, and sensor data that _looks_
real, but is actually computationally generated, is one of the best ways to
make a model overfit. It can learn the exact function and simplifications a
developer has made in order to generate the fake data.

Then you can say "alright, record the data and modify it". But you can't
actually modify, for instance, point-cloud data that much to account for
arbitrary viewpoints. What you can do, and what most companies do AFAIK, is
record the data as a "unit test". The car then responds to the test and,
given the resulting actuation, the test is evaluated as passed or failed
(e.g., you want to drive into the sidewalk? test failed). So if the car wants
to perform a correct maneuver in the test (i.e., one that passes the
evaluation), but it differs from the maneuver the test has recorded, then you
suddenly don't have the data to assess how well the car did. I know companies
have ways to solve this, but I assume it isn't by generating data, given the
problem described above.

tl;dr: generated data isn't as straightforward as it sounds and can lead to
bigger problems later on.
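The "recorded data as unit test" evaluation described above might look roughly like this (the toy planner, lane model, and pass criterion are all hypothetical):

```python
def stays_on_road(trajectory):
    """Pass criterion: no point of the planned path leaves the lane,
    modeled here as lateral offset x in [-1.5, 1.5] meters."""
    return all(-1.5 <= x <= 1.5 for x, _y in trajectory)

def run_scenario_test(planner, recorded_inputs, pass_criteria):
    """Replay recorded sensor data and judge the resulting actuation
    against pass criteria, instead of requiring it to match the
    maneuver that was originally logged."""
    trajectory = planner(recorded_inputs)
    return all(check(trajectory) for check in pass_criteria)

# Toy planner: swerves slightly around an obstacle but stays in lane.
def toy_planner(inputs):
    return [(0.5 if obstacle else 0.0, y) for y, obstacle in inputs]

# (longitudinal position, obstacle present) pairs from a "recording".
recorded = [(0, False), (1, True), (2, True), (3, False)]
print(run_scenario_test(toy_planner, recorded, [stays_on_road]))  # True
```

Note the limitation the comment describes: the pass criteria only cover outcomes someone anticipated, so a novel-but-correct maneuver may fail or go unassessed.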

~~~
leereeves
> So if the car wants to do a correct maneuver in the test (i.e., one that
> passes the evaluation), but it is different than the maneuver the test has
> recorded, then you suddenly don't have the data to assess how well the car
> did.

Wouldn't engineers notice that when examining why the car failed the test, and
modify the test to allow the new correct maneuver?

------
gonvaled
> These learnings are immediately disseminated among the entire existing and
> unborn driverless car fleet

You can stop reading there. Unfortunately we are not living in an Open Source
world. Walled gardens will keep improvements to themselves, knowledge will
die with dying companies, and version incompatibilities will create a complex
situation.

I am very willing to believe in the thesis of the article, but the way the
Software industry has been evolving recently does not give me much hope.

------
mannykannot
The author has seriously underestimated (or, more likely, overlooked) the
difficulty of a key step in his proposal: enumerating all possible
circumstances. How many different "truck crossing ahead" scenarios are there?
How many different ways might something be misidentified as the sky? or
something else? How many combinations of basic errors are there, in which the
basic errors themselves may be harmless, but the combination is not? Even
deciding whether two cases are the same or different is often a matter of
opinion, and it is further complicated by the fact that what is the same or
different to a given system depends on details of the implementation of that
system.

The author is not shy about making other assumptions, either: "Sebastian Thrun
postulates that, when subject to real-world conditions, AI can double its
performance every 18 months"; "let’s assume that a driverless car is perfect
at navigating scenarios it has previously handled safely"... The foundations
of this article's argument are not nearly as sound as the author seems to
think.

------
circlefavshape
What a crock of shit. Would we accept the same reasoning for some barely-
tested miracle drug? "If we do the testing, and the drug turns out to actually
be miraculous, then lives were needlessly lost during the testing period".
Indeed ... but what if the drug is _not_ miraculous, or has dangerous side
effects?

Hands up who wants to trust their health to the untested assertions of a
pharmaceutical company

~~~
Turing_Machine
"Hands up who wants to trust their health to the untested assertions of a
pharmaceutical company"

Well, that depends. Do I have a mild case of acne, or am I dying of cancer? My
willingness to try an untested drug is likely to be very different in those
two scenarios.

~~~
legolas2412
But what if your taking the drug for your cancer kills me? (I mean: the self-
driving car you bought for your inability to drive kills me.)

~~~
Turing_Machine
Well, you hire a lawyer and convince a jury that's the case.

If the car's software is defective, the company should clearly be held
accountable.

------
rndmio
This is part of the wider issue of who is responsible for driverless cars. I
buy insurance for a car that I drive, the company charges me based on risk
factors (age, where I live, how often I drive, etc.), and the driver alone is
ultimately responsible for the actions of the car. If I hit someone, I am
responsible and liable; if I'd been drinking, I am prosecuted, and the family
of whoever I hit has a focal point for blame. But with a driverless car there
is none of that. Where is the agency? I may have asked the car to take me
from A to B, but would it be valid for me to be held liable for actions the
car takes on the route that I have no control over? This almost leads to
fatalism: there will be people injured and killed, but there is literally
nothing we can do to predict when or where these incidents will occur. I
agree that there may be a strict statistical case for driverless cars, but I
can see why it will be very hard to get past the human aversion to them.

------
eicnix
In Germany those decisions would violate the first article of its
Constitution: human dignity is inviolable.

I think those decisions are necessary for driverless cars, but they will
start a huge discussion about the ethics behind calculating the minimum
number of victims. This includes deciding whether saving a child is more
important than saving an elderly person.

~~~
kminehart
This is why I think vehicle autonomy will never happen the way we want it to.

Collectively, the death toll of driving will be significantly reduced, but it
will only take a few people being killed by driverless vehicles for people to
throw a fit about it, because there is a clear victim.

In an accident, it's exactly that: an accident. It wasn't an intended
decision, it just sort of happened. With AI and autonomous vehicles,
everything is a decision, and someone gets the worse outcome, and every little
bad thing that happens is going to cause an uproar.

I'm sure something similar has happened before; I'm struggling to come up
with an example, though.

~~~
marcoperaza
> _In an accident, it's exactly that: an accident. It wasn't an intended
> decision, it just sort of happened._

I think that one of the issues is that it's often _not_ just an accident,
_not_ something that "just sort of happened". Safe driving practices do work.
E.g. checking your blind spot, not running stop signs, always looking both
ways before going when the light turns green, etc. Of course, you can only do
so much to protect yourself against other people's bad driving. But there is a
moral component to car accidents; they happen more often to bad drivers.

~~~
kminehart
You're absolutely right. I 100% agree, and there's a single person at fault
whom victims pursue. Who faces the blame when a driverless vehicle is forced
to decide to strike a pedestrian or hit a wall rather than take a head-on
collision? Driverless cars do.

We may not like it but that's how the public will see it.

~~~
AstralStorm
You can show that the car did its best, statistically, or at least performed
no worse than a panel of drivers (a jury of sorts). Black boxes have a long
tradition, and self-driving and assisted-driving cars should have them.

Sometimes the answer really is "you are out of luck".

For a bug or a really bad known issue, the authors and designers (the car
company) should be liable.
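The "statistically no worse" claim is, at bottom, a comparison of two incident rates; a crude version could be checked like this (illustrative numbers only, pooled normal approximation):

```python
from math import sqrt

def two_proportion_z(failures_a, n_a, failures_b, n_b):
    """Z statistic comparing two failure rates (pooled normal approximation)."""
    p_a, p_b = failures_a / n_a, failures_b / n_b
    pooled = (failures_a + failures_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative numbers only: 3 incidents per million autonomous miles
# versus 12 per million miles for a human-driver baseline.
z = two_proportion_z(3, 1_000_000, 12, 1_000_000)
print(z < -1.645)  # True: the car's rate is lower at the one-sided 5% level
```

A black-box recorder is what would supply the per-incident data behind such a comparison.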

------
terravion
There is at least one start-up I know working on this very problem using
simulation to prove that automotive software can handle these extremely rare
events without having to do trillions of miles of real world testing.

~~~
brequinn
which one?

~~~
terravion
I assume they're reading this thread.

------
mcguire
" _To a first approximation, let’s assume that a driverless car is perfect at
navigating scenarios it has previously handled safely._ "

As a back-of-the-envelope calculation, I was with you until that sentence.
That is an unwarranted and very aggressive assumption.

------
dao-
Speculative summary: cars with drivers kill more people, and while drivers
can be assisted with technology, there's more opportunity to drive the
accident rate down further with driverless cars.

~~~
AstralStorm
This assumes that assisted driving is worse than fully autonomous driving.
Until a serious study is run and a good test suite is designed, we are left
with fear and suppositions.

------
serge2k
Does this ever once make the argument suggested in the title?

------
Pica_soO
ph'nglui mglw'nafh Carthulhu R'lyeh wgah'nagl fhtagn

