
The Menace and the Promise of Autonomous Vehicles - raleighm
https://longreads.com/2018/06/12/the-menace-and-the-promise-of-autonomous-vehicles/
======
Animats
I stopped reading at "Central to AV testing is the “trolley question”." No,
that's not a central problem. If it were, it would be covered in driver's ed
and driver testing. It's not. Most accident avoidance is pure braking.

On the evidence, the central question in AV testing is "can we trust the human
to take over fast enough when the automation does something bad?" The answer
appears to be "no".

He also parrots Uber's whining about California's "regulation" of AV testing.
The California DMV autonomous vehicle testing rules mirror those for human
driver learner's permits. You have to get a license. You have to show proof of
insurance. You have to report accidents. You have to have a licensed driver on
board. DMV can pull your license if you have accidents. You can't carry
passengers for hire. You can't drive a heavy truck. No surprises there.

DMV does require reports of minor accidents and of disconnects, and they
publish that data. That's what irks the companies with bad automatic driving
systems - public disclosure that they suck.

~~~
imtringued
There is a valid trolley question but it's not about how the AV makes
decisions.

It's about whether the benefit of developing AVs as fast as possible outweighs
the risk of accidents caused by immature AVs.

If AVs can save 1000 lives a year, then going Uber style and risking 100 deaths
to speed up development of AVs by even just 2 months might save more lives
than trying to avoid deaths entirely.

~~~
notahacker
The evidence of what actually happens when you "go Uber style" and cause a
death as a result strongly suggests that risking 100 deaths will not speed up
development of AVs...

Edit: genuinely perplexed by the downvoter who apparently believes the Uber
"move fast, break things, suspend testing for at least several months after
attracting widespread public opposition" approach has been a net positive for
their prospects of commercialising self driving cars in the near future?

------
jadedhacker
It would be much easier to countenance arguments that self driving cars will
save us all if humans weren't such good drivers.

[http://www.iihs.org/iihs/topics/t/general-statistics/fatalit...](http://www.iihs.org/iihs/topics/t/general-statistics/fatalityfacts/state-by-state-overview)

In Massachusetts, the number of deaths per 100 _million_ miles driven is
0.66. Those are the massholes. By contrast, it was reported that Uber required
human intervention once in thirteen miles while Waymo required human
intervention once every 5000+ miles. Those are not directly comparable to
fatality statistics, because the interventions also cover avoiding disturbing
events like hard braking, but they are still illustrative.

[https://arstechnica.com/cars/2018/03/leaked-data-suggests-ub...](https://arstechnica.com/cars/2018/03/leaked-data-suggests-uber-self-driving-car-program-years-behind-waymo/)

Let those orders of magnitude sink in. The humans are driving in all weather
conditions all over the country while self-driving cars are in (generally)
sunny relatively controlled areas.
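
A rough back-of-envelope sketch of that gap, using only the numbers quoted
above (a toy calculation; an intervention is obviously not a fatality, so this
is only about scale, not a like-for-like comparison):

    # Figures quoted above: IIHS Massachusetts fatality rate, and reported
    # intervention intervals for Uber and Waymo (miles per intervention).
    human_fatalities_per_100m_miles = 0.66
    uber_interventions_per_100m_miles = 100_000_000 / 13      # ~7.7 million
    waymo_interventions_per_100m_miles = 100_000_000 / 5_000  # ~20,000

    # Ratio to the human fatality rate - purely to show orders of magnitude.
    print(uber_interventions_per_100m_miles / human_fatalities_per_100m_miles)   # ~11,655,000
    print(waymo_interventions_per_100m_miles / human_fatalities_per_100m_miles)  # ~30,300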

Personally, I have hope that we can eventually beat humans, but these kinds of
statistics call for a careful, deliberative introduction of this technology.

~~~
TangoTrotFox
So you think it's reasonable to compare the rates that humans not only get
into crashes, but actually die (or kill) in crashes - to the rates that a
human takes over for an early alpha tech version of self driving vehicles?

Your stats are not even remotely "illustrative" of anything other than what
they say about the presenter.

~~~
jadedhacker
This would be academic if someone hadn't actually died. I don't know the
numbers, but I'm fairly sure that just tanked the self-driving stats pretty
hard.

EDIT: Also, consider what avoiding "hard braking", which was a large reason
for human interventions iirc, actually means. It means either that the
computer jams on the brakes hard for no reason or that it only realized it
needed to brake substantially after a human would have realized the necessity.

EDIT2: Uber had driven 3 million miles at the time of the death, according to
the article below. A single incident does not statistics make, but based on
the NTSB analysis so far, that version of the software made a systematic
error. That puts Uber's score at a very handwavy 33.33 deaths per 100 million miles.
I'm sure the number will change with software revisions and just from random
chance. [https://www.nytimes.com/2018/03/19/technology/uber-driverles...](https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html)
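
For transparency, here is the arithmetic behind that handwavy number (a quick
sketch using the ~3 million miles from the article):

    # 1 fatality over ~3 million Uber test miles, scaled to 100 million miles
    deaths = 1
    miles_driven = 3_000_000
    print(deaths / miles_driven * 100_000_000)  # ~33.33, vs. 0.66 for MA human drivers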

EDIT3: I should mention that I was initially hesitant to compute this
"statistic" because it's so raw and will wildly fluctuate. Interventions are
more solid statistically. I don't think it's crazy for someone to look at a
heavy box of metal that requires constant supervision to do the right thing
and conclude in the absence of (appropriately) extensive testing that it's
dangerous.

~~~
TangoTrotFox
And now you're just making up stuff. I'm shocked... Disengagements have
absolutely nothing to do with "avoiding hard braking". Disengagement numbers
and analysis are completely open; you can find them here [1], along with the
reasons, miles driven, etc. A disengagement is defined as, _" a deactivation
of the autonomous mode when a failure of the autonomous technology is detected
or when the safe operation of the vehicle requires that the autonomous vehicle
test driver disengage the autonomous mode and take immediate manual control of
the vehicle."_

So, for instance, if the software is providing a buggy output and the car is
stopped for debugging, recalibration, etc., that counts as a disengagement. And
that is indeed, by far, the most common occurrence. It's extremely rare for a
disengagement to actually be about avoiding a crash - let alone your fixation
with "hard braking." Some
companies, such as Nvidia, specifically clarify that literally 0 of their
disengagements (in their case out of 109) had anything to do with extraneous
conditions such as road surface, construction, emergencies, etc.

[1] -
[https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/disen...](https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/disengagement_report_2017)

~~~
jadedhacker
Hard braking was discussed in a news article I read some time ago about
Uber specifically. Your attitude is not helpful. The list you linked does not
include Uber, though it does include many others. Thanks for the link.

EDIT: I'm looking at Waymo's totals, whom I consider to be a safer player than
Uber. The data totals come from your link on their page 2 Table 2:
Disengagements by Cause. The time period spans Dec. 2016 to Nov. 2017.

    
    
    Disengage for a recklessly behaving road user: 1
    Disengage for hardware discrepancy: 13
    Disengage for unwanted maneuver of the vehicle: 19
    Disengage for a perception discrepancy: 16
    Disengage for incorrect behavior prediction of other traffic participants: 5
    Disengage for a software discrepancy: 9

~~~
TangoTrotFox
News articles on this topic tend to be less than worthless - an increasingly
typical state of affairs. In this case the reason is quite direct: people are
scared of revolutionary technology, and especially self driving vehicles --
and nothing gets clicks like fear mongering. Don't believe everything you
read, let alone parrot it.

You'll also notice the paragraph in Waymo's report, which is typical: _" Note
that, while we have used, where applicable, the causes mentioned in the DMV
rule (weather conditions, road surface conditions, construction, emergencies,
accidents or collisions), those causes were infrequent in our experience. Far
more frequent were the additional causes we have labeled as unwanted maneuver,
perception discrepancy, software discrepancy, hardware discrepancy, incorrect
behavior prediction, or other road users behaving recklessly."_

In other words, they ended up disengaging because of "emergencies" - the
category your imminent hard braking would fall under - exactly 0 times.

------
taneq
> What does it mean to experiment with technology that we know will kill
> people, even if it could save lives?

This tagline ignores the fact that every new technology ever invented has
killed people. Machines fail. Buildings collapse. Bras and vending machines
and popcorn and fire hydrants all kill people. The simple passage of time on
large projects kills people.

People are going to die no matter what. If we can reduce the body count then
that's a win.

~~~
buvanshak
Sorry. Cars have slightly more potential to kill than bras and vending
machines and people dying of natural causes...

~~~
Fricken
Heart disease, cancer, and chronic lower respiratory diseases are the top 3
leading causes of death in the USA, all regarded as natural causes.
Accidents are #4, and car crashes are included in that.

~~~
pimmen
And if we restrict it to the age group that uses cars the most, like
people in middle age driving around their family? I would guess most of the
people in that car are not that likely to die from 1, 2, or 3 anytime soon.

~~~
Fricken
Yeah, cars are best for killing (and crippling) people in the prime of their
lives.

------
rob_raviolli
I recently read this paper
([http://sjha8.web.engr.illinois.edu/resources/DSN2018.pdf](http://sjha8.web.engr.illinois.edu/resources/DSN2018.pdf))
from the dependable systems conference. It seems like AVs are nowhere near
ready.

This is not just an Uber problem - it is pervasive across most manufacturers.
The rate at which these AVs are getting better essentially means we won't have
"human level" cars for a long time.

~~~
icebraining
I have no idea if AVs are near ready, but that analysis seems a bit shallow.
All the dataset gives them is the number of disengagements and the miles
driven each month, but not all miles are necessarily the same; for example, it
makes sense that Waymo et al. would start taking their cars to streets with
more traffic/people/etc as they get more confident in the system's ability to
navigate in easier areas.

------
viburnum
America's car death rate is far higher than most rich countries. We could save
20,000 lives per year just by following best practices, but nobody gives a
shit, so let's all fantasize about magic cars.

~~~
a_imho
Agreed, autonomous vehicles _might_ offer some benefits, including road safety,
in the very long run. However, we could do so much better right now if laws
were enforced, people actually followed them, or people simply cared. The system
is designed with plenty of buffer in it; it is just the nature of most drivers
to disregard speed limits and safe following distances.

As a consequence imo it is misguided to measure AVs against _average_ drivers,
because more average drivers on the road is exactly what I would rather not
have.

------
KKKKkkkk1
The article is claiming that the development of self-driving cars will require
killing people on public roads. That's a tantalizing prospect ("kill thousands
now to save millions later"), but no evidence or arguments are presented why.
Sure, there will be deaths due to road accidents, but is it really inevitable
that more people will die due to failures of engineering like Uber's or
Tesla's?

~~~
dbasedweeb
Utopias are always justified this way, and the worst crimes are committed as a
result. After all, if a potentially limitless number of lives are saved in the
future, you can argue for sacrificing millions today. Somehow people never
seem to understand that nothing real or worthwhile is down that road, and as a
species we go down it repeatedly.

I think the best argument against this is to demand that the person willing
to let many die is the _first_ to die.

~~~
maxxxxx
"I think the best argument against this, is to demand that the person willing
to let many die is the first to die. "

That's why I am saying that all road testing of autonomous cars should take
place in the neighborhoods where the executives and their children live and
are on the road. That would align the incentives of the company and the
public.

------
tim333
> It will take billions of miles—and some unknown number of people killed—to
> gauge whether, by a statistically significant margin, AVs are safer than
> human-driven cars.

This kind of lumps all the AVs together. There is evidence, though, that Waymos
are already safer than human drivers - from 2012:

> Google announced Tuesday that its self-driving cars have completed 300,000
> miles of test-drives, under a "wide range of conditions," all without any
> kind of accident. (The project has seen a few accidents in the past — but
> only with humans at the wheel.)

> To put that into perspective, the average U.S. driver has one accident
> roughly every 165,000 miles. [https://mashable.com/2012/08/07/google-driverless-cars-safer...](https://mashable.com/2012/08/07/google-driverless-cars-safer-than-you/?europe=true#VYLbEm9KDgqB)

Since then I think they have logged about 6 million miles and had one
semi-at-fault accident, when their car thought a bus would give way but it
didn't and hit them. I don't think Waymo would have been looking to order
62,000 vans if they were not confident they would be safer than human drivers.

They have also had far more accidents with humans at fault crashing into
stationary Waymos than the other way around.

On the other hand, Uber's crash seems like a reckless effort to try to catch up
by cutting corners, and Tesla's "it's your fault if you didn't look for a few
seconds" attitude seems iffy.

------
Fricken
Over $100 billion has been invested in this as-yet-unproven technology; there
is no historical analogue for that. This money has been invested either
because the players involved stand to make a lot of money, or else they stand
to lose a lot of money if they don't invest in the technology. More so the
latter than the former, really.

Plenty of lip service is paid to the potential safety benefits, along with a
laundry list of other social benefits, but really it's about money.

By logical extension, autonomous vehicles have to be safer than human drivers,
or they just won't fly. Too many accidents invites regulatory scrutiny, time
consuming safety audits, and worst of all, public backlash.

If you look at the players across the industry they can be broken into two
distinct categories: the offensive players, and the defensive players.

The offensive players: Namely Waymo, but also Cruise, Aurora innovation,
Drive.ai, Argo.ai, Zoox and Nutonomy: their safety records are impeccable.

But there are two prominent defensive players who both would be better off if
AVs were never a thing: Tesla and Uber. For them, screwing it up for everyone
is a net benefit to their business models.

Now, if we were actually concerned about safety, we wouldn't be driving, we'd
be riding bicycles (or walking or taking the train/bus), and our cities would
more closely resemble what they were in the 1930s or earlier. But there's no
money in bikes, and everybody hates them.

------
RcouF1uZ4gsC
One of the most dangerous states of mind is to be pursuing something that you
are convinced will save people's lives in the future, but that might hurt
people now. This type of thinking is unrivaled in causing previously normal
people to do horrendously bad stuff. Add to this the potential for profits,
and it becomes a very toxic mix, where a person can be killing people while
trying to make a profit while fully convinced that they are a very good person
and are doing humanity a great service.

We have historical experience with an analogous situation. Medical experiments
and drug development. People became so focused on "the greater good for
humanity" that they did horrible stuff to study participants (see Tuskegee
Syphilis Study). Or drugs were not adequately assessed for safety in pregnant
patients and severe birth defects occurred (see thalidomide). This is one of
the reasons we have Institutional Review Boards to review medical studies and
the FDA to review drugs, because people involved in a project like this are so
good at fooling themselves.

When developing drugs, pharmaceutical companies aren't allowed to say "This drug
will cure X and benefit so many people. We would like to ignore the side
effects happening now because the cure will be so beneficial in the future."

We need the equivalent of an Institutional Review Board/FDA for self driving
cars.

As part of the submission, the engineers have to document what sensors they
are using, what the limitations of those sensors are, what algorithms/machine
learning techniques they are using, and the limitations of those. They also
need to disclose the engineering tradeoffs and design decisions (for example,
not having LIDAR).

Then there need to be tests for self-driving cars for safety on the unhappy
path.

What happens if there is a concrete barrier in front of you?

What happens if the lane markers are bad?

What happens if the car in front of you brakes?

What happens if you are driving with the sun facing you?

What happens if the truck in front of you drops something in the road?

What happens if a child darts in front of you?

And so on...

Then, based on the independent testing, certify the car for self-driving only
in the conditions that were shown to be safe.

After a self driving car is on the road, all data collected by the self
driving cars should be shared with the review board so that there can be
ongoing monitoring. In addition, users of those cars should be able to report
incidents and near misses to the review board.

~~~
fraudsyndrome
You know, I'm not in this field, but this is stuff that I assume
would/should already be done BEFORE even putting it out on the market.

~~~
rohit2412
Then why did the multiple deaths caused by Uber and autopilot happen?

~~~
ben_w
USA motor vehicle fatality rate is 7.1 deaths per billion vehicle-km. Hard to
test that without going outside onto real roads with real drivers. (Tesla
fatality rate at time of third and most recent fatality: 14.3 deaths per
billion autopilot-km, which is significantly worse than I had realised).
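
For anyone who wants to redo this with different inputs, the calculation is
just deaths over distance; the autopilot-km figure below is an illustrative
placeholder (roughly what would reproduce the 14.3 figure with three deaths),
not a number published by Tesla:

    # Deaths per billion km, given a death count and a cumulative-km estimate.
    def deaths_per_billion_km(deaths: int, km: float) -> float:
        return deaths / km * 1_000_000_000

    # Placeholder inputs, not official figures:
    print(deaths_per_billion_km(deaths=3, km=210_000_000))  # ~14.3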

~~~
ben_w
Note for anyone reading this in an archive: this was based on information
which I later learned was incorrect. The number of autopilot-km driven was not
at the time of the 3rd death, but either the 1st or 2nd, depending on if the
autopilot was involved in the incident of 29th January 2016.

------
paul7986
Not paying through the nose to be a Musk guinea pig, nor going to walk the
streets of Pittsburgh... avoiding having an Uber car run me over.

Progress will be a killer!

