
Safety Numbers on Autonomous Cars Don't Add Up - lemiant
https://www.bloomberg.com/view/articles/2018-02-02/safety-numbers-on-autonomous-cars-just-don-t-add-up
======
evancox100
"Because the Cruise cars were still pondering their next move when the driver
took over, these incidents apparently do not constitute failures that must be
reported to the DMV. Though a neat piece of legalism, this logic can't help
but make one wonder how long a vehicle can remain motionless on public roads
without it constituting a failure of the autonomous technology."

I know! We'll just add a program that checks whether or not the AI would ever
stop and come to a decision.

~~~
ThePadawan
You could say that there is an obvious Halting Problem which needs solving.
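
Jokes aside, the reduction is real: a general checker that decides whether arbitrary planner code will ever come to a decision is exactly the halting problem. A minimal Python sketch of the classic contradiction (`halts` and `troublemaker` are hypothetical names; `halts` is an oracle that provably cannot be implemented):

```python
def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) eventually returns."""
    raise NotImplementedError("no total, correct implementation can exist")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about us:
    if halts(program, program):
        while True:      # oracle said we halt, so loop forever
            pass
    return "done"        # oracle said we loop, so halt immediately

# troublemaker(troublemaker) halts iff halts() says it doesn't --
# a contradiction, so no such halts() can be written.
```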

------
Isamu
If nothing else, this article provided a link to the California vehicle code
that covers autonomous vehicles. That is an interesting read.

Back to the article, they point out impatient overrides by the test driver are
apparently not being counted as disengagements and do not appear on the
reports to the DMV. The author tries to make this into a safety concern, but
it just isn't. Maybe it is against the intent of the code as written, but
maybe not, since it seems to be concerned with true safety violations.

The code:
[https://www.dmv.ca.gov/portal/wcm/connect/d48f347b-8815-458e...](https://www.dmv.ca.gov/portal/wcm/connect/d48f347b-8815-458e-9df2-5ded9f208e9e/adopted_txt.pdf?MOD=AJPERES)

~~~
ars
> impatient overrides

It's not impatient. It's the car having no idea what to do next, so it just
sits there waiting for something to happen.

I count that as a failure on the path to autonomous driving. Remember: those
1% scenarios are the hardest part, yet they are critical. You can't just say
"99% is good enough" - after all, humans drive perfectly 99.999% of the time
(measured by time on the road vs. assuming 5 minutes of error per accident -
which is probably high, since the error leading to the accident probably took
less time than that), yet that's not good enough.
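
The parenthetical can be made concrete. A quick back-of-envelope check, using my own filled-in assumptions (1 hour of driving per day, the ~18-years-between-accidents figure cited downthread, and the 5-minutes-of-error-per-accident guess):

```python
# Back-of-envelope: what fraction of driving time is "error time"?
# All three inputs are assumptions, not measured data.
DRIVING_HOURS_PER_DAY = 1.0
YEARS_BETWEEN_ACCIDENTS = 18          # insurance-claim estimate
ERROR_MINUTES_PER_ACCIDENT = 5        # deliberately high guess

total_minutes = YEARS_BETWEEN_ACCIDENTS * 365 * DRIVING_HOURS_PER_DAY * 60
error_fraction = ERROR_MINUTES_PER_ACCIDENT / total_minutes
reliability = 1 - error_fraction
print(f"{reliability:.5%}")  # -> 99.99873%, i.e. the ~99.999% above
```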

~~~
taneq
We need to distinguish between two kinds of failure.

For any kind of self-driving vehicle, making a silly mistake and needing a
human to override it to prevent a collision is not OK at any significant
frequency. I'd guess the maximum acceptable rate would be no more than one
collision in multiple years of driving.

For a self-driving private vehicle with manual controls, pulling over and
saying "help me, human!" once every few months is perfectly acceptable. For a
standalone self-driving taxi with no manual controls, it's not. For a self-
driving fleet taxi with a remote operator able to take over and help it when
it gets stuck, it may be.

~~~
ars
> no more than one collision in multiple years of driving

That's a really high rate - it's worse than what people do (on average a
person will have an accident roughly every 18 years, and almost never a fatal
one).

And that's _with_ a person watching? That's _really_ bad.

> once every few months is perfectly acceptable

Not really. The people inside it will have no practice at driving (or even
the ability to drive). Such a car is not really useful; you'll just make
things more dangerous, not less.

The bar for self driving cars is so incredibly high, I doubt you'll see them
on regular (non-instrumented, restricted) city streets until we have true AI.

Most likely there will be train-like intercity roads, dedicated (and
instrumented) for self driving cars and trucks.

But unrestricted city driving? Very unlikely to ever happen.

Despite how high the accident rate is, people are actually really really good
at driving. And computers are really really bad at being reliable.

When you can have a complex computer program that simply manages 99.9999%
uptime, then we can _start_ to talk about self driving cars. We don't even
have that; self driving cars are an impossible dream.

And that 99.9999% number is not just random, it's how good you have to be to
beat a human at driving.
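
The claim can be inverted to see what that extra "nine" buys. Using the same illustrative inputs from upthread (5 minutes of error per accident, ~1 hour of driving per day, both my assumptions), each additional nine multiplies the mean time between accidents by 10:

```python
# Map a "perfect driving" fraction back to accident frequency,
# under the 5-minutes-of-error-per-accident assumption (illustrative only).
ERROR_MINUTES_PER_ACCIDENT = 5
MINUTES_DRIVEN_PER_YEAR = 365 * 60     # assumes ~1 hour of driving per day

def years_between_accidents(reliability):
    error_minutes_per_year = (1 - reliability) * MINUTES_DRIVEN_PER_YEAR
    return ERROR_MINUTES_PER_ACCIDENT / error_minutes_per_year

print(years_between_accidents(0.99999))    # ~23 years: roughly a human
print(years_between_accidents(0.999999))   # ~228 years: the proposed bar
```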

~~~
kurthr
Hmmm... I think I read the same insurance statistic, and the number was 17.9
years, which is close enough. The concerning bit was that the number came from
insurance claims, not accidents, and claims are likely under-reported because
nobody wants their insurance premiums to go up! As a result, many parking-lot
bumper dings and minor side swipes of other cars, bicycles, and pedestrians
probably aren't reported.

Otherwise, judging from the bumpers of most cars in NY/SF... I would guess
that each person had filed hundreds of claims.

------
sidibe
There is definitely something that doesn't add up between the recent surge in
pessimism about the progress on self driving cars and the fact that Waymo is
about to start taking passengers without backup drivers. Either they are
recklessly competitive, or they are confident that the cars can handle any
situation they might run into in the area where they're releasing them. If it
were some independent startup with less to lose I'd lean more toward the
former, but they're not.

~~~
tyingq
Hard to say. Waymo has way more miles than everyone else, but the layman's
observation is "yeah, but in environments you control with an iron hand and
travel over and over on the same rough, predictable routes, albeit with your
very rich suite of data points."

A bazillion miles on the same track isn't necessarily as impressive as an
order of magnitude fewer miles on varied, uncontrolled/unmapped routes:
snowy, icy, narrow, unlit, flooded, unplanned, and so on.

So, how do I compare Google's/Waymo's vast number of sensor rich planned miles
versus Tesla's much lower, much less rich (no lidar) but much more diverse and
real-world miles? And everyone else's (Uber/Lyft) in-between play?

It's pretty unclear, for a layperson, right now, who is ahead of the pack.

~~~
lambdasquirrel
Do we need driverless cars to handle all the corner cases? I’m starting to
think this is just the wrong way to look at it. Most of the time we commute
home, it’s just the same dumb stuff.

We can make an analogy to AI-assisted StarCraft here. What if, instead of
having a fully automated AI, you could train your car to take you home by
driving your route a couple of times? Kind of like how you might train the
computer to go harass your opponent’s expansion and then just call up that
subroutine every time it’s apropos (still waiting on Blizzard to make this
game).

Don’t try so hard to replace the human. Make the human less busy, more
powerful.

~~~
YokoZar
The open source Spring RTS interface allows for player-made AIs to execute
inside the game engine. You can play alongside your bots (and turn certain
behavior off/on with the interface).

------
protoplant
One thing I find healthy is that many companies beyond Tesla are trying to
build this technology. Even though the article suggests the data may be
suspect at this point, the competition from a variety of sources gives us a
good chance of some breakthroughs occurring.

~~~
InclinedPlane
In theory, sure, in practice, it's a little disconcerting.

I've worked in software since the '90s; I've seen how the sausage is made at
all kinds of companies, big and small. The theory of autonomous cars is a
bunch of really smart folks hand-crafting history's finest software. And there
may be some cases where that won't be too far from the truth. But the reality
on the street is going to be a zillion different competitors cutting every
corner, skirting every regulation they can get away with, and just shitting
out the worst "move fast and break things" hackathon bullshit code that "sort
of seems to work, most of the time" that they can manage. I know how devs and
product managers think about testing and quality in the absence of dedicated
and rigorous QA standards and infrastructure; in the context of life-critical
systems, that is _frightening_.

Ask yourself, do you want to put your life in the hands of a code base that
had some pimple faced learned-to-code-in-10-days bootcamp graduate who just
"fixed a bug" in the drive software by ctrl-c-ctrl-v'ing from stackoverflow
and then pushed to master? Because that is going to be the reality, not the
ivory tower "well, if they did it the RIGHT way" fantasy that people have in
their heads. The only way we'll get the "right" way of autonomous software
development is if there is extensive and careful regulation with very rigorous
auditing and process requirements. And we are nowhere near that right now.

~~~
JumpCrisscross
> _the reality on the street is going to be a zillion different competitors
> cutting every corner, skirting ever regulation they can get away with, and
> just shitting out the worst "move fast and break things" hackathon bullshit
> code they can get away with that "sort of seems to work, most of the time"_

Humans are terrible drivers. A half crap autonomous car might still be safer
than the _status quo_. In any case, whether talking about consumer goods or
services, this is a space markets work in. Calling for rules to be written
before we fully understand the problem is a recipe for overregulation.

~~~
erobbins
> Humans are terrible drivers.

Compared to what? We have some evidence that computers can do better in
tightly controlled scenarios (limited access freeways) but that's always been
low hanging fruit.

I think humans are pretty good drivers, actually. How many 2 lane undivided
roads do people go zipping down all day, every day, with very few accidents? A
ton. We're also very adaptable and good at handling outlier situations.

~~~
sien
Sober, rested, humans are great drivers.

The problem is that we are also a bit too good at creating outlier situations
(stoned, drunk, exhausted, angry at X, Y, or Z).

~~~
astura
Some just aren't good drivers when sober and rested - I've met many, and they
let anyone with a pulse get a driver's license and even if you don't have one
you're free to operate a motor vehicle all you want, nobody stops you from
turning the key.

------
carapace
Do you think self-driving car[0] manufacturers should be allowed to compete on
safety? That is, if one comes up with some sort of software "seatbelt," should
that company be allowed to prevent others from using it? I should go look up
how actual seatbelts and airbags became standard in (almost) all cars...

[0] Can we _please_ call them "auto-autos"?

~~~
brainfish
In the case of the 3-point seatbelt, a Volvo engineer invented it and Volvo
received the patent, then made it available to other manufacturers at no
cost.[0] While that is a great precedent I don't believe it's any kind of
requirement.

[0]
[https://en.wikipedia.org/wiki/Seat_belt](https://en.wikipedia.org/wiki/Seat_belt)

~~~
Johnny555
Making it a requirement would be a double-edged sword -- on the one hand,
companies would share safety features, but on the other hand, they'd have much
less incentive to develop new safety features if they had to give them to
competitors at no cost.

Perhaps some sort of compulsory licensing would be better.

