How Many Miles of Driving to Demonstrate Autonomous Vehicle Reliability? (2016) (rand.org)
21 points by quickfox on June 24, 2018 | 41 comments


This is just bad math to justify the "imperative to develop adaptive regulations", a.k.a. "let's just let Uber drive without emergency braking".

No one is fooled. It didn't take a billion miles to expose Uber's system as broken, though it did unfortunately cost someone their life. It also doesn't take a graduate statistics class to look at Waymo's tests, notice that none of them involve driving rain on a pitch-black night, and conclude that maybe they aren't giving the entire picture and that "driving miles" are not created equal.


First show me a machine that won't run into a barrier, or a human, or a parked fire truck, or a parked police car at 40-60 mph without even braking.

Then ask this (for now ridiculously abstract) question again.


These things seem to happen with humans too.

But what you're asking leads directly to the question being asked here! How long would something have to drive before you'd believe me that it's unlikely to do one of those things? Is it even possible for me to demonstrate this?


It would have to drive fully autonomously, in all conditions that humans drive in, for enough miles to produce a statistically valid lower rate of accidents than humans.

Since it can't drive fully autonomously until it has proved itself, it would be allowed to have a safety observer on board. But here is the catch: any intervention by the observer counts as an accident, since we are simulating what it would do without a human. Assume the worst case.

Oh, and software updates reset the clock.


it's a perfect chain of self-destruction

public thinks technology and computers are "magic" and perfect self-driving cars are somehow easily possible, not having a clue about the extensive limitations of radar and other sensors vs software

managers think they'll just fix the code by pushing a new update later, so just get it out the door because someone higher up is breathing down their neck

government forces their agencies to work with minimal budgets so testing is too basic, never updated and never re-reviewed, maybe even corrupted by lobbyists to do self-regulation without oversight

victims often won't be the drivers who early-adopted or abuse the technology out of ignorance, laziness or greed but instead innocent pedestrians, cyclists, or other drivers in regular vehicles doing nothing wrong

when the victims sue, juries will be made to easily believe technology can't be wrong and it was human error by either the victim or the driver

remember the Toyota spaghetti code for electronic fly-by-wire steering and brakes? sadly I think it's going to take some lawyers with billion-dollar class-action suits to get all this to slow down

beta testing 2-10 ton vehicles moving fast in the real world with plenty of possible victims demands space-grade testing (and even then plenty of rockets blow up on the launch pad or never make it to orbit)

meanwhile I hope independents like Consumer Reports come up with an ever-improving test site of obstacles and events for these cars to deal with, and that they are very thorough and harsh about the tests every year


> sadly I think it's going to take some lawyers with billion dollar class-action suits to get all this to slow down

That will come in short order.

Uber was lucky that the person they killed was homeless and not a politician's son or daughter.


We should not let perfect be the enemy of "better", or something to that effect. If we can have self-driving vehicles that are twice as safe as the average human, then they should be allowed on the streets.

https://www.rand.org/blog/articles/2017/11/why-waiting-for-p...

I realize that accepting that a machine may kill you when you cross the street is a bit unsettling, but if you think about it, it is no different from accepting that the person in the car behind you may have a moment of absence, not pay attention to the road, and end up killing you or a pedestrian.

Yet one human killing another seems to be accepted by society?

If car manufacturers take on the responsibility and financial burden of compensating the families of people killed or injured by a self-driving car, just as insurance companies do today on behalf of at-fault drivers, then the problem becomes non-existent, unless I am missing something.


The article is saying that it may be difficult or impossible to prove that self-driving cars are actually safe. The Tesla issues and recently reported sensor shortcomings are damning and really put a dent in my faith in this technology.

An example from a recent road trip I was on: I was travelling down a suburban street with cars parked on the side of the road. There was a car parked with a relatively high undercarriage, so that from a distance I could see under the car through to the other side. I saw a pair of shoes toward the front side of the car, which I (correctly) deduced were connected to a human being. In anticipation of driving past a person who might step out in front of me I slowed down and made sure to pass safely.

Tesla, on the other hand, will drive at 70 mph into a concrete divider if some paint on the road is smudged.

We've got a ways to go yet I think.


I'm not sure why you are talking about Tesla in a submission about self-driving. Tesla doesn't have self-driving technology. They only have adaptive cruise control and lane assist.


You really think Tesla is not relevant in a discussion about self driving cars?


> If we can have self-driving vehicles that are twice as safe as the average human, then they should be allowed on the streets.

Coal power plants kill about 100,000 people a year, so a reasonable target for nuclear power is 50,000 a year.

Cigarettes kill 50% of long-term users, so a good target for vaping devices is killing 25% of users.


I mean, it's not like we'd stop at 50% and call it good. But then we're at least reducing the number of deaths while we refine things.


I am appalled at statements such as "twice as safe as a human". We should be comparing driverless technologies with humans assisted by the same technology running in passive mode.

Otherwise, you're implying that the developer of that technology would either release the technology as completely driverless or not release it at all.


> Otherwise, you're implying that the developer of that technology would either release the technology as completely driverless or not release it at all.

In some ways that statement is true. Ford and a number of other companies refuse to implement level 3 automation -- because of the switch-over costs and human-vehicle communication impedances which would almost certainly lead to more accidents.


You misunderstood that statement and said the exact opposite.

They refuse to implement level 3 automation; instead they roll out the same technology as driver-assistance technology (level 1-2).

Find me a company that says it has mind-blowing level 3-4 technology and will not make assistive technology with it. Ah, there is one: Tesla. Autosteer is just lane-keeping assist, but admitting that would mean Tesla can't claim to be futuristic. That Tesla claims "twice as safe as a human" while refusing to offer merely assistive versions of its technology is horribly misleading bullshit.


Yup. If the computer works fine 99.9% of the time, there's just no way to keep the meat paying attention so it's there when that .1% situation pops up.


>> If we can have self-driving vehicles that are twice as safe as the average human, then they should be allowed on the streets.

Absolutely disagree with this approach. Self driving cars can't be merely slightly better than your average human - they need to be completely flawless. Just like an airplane autopilot can't be just a bit better than a human pilot, it needs to be 100% reliable.


Why exactly? Wouldn't that be akin to limiting medication to dried herbs of irregular purity and dosage because synthetic medicine cannot cure AIDS with antibiotics?

Even at the most risk-averse, "needs to be better than both a human pilot alone and a human supervising it with overrides" would be a saner standard for autonomous operation, since there have been cases of crashes from overriding an autopilot that was judged to be wrong but was actually right.


It's not the machine that killed you - it's the person who programmed the machine. If a pedestrian dies when a person is behind the wheel, there are any number of reasons: inattentiveness, drunkenness, anger - all very human states of being and emotion. That exact combination of influences will never happen again. And it could also be just an unfortunate accident.

But with an AI driving the car, some faceless person ran the numbers and decided that your life is worth some statistical amount. Not only yours, but all the people ever who will stand in front of that machine.

And that is murder.

We could give the AIs free will if we ever figure out how to encode that concept in terms a computer would understand. But that opens up a different can of worms, because we are then enslaving a being with free will.

We could absolve the original programmers of blame - after all they were doing it for humanity or for the children or some other excuse. But then we have to decide how limited that absolution is.

Those are the kinds of issues we must address before we can fully trust an autonomous vehicle.


Can you define murder as you use it here?


I'm using the simple definition that murder is killing with intent, barring modifiers such as a state of war or emotional modifiers, which machines are incapable of.

Computer systems are deterministic - if we give it A, we expect to get B, not C or D in some unpredictable pattern. If we do somehow get C, that is an error, not an insight. If we allow that the machine is correct 90% of the time, it's still in error but furthermore we have now codified an acceptable error rate. A human or group of humans had to make that decision, because computers are incapable of deciding what is acceptable or not acceptable.

Ergo, when you step off the curb into the path of that self-driving car, someone has already made a decision that 10% of the time, you die. The stated intent might be to maximise the preservation of the lives of the vehicle's occupants or that group of schoolchildren or a dog or whatever, but the unstated intent is that someone has to die.

Which means it is codified killing, or murder. It is not a happy (or unhappy) accident of the universe.


By this logic doctors are murderers whenever they perform an operation and the patient dies, all equipment manufacturers that make something that's ever resulted in a death are murderers, all engineers of bridges that collapse under extreme conditions are murderers.

Really widens the net for the concept of murder.


If they ever formally codified the acceptable risk resulting in death and operated accordingly, then yes, they might be.

However, a doctor is not a machine - there isn't some piece of code that says 10% of all appendicitis cases die. Equipment deaths are usually operator error or perhaps component failure, etc. Bridges? Who knows?

The point is that an autonomous vehicle has a programmed decision tree where the others do not and at some point, someone has to have, or should have, taken into account vehicle operation under catastrophic circumstances.

Go read up on the trolley problem and tell me if I'm wrong in bringing this up.


This paper consistently assumes a fleet size of 100, and 24/7/365 driving. These are silly in different directions, but I think the former invalidates a lot of their claims. As a slightly not-random example, GM produces around 30,000 Chevy Bolts a year. Things start to look a little expensive, but the paper claims "This analysis shows that for fatalities it is not possible to test-drive autonomous vehicles to demonstrate their safety to any plausible standard". I think they are neglecting "without increasing fleet sizes beyond 100 vehicles".


Their assumptions are just one way of reaching the 275 million mile mark needed to hit that 95% confidence interval. If we're looking at actual numbers, Waymo has recently boasted of hitting the 7 million mile mark over several years of testing, so if anything, RAND's assumption of 275M miles in 12.5 years for a test programme looks to be erring on the generous side. Sure, if Chevrolet decided to use their entire production run of Bolts to test autonomous vehicle performance you'd get 275 million miles pretty quickly, but they might struggle to get consumer buy-in and regulator approval for selling fully autonomous driving capability to 30,000 consumers unless and until they can produce evidence that their autonomous mode accident rate is lower than that of human drivers...
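
For anyone who wants to check the arithmetic, here is a rough sketch of where those figures come from as I understand them (my own reconstruction, assuming the paper's benchmark of roughly 1.09 human fatalities per 100 million miles and a 100-car fleet averaging 25 mph around the clock):

    import math

    # Failure-free miles needed to show, at 95% confidence, that the AV fatality
    # rate is no worse than the human benchmark (Poisson zero-failure bound).
    human_rate = 1.09e-8                        # fatalities per mile (~1.09 per 100M miles)
    confidence = 0.95
    miles_needed = -math.log(1 - confidence) / human_rate
    print(miles_needed / 1e6)                   # ~275 million miles

    # Hypothetical fleet: 100 cars driving 24/7/365 at an average of 25 mph
    fleet_miles_per_year = 100 * 24 * 365 * 25
    print(miles_needed / fleet_miles_per_year)  # ~12.5 years

The headline numbers are just arithmetic on those assumptions; scale the fleet up and the years come down proportionally, which is where the disagreement above really lies.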

And there's a much bigger problem, in that the software being tested is continually evolving, and modifications which result in improvements under general driving conditions don't necessarily tend towards stable (or monotonically decreasing) numbers of rare accident-inducing bugs.


Ah, and indeed it seems like substantial (>10x) fleet size increases are planned.

https://www.theverge.com/2018/1/30/16948356/waymo-google-fia...


FWIW the paper mentions that Google's autonomous driving fleet currently has 55 cars, so I suspect the 100 fleet size was picked to reflect a significant improvement over a notable competitor in the field.

The article mentions Google's self-driving fleet size but I didn't notice other companies. I wonder what the fleet size is at other companies that are working on this, anyone know?

The reason I ask is that I'm curious if "making more cars" is really the bottleneck. It's totally possible that the Waymo team doesn't have the headcount to keep up with the data output from the existing 55 units, or the fleet logistics bandwidth to keep more than 50 or 60 heavily custom self-driving vehicles running. It may very reasonably be a staffing issue rather than a money one, and it's possible that even if Google bought the rest of the self driving industry they might not be able to maintain more than a couple hundred cars.

Disclaimer: I work for Google, but not on self driving cars. I actually know almost nothing about our self driving efforts, sorry.


>Self-driving vehicles must be driven for hundreds of millions of miles to demonstrate reliability, which would take 10-100 years.

So we can A) accept higher fatalities for convenience, B) demand much better performance from self-driving vehicles than we expect of humans, or C) wait.


If we have a hundred years to burn, how about reorganizing our cities into forms that obviate the need for cars (self-driving or not) in the first place?

Anecdotally, I've heard people make the point that public transit is infeasible in the states because the population density is too low, and people are too spread out.

I'm not sure that it is -- the US Census Bureau stats show the share of Americans living in urban areas growing steadily throughout the history of the country, with 80% of the population currently hailing from urban areas.

We are and have been urbanizing since the birth of the country. I reckon that if we're smart about building infrastructure now, we can remove the problem SDVs are trying to solve in approximately as long as it would take for them to become ubiquitous were we to take no action.


Agreed. If car crashes kill so many in the U.S., wouldn't it make sense to make the country a less car-dependent nation if we want to reduce fatalities? Walking and biking generally won't get people killed, so why not promote that?

I think part of the problem is that self-driving vehicles are marketed as a life-saving technology, when it is the novelty and futurism people are attracted to. (It is indeed an impressive and challenging problem.) The furthest back I can recall a mainstream case being made for autonomous vehicles is Google's effort. This makes me wonder if they had a solution in search of a problem, so they took road deaths in the U.S. to sell the idea. Maybe they were chasing the next iteration of intense computing application, e.g. Deep Blue -> Watson -> self-driving cars.

The 40k+ fatalities per year is a solvable problem without autonomous vehicles: (1) promote equity in other methods of transportation, (2) enhance driver-assistance technology, (3) design roads to be built for safety rather than convenience/speed [0]

Regarding point 2 above, note that Chevrolet has "teen-driver technology" which can set a maximum speed, and provide visual and audible warnings if the (teen) driver exceeds the speed limit [1]. Presumably the vehicle has cameras to read speed limit signs. Why isn't this being applied to all drivers? It seems to be the classic case of "let's blame teens for everyone's bad driving" (juvenoia).

[0] https://www.strongtowns.org/journal/2018/1/25/speed-kills-so...

[1] https://www.chevrolet.com/teen-driver-technology


Right now I’d be thrilled with doing just as well as humans without fudging the numbers. We’re orders of magnitude away from that, so frankly this is all a non-issue. We’re far from demonstrating reliability because they’re far from reliable. Maybe someday they will be, maybe not, but for now we’re playing a dangerous game in the name of a dubious utopia. The most likely outcome is a winter.

Edit: I don’t mind the downvotes, but I wish they came with a rebuttal rather than radio silence.


Pretty sure you're being downvoted for making the statement that we're orders of magnitude away without providing any data to back up your claim. Even with the number of accidents/injuries/deaths caused by self-driving cars, without actual data they at least seem to be doing as well as humans - though that's completely anecdotal on my side.


Fair enough.

https://www.insurancejournal.com/news/west/2018/02/01/479262...

The latest disengagement report showed that Waymo vehicles, in tests conducted from December 2016 through November 2017, on average logged 5,596 miles without Waymo’s safety drivers disengaging the system and retaking the wheel. Its next closest contender was Cruise, owned by General Motors Co, at 1,214 miles on average between disengagements.

And later

Waymo, for example, drove 352,545 miles in the state during the period with only 63 disengagements. Cruise vehicles drove about a third as many miles, at 127,516, and had 105 disengagements.

The third best performance came from Nissan Motor Co, which drove 5,007 miles and had 24 disengagements, meaning that its vehicles had disengagements on average every 208 miles.

The numbers fall off sharply after Nissan, with Baidu Inc at an average rate of every 41 miles, chipmaker Nvidia Corp at 4.6 miles on average, and Mercedes, with disengagements every 1.3 miles on average.

Some further discussion and context:

https://www.scientificamerican.com/article/are-autonomous-ca...

Between mistakes a human wouldn’t make unless impaired, such as accelerating into a crash barrier or into a fire truck, and the high rate of human intervention compared with the human rate of fatal accidents (1.27 per 100 million miles, even including drunks, the elderly and new drivers), the argument for “safer than humans” seems utterly insupportable at this time.
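
Rough back-of-the-envelope arithmetic on those figures (my own, and treating every disengagement as a potential accident, i.e. the worst-case framing suggested upthread):

    # Miles per Waymo disengagement vs. miles per human-driver fatality
    waymo_miles_per_disengagement = 352545 / 63     # ~5,600 miles
    human_miles_per_fatality = 1 / 1.27e-8          # ~79 million miles
    print(human_miles_per_fatality / waymo_miles_per_disengagement)  # ~14,000x

Most disengagements obviously wouldn’t have ended in a fatality, so this overstates the gap, but even a very generous discount leaves orders of magnitude to close.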


That makes sense. Though, with Waymo/Cruise/etc (that are still in the testing phase - at least compared to groups like Uber and Tesla), is there any documentation on why they disengaged?

Personally (again, anecdote, but still) I've seen many, many people who, when brought into a new situation while driving, will begin to panic and ultimately stop in the middle of traffic... often during rush hour. I can imagine that the systems driving these cars could be put into a similar position (remember the bicyclist and the Waymo car?) and need human intervention to continue.


What’s the difference between B and C?


The difference is in how you measure "better than humans." Which also means looking closely at how we measure how "good" a human driver is.

It's the same argument as "what is AI" which inevitably devolves to "must be demonstrably smarter than humans in every way before we consider it to actually be intelligent."


I also suspect that the average human driver is getting worse, not better. We have way more distractions available in the cabin than in the past, mostly due to cell phones. I also notice a lot of younger people are either forgoing driving altogether or getting their licenses much later due to the availability of Uber/Lyft as an alternative, so I would imagine they have less experience.


Totally unsupported by examples, but it seems like the huge economic benefits will cause society to underestimate the risk of autonomous vehicles. We'll debate and distort the relative risk and provide plausible deniability for the right people. In the end, we'll tweak the AI based on real-world failures until it's good enough to win the PR battle. This is assuming it can be tweaked to work. The key will be protecting car manufacturers from lawsuits (or limiting their liability). If AVs are involved in 37,000 fatalities a year (the 2016 human stats), the lawsuits will be untenable. Maybe there's a plan to handle this and I just haven't seen it. Just my opinion...


How do we test human readiness for driving? We give them a driving test, which takes all of 40 minutes. This is possible only because we know a lot about what it means to be a human. To test self-driving technology we need to understand a lot about the AI behind it, which is an active area of research with much progress. Moreover, there is adversarial testing in simulation and on raw sensor data, which can tell you a lot about your system before any large-scale real-world testing begins, including when it might be safe to start real-world testing. Real-world testing at such large scales is neither a practical nor a necessary component of validating self-driving systems.
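
To make that concrete, here is a toy sketch of what adversarial testing in simulation can look like (all names, numbers and interfaces below are hypothetical, not any vendor's actual system): sweep perturbed scenarios and check a safety property, rather than waiting for the rare event to happen on a public road.

    from dataclasses import dataclass
    import itertools

    @dataclass
    class Decision:
        braking_mps2: float   # commanded deceleration, m/s^2

    class NaivePlanner:
        """Stand-in planner (hypothetical): brake just hard enough to stop
        5 m short of where the sensors say the obstacle is, capped at 8 m/s^2."""
        def plan(self, obstacle_distance_m, speed_mps):
            needed = speed_mps ** 2 / (2 * max(obstacle_distance_m - 5.0, 0.1))
            return Decision(braking_mps2=min(needed, 8.0))

    def brakes_in_time(planner, true_distance_m, range_error_m, speed_mps):
        """One simulated scenario: does the car stop before the true obstacle position?"""
        decision = planner.plan(obstacle_distance_m=true_distance_m + range_error_m,
                                speed_mps=speed_mps)
        if decision.braking_mps2 <= 0:
            return False
        stopping_distance_m = speed_mps ** 2 / (2 * decision.braking_mps2)
        return stopping_distance_m < true_distance_m

    def adversarial_sweep(planner):
        # Grid over true obstacle distance, sensor range error and approach speed;
        # a real harness would search these parameters adversarially, not on a coarse grid.
        return [(d, e, v)
                for d, e, v in itertools.product([25.0, 30.0, 40.0],   # metres to obstacle
                                                 [-2.0, 0.0, 2.0],     # sensor range error, metres
                                                 [10.0, 20.0, 30.0])   # approach speed, m/s
                if not brakes_in_time(planner, d, e, v)]

    print(adversarial_sweep(NaivePlanner()))  # failing scenarios: candidates for closed-course testing

None of this replaces real-world miles, but it tells you which failure modes to go looking for before the car ever reaches a public road.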


>We give them a driving test, which takes all of 40 minutes

40 minutes + 16 years to prime the neural net for

- object recognition and classification

- developing a theory of mind so that the NN can anticipate how other NNs will behave

- etc

The NN can't sit the 40-minute test unless it proves it can do successful object recognition and avoidance at low speed (while in bipedal mode), and that it has sufficient fine/gross motor skills to navigate a complex and changing 3D environment, etc.

It's going to take a while before we've got digital systems that can compare.


Take a look at Germany, and how you get a driver's license there. Hint: if you are bad at parallel parking, you will fail the exam.



