Safety Numbers on Autonomous Cars Don't Add Up (bloomberg.com)
46 points by lemiant on Feb 5, 2018 | 46 comments



"Because the Cruise cars were still pondering their next move when the driver took over, these incidents apparently do not constitute failures that must be reported to the DMV. Though a neat piece of legalism, this logic can't help but make one wonder how long a vehicle can remain motionless on public roads without it constituting a failure of the autonomous technology."

I know! We'll just add a program that checks whether or not the AI would ever stop and come to a decision.


You could say that there is an obvious Halting Problem which needs solving.


You could even hire a freelancer to do it. I guarantee you will find someone that can promise to do it at a great price.


If nothing else, this article provided a link to the California vehicle code that covers autonomous vehicles. That is an interesting read.

Back to the article: they point out that impatient overrides by the test driver are apparently not being counted as disengagements and do not appear in the reports to the DMV. The author tries to make this into a safety concern, but it just isn't. Maybe it is against the intent of the code as written, but maybe not, since it seems to be concerned with true safety violations.

The code: https://www.dmv.ca.gov/portal/wcm/connect/d48f347b-8815-458e...


> impatient overrides

It's not impatient. It's the car having no idea what to do next, so it just sits there waiting for something to happen.

I count that as a failure on the path to autonomous driving. Remember: those 1% scenarios are the hardest part, yet they are critical. You can't just say "99% is good enough" - after all, humans drive perfectly 99.999% of the time (measured by time on the road, assuming 5 minutes of bad driving per accident - which is probably generous, since the error leading to the accident probably took less time than that), yet that's not good enough.
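A rough back-of-the-envelope sketch of where a figure like that comes from (the hours per year, accident interval, and minutes per accident below are all assumptions chosen for illustration, not measured data):

    # Back-of-the-envelope: what fraction of driving time is "bad driving"?
    # All inputs are illustrative assumptions.
    HOURS_DRIVEN_PER_YEAR = 300      # assumed time behind the wheel per year
    YEARS_PER_ACCIDENT = 18          # roughly one claim every ~18 years
    BAD_MINUTES_PER_ACCIDENT = 5     # assumed duration of the error behind a crash

    minutes_between_accidents = YEARS_PER_ACCIDENT * HOURS_DRIVEN_PER_YEAR * 60
    good_fraction = 1 - BAD_MINUTES_PER_ACCIDENT / minutes_between_accidents
    print(f"Time spent driving well: {good_fraction:.5%}")  # ~99.998%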


We need to distinguish between the two.

For any kind of self-driving vehicle, making a silly mistake and needing a human to override it to prevent a collision is not OK at any significant frequency. I'd guess the maximum acceptable rate would be no more than one collision in multiple years of driving.

For a self-driving private vehicle with manual controls, pulling over and saying "help me, human!" once every few months is perfectly acceptable. For a standalone self-driving taxi with no manual controls, it's not. For a self-driving fleet taxi with a remote operator able to take over and help it when it gets stuck, it may be.


> We need to distinguish between the two.

More than that, you need to distinguish between accidents and fatalities and then you need to put them both against the number of miles driven. In the US we're currently at about 36,000 fatalities per year with about 3 trillion miles driven.

> I'd guess the maximum acceptable rate would be no more than one collision in multiple years of driving.

Again, a terrible metric. The rate versus the number of miles driven _while in autonomous mode_ is what you really want to consider, and anything worse than the current numbers is absolutely unacceptable.
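To make the per-mile comparison concrete, here is a minimal sketch using the figures above; the autonomous-mode numbers are pure placeholders, since (per the article) we don't have trustworthy ones:

    # Human baseline: ~36,000 fatalities over ~3 trillion miles per year.
    human_rate = 36_000 / 3e12          # fatalities per mile
    print(f"Human baseline: {human_rate:.1e} fatalities/mile "
          f"(about 1 per {1 / human_rate / 1e6:.0f} million miles)")

    # Hypothetical fleet, purely for illustration: any rate worse than the
    # baseline is unacceptable by the standard described above.
    av_rate = 1 / 50e6                  # placeholder: 1 fatality in 50M autonomous miles
    print("AV fleet:", "worse" if av_rate > human_rate else "better", "than baseline")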

> For a self-driving fleet taxi with a remote operator able to take over and help it when it gets stuck, it may be.

I'm not sure that's better, or an acceptable solution. Remotely controlled, how? Do we put limits on that system? Are limits always a good idea given the context it's expected to be used in? How long between alert and remote takeover? What's the expected latency of the drive controls, given that light actually moves pretty slowly relative to the planet's size? And then, how do we secure any of that?

Either the system works better than humans, or it shouldn't be deployed. Limited trials, a good grip on the numbers and a strong regulator are what's needed.


> no more than one collision in multiple years of driving

That rate is really high; it's worse than what people do (on average a person will probably have an accident every 18 years, and almost never a fatal one).

And that's with a person watching? That's really bad.

> once every few months is perfectly acceptable

Not really. The people inside it will have no driving practice (or perhaps even the ability to drive). Such a car is not really useful; you'll just make things more dangerous, not less.

The bar for self driving cars is so incredibly high, I doubt you'll see them on regular (non-instrumented, restricted) city streets until we have true AI.

Most likely there will be train-like intercity roads, dedicated (and instrumented) for self driving cars and trucks.

But unrestricted city driving? Very unlikely to ever happen.

Despite how high the accident rate is, people are actually really really good at driving. And computers are really really bad at being reliable.

When you can have a complex computer program that simply manages 99.9999% uptime, then we can start to talk about self-driving cars. We don't even have that; self-driving cars are an impossible dream.

And that 99.9999% number is not just random, it's how good you have to be to beat a human at driving.
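For what it's worth, a figure in that ballpark falls out of the crash-interval numbers mentioned elsewhere in the thread; a sketch with assumed inputs, for illustration only:

    # How many "nines" of per-mile reliability does an average human manage?
    MILES_PER_YEAR = 13_500          # assumed annual mileage
    YEARS_PER_CRASH = 18             # roughly one crash/claim every ~18 years

    miles_per_crash = MILES_PER_YEAR * YEARS_PER_CRASH
    per_mile_reliability = 1 - 1 / miles_per_crash
    print(f"{miles_per_crash:,} miles per crash -> {per_mile_reliability:.4%} per mile")
    # -> about 99.9996% per mile, the kind of figure being gestured at above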


Hmmm... I think I read the same insurance statistic and the number was 17.9 years, which is close enough. The concerning bit was that the number came from insurance claims, not accidents, so accidents are probably under-reported, because nobody wants their insurance to go up! As a result, many parking-lot bumper dings and minor side swipes of other cars, bicycles, and pedestrians probably aren't reported.

Otherwise, judging from the bumpers of most cars in NY/SF... I would guess that each person would have filed hundreds of claims.


What about self-driving trucks on highways?


>Despite how high the accident rate is, people are actually really really good at driving. And computers are really really bad at being reliable.

Human swarm behavior is terrible (that's your rush hour traffic for you). And computers are very very reliable. The only question is whether they can be programmed in a way to handle enough edge cases to be feasible.

I'm with you in thinking that for the next 20 years we'll see a lot of automation on highways but not in cities. On highways there's much less variability and less erratic human driving. For me that's already 99% of what I would buy a self-driving car for. I'm happy driving short trips through the city. I hate driving home for 2-3 hours and not being able to do anything else.


I agree with the idea that an override because the car has been stopped for too long is less dangerous than a more active sort of override. But I think it is still a safety issue.

I'm mostly thinking of when we take the human out of the equation completely: if the car is stopped indefinitely that is going to cause all sorts of problems. If it's blocking a city street you're creating traffic as well as potentially slowing down first responders. Being stopped on a highway is very much a safety hazard.

Even with a human driver to take control there are places where coming to a stop would be a very bad thing (an on or off ramp for example).


> if the car is stopped indefinitely that is going to cause all sorts of problems.

Your comment is true, but consider the context. There was a vehicle already blocking the lane that caused the autonomous car to stop. The autonomous car did not block an unobstructed lane, the lane was already blocked.


I've seen Cruise's cars braking for no reason, including once at an intersection (there was nothing in front of the car).

I couldn't tell if it figured out on its own that it was safe to proceed, or if someone took over. It's also possible that the human drivers were doing the braking, but if that's the case they clearly need better safety drivers.


I watched a Cruise driver sit through multiple opportunities to make a left against traffic, until the light turned red and the traffic cleared out completely, basically wasting the green light. It was rather infuriating to watch. I joked to my girlfriend, who also watched, that we are all screwed if the Valley can't get its act together to hire really top-notch drivers to train driverless cars. We don't need roads full of driverless cars that don't know their own power.


The main thrust of the article is that the disengagement numbers are not a good judge of which AI is better, as the companies have too much leeway to fudge the numbers, since they are self-reported.


There is definitely something that doesn't add up between the recent surge in pessimism about the progress on self-driving cars and the fact that Waymo is about to start taking passengers without backup drivers. Either they are recklessly competitive or confident that the cars can handle any situations they might run into in the area they're releasing them. If it were some independent startup with less to lose I'd lean more toward the former, but they're not.


> Either they are recklessly competitive or confident that the cars can handle any situations they might run into in the area they're releasing them.

I don't think these are the only two possibilities. A third might be that they're confident that if a car finds a situation it can't handle, it'll gracefully degrade. If 1 out of 10,000 rides ends in a crash, that's a problem. If 1 out of 10,000 rides ends in needing to call a human driver for a pickup, it's not.
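As a hypothetical sketch of what that graceful degradation might look like (the states, signals, and thresholds below are invented for illustration, not anyone's actual design):

    from enum import Enum, auto

    class Mode(Enum):
        DRIVING = auto()
        MINIMAL_RISK = auto()   # pull over, hazards on
        AWAIT_ASSIST = auto()   # request a remote operator or pickup driver

    def next_mode(confidence: float, seconds_stuck: float) -> Mode:
        """Toy fallback policy: degrade to a safe stop instead of guessing."""
        if confidence < 0.5:        # unsure about the scene: don't attempt a maneuver
            return Mode.MINIMAL_RISK
        if seconds_stuck > 60:      # motionless too long: hand off to a human
            return Mode.AWAIT_ASSIST
        return Mode.DRIVING

    print(next_mode(confidence=0.3, seconds_stuck=5))    # Mode.MINIMAL_RISK
    print(next_mode(confidence=0.9, seconds_stuck=120))  # Mode.AWAIT_ASSIST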


Hard to say. Waymo has way more miles than everyone else, but the layman's observation is "yeah, but in environments you control with an iron hand and travel over and over on the same rough, predictable routes, albeit with your very rich suite of data points."

A bazillion miles on the same track isn't necessarily as impressive as an order of magnitude fewer miles on varied, uncontrolled/unmapped, snowy, icy, narrow, unlit, flooded, unplanned, etc., routes.

So, how do I compare Google's/Waymo's vast number of sensor rich planned miles versus Tesla's much lower, much less rich (no lidar) but much more diverse and real-world miles? And everyone else's (Uber/Lyft) in-between play?

It's pretty unclear, for a layperson, right now, who is ahead of the pack.


Do we need driverless cars to handle all the corner cases? I'm starting to think this is just the wrong way to look at it. Most of the time we commute home it's just the same dumb stuff.

We can make an analogy to AI-assisted StarCraft here. What if, instead of having a fully automated AI, you could train your car to take you home by driving your route a couple of times? Kind of like how you might train the computer to go harass your opponent's expansion and then just call up that subroutine every time it's apropos (still waiting on Blizzard to make this game).

Don’t try so hard to replace the human. Make the human less busy, more powerful.
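A toy illustration of the record-and-replay idea (every name and data structure here is invented for illustration; a real system would obviously need far more than waypoints):

    from dataclasses import dataclass

    @dataclass
    class Waypoint:
        lat: float
        lon: float
        speed_mph: float

    # "Training": record waypoints while the human drives the commute a few times.
    def record_demo(gps_samples):
        return [Waypoint(lat, lon, speed) for lat, lon, speed in gps_samples]

    # "Replay": call up the stored subroutine, but hand back control for
    # anything the recording didn't cover.
    def drive_home(route, obstacle_detected):
        for wp in route:
            if obstacle_detected(wp):
                return "handing control back to the human"
        return "arrived"

    route = record_demo([(37.77, -122.42, 25.0), (37.78, -122.41, 30.0)])
    print(drive_home(route, obstacle_detected=lambda wp: False))  # arrived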


The open source Spring RTS interface allows for player-made AIs to execute inside the game engine. You can play alongside your bots (and turn certain behavior off/on with the interface).


Interesting perspective, and your view seems to favor Tesla. Honestly, I just don't know, and expert opinions seem to vary right now. It's hard to remember a time when it was harder to see the ultimate winner in such an important space.


The goal is a car that can drive itself, empty or without a licensed passenger on board; that is what has disruptive potential. Being able to do the whole job in a limited geo-fenced area is better than being able to do half the job anywhere. There are hypothetical safety benefits to partially autonomous vehicles, but that's not a game-changer; it won't transform mobility.

Waymo was doing partial autonomy pretty well back in 2012, before self driving cars were even a twinkle in Elon's eye, and before anyone had ever taken seriously the notion of utilizing deep learning in autonomous vehicles. The real challenge is in fully validating the safety and reliability of these systems to the extent that a commercial robotaxi service becomes feasible.

I recall having conversations back in 2012 wondering what would happen if these things got pretty good: reliable enough that they could be counted on to be safe, but still subject to getting hung up or confused in any of the myriad situations drivers can get into where some sense of contextual awareness, creativity, and higher-level reasoning is required. There were big debates about whether remote control or remote guidance would be viable as a solution for tricky situations. It turns out that both Waymo and GM are doing this. I'm not sure yet what the ratio of remote monitors to operational vehicles is expected to be for initial pilot deployments. It could be 1:100, or 1:10, or something else.

What's fascinating is that analysts have estimated that about $100 billion (not counting China) has been invested in the emerging self-driving car industrial ecosystem, which includes software development, chips, sensors, fleet management services, mapping, logistics and everything else. I don't think there's a historical analogue for something like that, where so much effort and value has been placed in an as-yet-unproven technology.

For me the interesting thing is that it's all playing out according to a script (it's needed some revision, but not much) that I wrote as a thought experiment in 2010, just by asking myself "Well, what would happen if this technology actually came to fruition?" I've been following the development of autonomous vehicles since the DARPA days, but at that time I never took it very seriously as something that might actually work in the real world; it was just a neat science experiment.


> Waymo has way more

Well-put!


I didn't notice the alliteration at first; it was accidental. But thanks anyway :)


One thing I find healthy is that many companies beyond Tesla are trying to build this technology. Even though the story thinks the data may be suspect at this point, the competition from a variety of sources gives us a good chance of some breakthroughs occurring.


In theory, sure; in practice, it's a little disconcerting.

I've worked in software since the '90s, and I've seen how the sausage is made at all kinds of companies big and small. The theory of autonomous cars is a bunch of really smart folks hand-crafting history's finest software. And there may be some cases where that won't be too far from the truth. But the reality on the street is going to be a zillion different competitors cutting every corner, skirting every regulation they can get away with, and just shitting out the worst "move fast and break things" hackathon bullshit code that "sort of seems to work, most of the time" that they can manage. I know how devs and product managers think about testing and quality in the absence of dedicated and rigorous QA standards and infrastructure; in the context of life-critical systems, that is frightening.

Ask yourself: do you want to put your life in the hands of a code base where some pimple-faced learned-to-code-in-10-days bootcamp graduate just "fixed a bug" in the drive software by ctrl-c-ctrl-v'ing from Stack Overflow and then pushed to master? Because that is going to be the reality, not the ivory tower "well, if they did it the RIGHT way" fantasy that people have in their heads. The only way we'll get the "right" way of autonomous software development is if there is extensive and careful regulation with very rigorous auditing and process requirements. And we are nowhere near that right now.


I think most big players (like Google) are conscious that 1) there is a lot of money to be made, and 2) a few deaths in a row will scare the public and the regulators.

So they are not cutting corners. I'd be more concerned about the companies playing catch-up, or the cheap clones that will try to cut out costly sensors like lidar and skip the long and rigorous process required to develop this correctly.


Precisely. And once there is the perception of a gold rush and that others (namely Google et al.) are already way ahead, then what?


I'm guessing that Google has deployed Level 4 built on legacy technology: a very large codebase, a strong dependency on well-serviced lidar, a dependency on accurate calibration, a dependency on real-time high-fidelity maps. A solution scalable to well-managed fleets in good neighborhoods. If this is the case, the others still have a chance at getting to Level 5 in the same time frame as Google, if they play the technology the right way.


> The only way we'll get the "right" way of autonomous software development is if there is extensive and careful regulation with very rigorous auditing and process requirements.

What makes you think the NHTSA has the expertise or resources to carry this out? The proof will be in the empirical results (accidents/fatalities per mile), not super duper code audits.


>The only way we'll get the "right" way of autonomous software development is if there is extensive and careful regulation with very rigorous auditing and process requirements.

From what I've seen that doesn't really make a difference either. Companies will follow the regulation on paper, but not in spirit. That's already happening in other heavily regulated areas of software development. My experience has actually been that heavy regulation makes for worse software, because the company starts spending more time on lawyers and "quality engineers" driving up meaningless metrics than on maintainable, clean code.


So basically, it's too late.

Maybe regulators should have been preemptively putting barriers in front of commercializing (then) sci-fi-level tech. When there's no competitive pressure and no rush to be first to market, when it's just university people on DARPA funding developing the tech, you can afford to have it done The Right Way. Now that people are excited and building companies around self-driving, we'll only get Worse is Better.


> Because that is going to be the reality, not the ivory tower "well, if they did it the RIGHT way" fantasy that people have in their heads.

And if someone doubts that, remember that's what always happened in computing. That's how Lisp Machines and Smalltalk systems lost to UNIX. That's how we got C and JavaScript. Worse is better. The rule of computing under competitive, market-driven environments.

I dread to see it applied to life-critical systems.


> the reality on the street is going to be a zillion different competitors cutting every corner, skirting every regulation they can get away with, and just shitting out the worst "move fast and break things" hackathon bullshit code they can get away with that "sort of seems to work, most of the time"

Humans are terrible drivers. A half crap autonomous car might still be safer than the status quo. In any case, whether talking about consumer goods or services, this is a space markets work in. Calling for rules to be written before we fully understand the problem is a recipe for overregulation.


> Calling for rules to be written before we fully understand the problem is a recipe for overregulation.

I think s/he outlined, at least in essence, one of the (many) problems. This is an area where rigorous QA, test, re-testing, and re-re-testing should be paramount above all else -- including profits, at least at first until we get the software down pat. Do you want to trust your children's lives to some half-baked "ship it!" product? I don't.


> This is an area where rigorous QA, test, re-testing, and re-re-testing should be paramount above all

Great, when are we going to do this for people?


> Humans are terrible drivers.

Compared to what? We have some evidence that computers can do better in tightly controlled scenarios (limited access freeways) but that's always been low hanging fruit.

I think humans are pretty good drivers, actually. How many 2 lane undivided roads do people go zipping down all day, every day, with very few accidents? A ton. We're also very adaptable and good at handling outlier situations.


Sober, rested humans are great drivers.

The problem is that we are also a bit too good at creating outlier situations (stoned, drunk, exhausted, angry at X, Y, or Z).


Some just aren't good drivers even when sober and rested - I've met many. And they let anyone with a pulse get a driver's license; even if you don't have one, you're free to operate a motor vehicle all you want - nobody stops you from turning the key.


Sober, rested, focused humans.

We're also bad at maintaining constant focus for longer periods of time.


> Compared to what?

Compared to the potential machine driver. Yeah, one may say it's comparing reality to a dream. But it's a fact that humans have a ceiling on reaction time, focus and input processing, and today's machines all operate at levels much, much above that ceiling. That machines can be made to drive more safely than humans is an extrapolation made from pretty simple logical steps.


OK. So what? The key word here is "might": it might be safer than the status quo. We should ensure that we are measuring carefully and are confident in our results (which also requires being confident that the systems are well behaved and well understood, and thus that the testing is valid), so that if they do show they are safer than humans on average, it will be beneficial to move to autonomous driving as much as is practicable. And if, on a case-by-case basis, that is not true, then we can avoid killing more people unnecessarily just because we want to save some money on development costs.


Do you think self-driving car[0] manufacturers should be allowed to compete on safety, in the sense that if one comes up with some sort of software "seatbelt", should that company be allowed to prevent others from using it? I should go look up how actual seatbelts and airbags became standard in (almost) all cars...

[0] Can we please call them "auto-autos"?


In the case of the 3-point seatbelt, a Volvo engineer invented it and Volvo received the patent, then made it available to other manufacturers at no cost.[0] While that is a great precedent I don't believe it's any kind of requirement.

[0] https://en.wikipedia.org/wiki/Seat_belt


Making it a requirement would be a double-edged sword -- on the one hand, companies would share safety features, but on the other hand, they'd have much less incentive to develop new safety features if they had to give them to competitors at no cost.

Perhaps some sort of compulsory licensing would be better.





