The Menace and the Promise of Autonomous Vehicles (longreads.com)
40 points by raleighm 40 days ago | 47 comments



I stopped reading at "Central to AV testing is the 'trolley question'." No, that's not a central problem. If it were, it would be covered in driver's ed and driver testing. It's not. Most accident avoidance is pure braking.

On the evidence, the central question in AV testing is "can we trust the human to take over fast enough when the automation does something bad?" The answer appears to be "no".

He also parrots Uber's whining about California's "regulation" of AV testing. The California DMV autonomous vehicle testing rules mirror those for human driver learner's permits. You have to get a license. You have to show proof of insurance. You have to report accidents. You have to have a licensed driver on board. DMV can pull your license if you have accidents. You can't carry passengers for hire. You can't drive a heavy truck. No surprises there.

DMV does require reports of minor accidents and of disconnects, and they publish that data. That's what irks the companies with bad automatic driving systems - public disclosure that they suck.


No, that's not a central problem. If it was, it would be covered in driver's ed and driver testing. It's not. Most accident avoidance is pure braking.

Absolutely. This sort of hypothetical-spinning kind of mirrors the way concerns about general AI "taking over" mask the problem of dumb AI making disturbingly biased decisions and putting us at risk through the failures of fragile logic.

Autonomous vehicles promise all sorts of paradoxical effects, haven't been demonstrated to be more reliable than humans, etc. A bit of focus on the real near-future could benefit these speculators.


There is a valid trolley question but it's not about how the AV makes decisions.

It's about whether the benefit of developing AVs as fast as possible outweighs the risk of accidents caused by immature AVs.

If AVs can save 1000 lives a year, then going Uber style and risking 100 deaths to speed up development of AVs by even just 2 months might save more lives than trying to avoid deaths entirely.
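
A back-of-the-envelope version of that argument, using only the hypothetical numbers above (illustrative, not a claim about real rates):

    # purely illustrative: hypothetical numbers from this comment
    lives_saved_per_year = 1000
    months_sooner = 2
    deaths_from_rushed_testing = 100

    lives_saved_by_arriving_sooner = lives_saved_per_year * months_sooner / 12  # ~167
    net_lives = lives_saved_by_arriving_sooner - deaths_from_rushed_testing     # ~+67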


The evidence of what actually happens when you "go Uber style" and cause a death as a result strongly suggests that risking 100 deaths will not speed up development of AVs...

Edit: genuinely perplexed by the downvoter who apparently believes the Uber "move fast, break things, suspend testing for at least several months after attracting widespread public opposition" approach has been a net positive for their prospects of commercialising self driving cars in the near future?


Trouble is, our psychology doesn’t work that well. Yes, in that scenario more lives are saved by going in faster, but people react more strongly to the one hundred deaths than to the nine hundred who don’t die.

Heck, throw in the availability heuristic bias and we as a species react about as strongly to the four deaths caused by AI as to the roughly three million caused by humans in the same period.


I would have agreed with you whole-heartedly two months ago: as a cyclist, it’s all too easy to get into a very toxic place in your mind, because so many people are actively trying to kill you after you’ve pointed out they are patently unable to drive a two-ton vehicle and check Twitter at the same time. That, and conversations with drivers unable to comprehend that their stopping distance depends quite simply on their speed, lead to a simple, obvious recommendation: drive slowly, pay attention. And the trolley problem genuinely feels like an insult.

Since then, we have seen what the second-tier self-driving cars (Uber, Tesla) are struggling with: they have to ignore static objects along the road (lamp-posts) and they are not great at telling the difference between those and traffic. You can’t stop every time there is something near the road that doesn’t move, so you have to use the likelihood that said thing is actually static to move forward. That turns into a trade-off between dynamic driving and safety — still not a trolley problem, unless you compare the potential gains of reasonably fast self-driving against the lives of people who look like lamp-posts.

I don’t know why Waymo doesn’t appear to have the same concerns. It might have to do with the extent or quality of their Lidar, or the progress they have made on object recognition, but they indeed do not have an obvious trolley-like problem to solve. If you consider how careful the automated driver has to be, you might see a similar speed-vs-safety trade-off requiring some minimal threshold or ratio. I can’t imagine that is nearly as relevant as good maps and a clear understanding of local driving habits and expectations, but there could be some aspect of it, just because that’s how any optimisation problem is written.


> It's not. Most accident avoidance is pure braking.

This is false. "Accident avoidance" also means being constantly aware of your surroundings and slowing the car down before an incident (like a person suddenly getting in front of your car) might happen. This strategy was actually posed as a question on my driver's exam: what should you do if you see a bunch of kids playing on the sidewalk close to the road? Correct answer: you should slow down, because the kids might suddenly hop in front of you. The same goes for bicyclists.


Slowing down is braking, surely? His point was that there is rarely an accident that recreates the moral dilemmas of the trolley problem, and I think he is right.


> Slowing down is braking surely?

No, it's your last resort if you must slow down. Better drivers learn to coast, down-shift, or tricks such as using the slight uphill grade of a roadway to adjust speed without resorting to braking.

Next time you are driving and have good visibility far ahead in a long line of cars (such as approaching a big hill) watch for the cars with the brake lights coming on constantly. These are the less capable drivers, and once you spot them you will see them making other mistakes. For example, one trick I've learned is to watch for the driver's hand resting on the turn signal stalk in adjacent lanes as I approach. This is a tell that they are intending to cut me off. When I spot it, I can back off to anticipate them, or be more aggressive and accelerate to close the gap they were aiming for.


Semantics. All the actions you listed in your first paragraph can be considered braking maneuvers.


It's not semantics when self-driving technologies rely more on superior reflexes to brake late in response to things suddenly appearing in the roadway, to compensate for an inferior ability to adjust ambient speed through better anticipation of how likely a particular mammal on the pavement is to step out into it.


Animats' point is that if you brake and avoid hitting anyone, there's no trolley problem. How you brake is an important but unrelated question.


I think you're right about the original point, but the wider counter-argument is that human anticipation is important in reducing the instances of "trolley problems" (usually not hitting an object vs. not being rear-ended, rather than the sillier example from philosophy), and that choosing which pedestrians to slow down for and by how much (because slowing for all pedestrians is more likely to cause accidents with other vehicles as well as impairing journey progress, while slowing for no pedestrians may leave you unable to stop in time) is closer to a trolley problem than something that can easily be generalised.

And to respond to another of Animats' original points: humans aren't trained to brake/swerve in a harder, riskier manner for obstacles that are animate, and to not bother risking a stop at all for the unidentified object that appears to be a piece of cloth, because unlike AI, this is not something we need to learn. (I agree with his original contention that pure software error, and human inability to respond to it, is a bigger problem than "trolley problems" with current-generation tech.)


Words matter, and 'brake' or 'braking' have dictionary definitions which contradict your statement. It's an important distinction because experienced drivers know to anticipate problems and slow down before they need to brake. You probably do this already without thinking about it, especially if you drive a manual transmission. Ever drive a speedboat? Same idea, they have no brakes and you learn to look further ahead and anticipate, and allow outside forces to slow you (rather than relying on an internal braking mechanism).


That's not fair. 'Brake' also has dictionary definitions that directly support my statement:

anything that has a slowing or stopping effect.

See http://www.dictionary.com/browse/brake?s=t

The comment I replied to by tombrossman used the terms coast and down-shift, which are both commonly referred to as engine braking [1].

1. https://en.wikipedia.org/wiki/Engine_braking


> Slowing down is braking surely

Slowing down as a result of a premeditated thought (does that kid play with a ball which might become loose? is that kid accompanied by his parents? is that kid really a kid or just a small adult person?) is more than mere braking, that's what I was trying to say.


Maybe start worrying about the problem after they’ve sorted out the not-driving-into-medians problem.


It would be much easier to countenance arguments that self driving cars will save us all if humans weren't such good drivers.

http://www.iihs.org/iihs/topics/t/general-statistics/fatalit...

In Massachusetts, the number of deaths per 100 million miles driven is 0.66. Those are the massholes. By contrast, it was reported that Uber required human intervention once every thirteen miles, while Waymo required human intervention once every 5,000+ miles. Those figures are not directly comparable to fatality statistics, because the interventions are also for things like avoiding disturbing events like hard braking, but they are still illustrative.

https://arstechnica.com/cars/2018/03/leaked-data-suggests-ub...

Let those orders of magnitude sink in. The humans are driving in all weather conditions all over the country while self-driving cars are in (generally) sunny relatively controlled areas.
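
A rough sense of that gap, using the figures above (an intervention is not a fatality, so this only illustrates the scale, nothing more):

    # back-of-the-envelope scale comparison; interventions are NOT fatalities
    human_fatalities_per_100M_miles = 0.66        # Massachusetts, IIHS figure above
    miles_per_human_fatality = 100e6 / human_fatalities_per_100M_miles  # ~151 million miles

    miles_per_waymo_intervention = 5000           # approximate figure cited above
    miles_per_uber_intervention = 13              # leaked figure cited above

    print(miles_per_human_fatality / miles_per_waymo_intervention)  # ~30,000x
    print(miles_per_human_fatality / miles_per_uber_intervention)   # ~11.7 million x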

Personally, I have hope that we can eventually beat humans, but these kinds of statistics call for a careful, deliberative introduction of this technology.


So you think it's reasonable to compare the rates that humans not only get into crashes, but actually die (or kill) in crashes - to the rates that a human takes over for an early alpha tech version of self driving vehicles?

Your stats are not even remotely "illustrative" of anything other than what they say of the presenter.


This would be academic if someone hadn't actually died. I don't know the numbers, but I'm fairly sure that just tanked the self-driving stats pretty hard.

EDIT: Also, consider what avoiding "hard braking", which was a large reason for human interventions iirc, actually means. It means either that the computer jams on the brakes hard for no reason or that it only realized it needed to brake substantially after a human would have realized the necessity.

EDIT2: Uber had driven 3 million miles at the time of the death, according to the article below. A single incident does not statistics make, but based on the NTSB analysis so far, that version of the software made a systematic error. That puts Uber's score at a very handwavy 33.33 per 100 million miles. I'm sure the number will change with software revisions and just from random chance. https://www.nytimes.com/2018/03/19/technology/uber-driverles...
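
The handwavy arithmetic, for what it's worth:

    # very rough single-event rate; it will swing wildly with more miles or incidents
    uber_autonomous_miles = 3_000_000   # at the time of the fatality, per the NYT article above
    fatalities = 1
    rate_per_100M_miles = fatalities / uber_autonomous_miles * 100e6  # ~33.3
    # vs. ~0.66 per 100 million miles for Massachusetts human drivers cited upthread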

EDIT3: I should mention that I was initially hesitant to compute this "statistic" because it's so raw and will wildly fluctuate. Interventions are more solid statistically. I don't think it's crazy for someone to look at a heavy box of metal that requires constant supervision to do the right thing and conclude in the absence of (appropriately) extensive testing that it's dangerous.


And now you're just making up stuff. I'm shocked... Disengagements have absolutely nothing to do with "avoiding hard braking". Disengagement numbers and analysis are completely open; you can find them here [1], along with the reasons, miles driven, etc. A disengagement is defined as "a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle."

So for instance if the software is providing a buggy output and the car is stopped for debugging, recalibration, etc., that counts as a disengagement. And that is indeed, by far, the most common occurrence. It's extremely rare for a disengagement to actually be avoiding a crash - let alone your fixation with "hard braking." Some companies, such as Nvidia, specifically clarify that literally 0 of their disengagements (in their case out of 109) had anything to do with extraneous conditions such as road surface, construction, emergencies, etc.

[1] - https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/disen...


Hard braking was discussed in a news article I read some time ago about Uber specifically. Your attitude is not helpful. The list you linked does not include Uber, though it does include many others. Thanks for the link.

EDIT: I'm looking at Waymo's totals, whom I consider to be a safer player than Uber. The data totals come from your link on their page 2 Table 2: Disengagements by Cause. The time period spans Dec. 2016 to Nov. 2017.

    Disengage for a recklessly behaving road user: 1  
    Disengage for hardware discrepancy: 13  
    Disengage for unwanted maneuver of the vehicle: 19  
    Disengage for a perception discrepancy: 16  
    Disengage for incorrect behavior prediction of other traffic participants: 5  
    Disengage for a software discrepancy: 9


News articles on this topic tend to be less than worthless - an increasingly typical state of affairs. In this case the reason is quite direct: people are scared of revolutionary technology, and especially self driving vehicles -- and nothing gets clicks like fear mongering. Don't believe everything you read, let alone parrot it.

You'll also notice the paragraph in Waymo's report, which is typical: "Note that, while we have used, where applicable, the causes mentioned in the DMV rule (weather conditions, road surface conditions, construction, emergencies, accidents or collisions), those causes were infrequent in our experience. Far more frequent were the additional causes we have labeled as unwanted maneuver, perception discrepancy, software discrepancy, hardware discrepancy, incorrect behavior prediction, or other road users behaving recklessly."

In other words, they ended up disengaging because of "emergencies" - which is what your imminent hard braking would fall under - exactly 0 times.


All the self-driving car companies put together haven't driven even close to the number of miles over which an average human driver would have had a single fatal accident, and yet self-driving cars have had multiple fatal accidents.

The sample size is obviously too small to be certain, but that does not look good.


> What does it mean to experiment with technology that we know will kill people, even if it could save lives?

This tagline ignores the fact that every new technology ever invented has killed people. Machines fail. Buildings collapse. Bras and vending machines and popcorn and fire hydrants all kill people. The simple passage of time on large projects kills people.

People are going to die no matter what. If we can reduce the body count then that's a win.


Your remark has led me to this article: https://www.standard.co.uk/news/death-by-bra-7225474.html

Thank you :)


We absolutely should stop research into chemo- and radio-therapies as cancer treatments because they may kill the patient.

Is this article a satire?


Sorry. Cars have slightly more potential to kill than bras and vending machines and people dying of natural causes.


Heart disease, cancer and chronic lower respiratory diseases are the top 3 leading causes of death in the USA, all regarded to be natural causes. Accidents are #4, and car crashes are included in that.


And what if we restrict to the age group that uses cars the most, like people in middle age driving their family around? I would guess most of the people in that car are not that likely to die from #1, #2, or #3 anytime soon.


Yeah, cars are best for killing (and crippling) people in the prime of their lives.


I recently read this paper (http://sjha8.web.engr.illinois.edu/resources/DSN2018.pdf) from the dependable systems conference. Seems like AVs are nowhere near ready.

This is not just an Uber problem - it is pervasive across most manufacturers. The rate at which these AVs are getting better essentially means we won't have "human level" cars for a long time.


I have no idea if AVs are near ready, but that analysis seems a bit shallow. All the dataset gives them is the number of disengagements and the miles driven each month, but not all miles are necessarily the same; for example, it makes sense that Waymo et al. would start taking their cars to streets with more traffic/people/etc. as they get more confident in the system's ability to navigate in easier areas.


Great reference, thanks for sharing. This type of systems-level analysis is what I've been looking for (and I think it's what all the AV hype is ignoring).


America's car death rate is far higher than most rich countries. We could save 20,000 lives per year just by following best practices, but nobody gives a shit, so let's all fantasize about magic cars.


Agreed, autonomous vehicles might offer some benefits, including road safety, in the very long run; however, we could do so much better right now if laws were enforced and people actually followed them or simply cared. The system is designed with plenty of buffer in it; it is just the nature of most drivers to disregard speed limits and safe trailing distances.

As a consequence imo it is misguided to measure AVs against average drivers, because more average drivers on the road is exactly what I would rather not have.


The article is claiming that the development of self-driving cars will require killing people on public roads. That's a tantalizing prospect ("kill thousands now to save millions later"), but no evidence or arguments are presented why. Sure, there will be deaths due to road accidents, but is it really inevitable that more people will die due to failures of engineering like Uber's or Tesla's?


Utopias are always justified this way, and the worst crimes are committed as a result. After all, if a potentially limitless number of lives are saved in the future, you can argue for sacrificing millions today. Somehow people never seem to understand that nothing real or worthwhile is down that road, and as a species we go down it repeatedly.

I think the best argument against this is to demand that the person willing to let many die is the first to die.


"I think the best argument against this, is to demand that the person willing to let many die is the first to die. "

That's why I am saying that all road testing of autonomous cars should take place in the neighborhoods where the executives and their children live and are on the road. That would align the incentives of the company and the public.


> It will take billions of miles—and some unknown number of people killed—to gauge whether, by a statistically significant margin, AVs are safer than human-driven cars.

This kind of lumps all the AVs together. There is evidence, though, that Waymos are already safer than human drivers - from 2012:

> Google announced Tuesday that its self-driving cars have completed 300,000 miles of test-drives, under a "wide range of conditions," all without any kind of accident. (The project has seen a few accidents in the past — but only with humans at the wheel.)

> To put that into perspective, the average U.S. driver has one accident roughly every 165,000 miles. https://mashable.com/2012/08/07/google-driverless-cars-safer...

Since then I think they have logged about 6 million miles and had one semi-at-fault accident, when their car thought a bus would give way but it didn't and hit them. I don't think Waymo would be looking to order 62,000 vans if they were not confident they would be safer than human drivers.

They have also had far more accidents with humans at fault crashing into stationary Waymos than the other way around.

On the other hand, Uber's crash seems like a reckless effort to try to catch up by cutting corners, and Tesla's "it's your fault if you didn't look for a few seconds" attitude seems iffy.


Over $100 billion has been invested in this as-yet-unproven technology; there is no historical analogue for that. This money has been invested either because the players involved stand to make a lot of money, or else because they stand to lose a lot of money if they don't invest in the technology. More so the latter than the former, really.

Plenty of lip service is paid to the potential safety benefits, along with a laundry list of other social benefits, but really it's about money.

By logical extension, autonomous vehicles have to be safer than human drivers, or they just won't fly. Too many accidents invites regulatory scrutiny, time consuming safety audits, and worst of all, public backlash.

If you look at the players across the industry they can be broken into two distinct categories: the offensive players, and the defensive players.

The offensive players: namely Waymo, but also Cruise, Aurora Innovation, Drive.ai, Argo.ai, Zoox and nuTonomy. Their safety records are impeccable.

But there are two prominent defensive players who both would be better off if AVs were never a thing: Tesla and Uber. For them, screwing it up for everyone is a net benefit to their business models.

Now, if we were actually concerned about safety, we wouldn't be driving, we'd be riding bicycles (or walking or taking the train/bus), and our cities would more closely resemble what they were in the 1930s or earlier. But there's no money in bikes, and everybody hates them.


One of the most dangerous states of mind is to be pursuing something that you are convinced will save people's lives in the future, but that might hurt people now. This type of thinking is unrivaled in causing previously normal people to do horrendously bad stuff. Add to this the potential for profits, and it becomes a very toxic mix, where a person can be killing people while trying to make a profit, all while fully convinced that they are a very good person doing humanity a great service.

We have historical experience with an analogous situation. Medical experiments and drug development. People became so focused on "the greater good for humanity" that they did horrible stuff to study participants (see Tuskegee Syphilis Study). Or drugs were not adequately assessed for safety in pregnant patients and severe birth defects occurred (see thalidomide). This is one of the reasons we have Institutional Review Boards to review medical studies and the FDA to review drugs, because people involved in a project like this are so good at fooling themselves.

When developing drugs, pharmaceutical companies aren't allowed to say "This drug will cure X and benefit so many people. We would like to ignore the side effects happening now because the cure will be so beneficial in the future."

We need the equivalent of an Institutional Review Board/FDA for self driving cars.

As part of the submission, the engineers have to document what sensors they are using, what the limitations of those sensors are, what algorithms/machine learning techniques they are using, and the limitations of those. They also need to disclose the engineering tradeoffs and design decisions (for example, not having LIDAR).

Then there need to be tests for self-driving cars for safety on the unhappy path.

What happens if there is a concrete barrier in front of you?

What happens if the lane markers are bad?

What happens if the car in front of you brakes?

What happens if you are driving with the sun facing you?

What happens if the truck in front of you drops something in the road?

What happens if a child darts in front of you?

And so on...

Then based on the testing, certify the car for self-driving only in certain conditions that were shown to be safe based on the independent testing.
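
Purely as an illustration, here is a sketch of what a machine-readable certification matrix along those lines could look like (all scenario names, fields and thresholds are invented for this example, not any real regulator's format):

    # hypothetical sketch only; scenario names, fields and thresholds are invented
    unhappy_path_scenarios = [
        {"name": "stationary_barrier_ahead",  "max_speed_kph": 100, "required": "stop or safely avoid"},
        {"name": "degraded_lane_markings",    "max_speed_kph": 80,  "required": "hold lane or hand over"},
        {"name": "lead_vehicle_hard_braking", "max_speed_kph": 120, "required": "stop without collision"},
        {"name": "low_sun_toward_sensors",    "max_speed_kph": 60,  "required": "maintain perception or hand over"},
        {"name": "debris_dropped_by_truck",   "max_speed_kph": 100, "required": "avoid or stop"},
        {"name": "child_darts_into_road",     "max_speed_kph": 50,  "required": "stop within sight distance"},
    ]

    def certified_conditions(results):
        """Certify only the conditions the vehicle actually passed in independent testing."""
        return [s["name"] for s in unhappy_path_scenarios if results.get(s["name"], False)]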

After a self driving car is on the road, all data collected by the self driving cars should be shared with the review board so that there can be ongoing monitoring. In addition, users of those cars should be able to report incidents and near misses to the review board.


You know, I'm not in this field, but this is stuff that I assume would/should already be done BEFORE even putting it out on the market.


Then why did the multiple deaths caused by Uber and autopilot happen?


USA motor vehicle fatality rate is 7.1 deaths per billion vehicle-km. Hard to test that without going outside onto real roads with real drivers. (Tesla fatality rate at time of third and most recent fatality: 14.3 deaths per billion autopilot-km, which is significantly worse than I had realised).
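
Taking the figures in this comment at face value (and see the correction below):

    # figures as stated above; back-of-the-envelope only
    usa_deaths_per_billion_vehicle_km = 7.1
    tesla_deaths_per_billion_autopilot_km = 14.3
    print(tesla_deaths_per_billion_autopilot_km / usa_deaths_per_billion_vehicle_km)  # ~2x

    # implied autopilot mileage behind that rate, assuming 3 fatalities at the time:
    implied_autopilot_km = 3 / (tesla_deaths_per_billion_autopilot_km / 1e9)  # ~210 million km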


Note for anyone reading this in an archive: this was based on information which I later learned was incorrect. The number of autopilot-km driven was not measured at the time of the 3rd death, but at either the 1st or 2nd, depending on whether the autopilot was involved in the incident of 29th January 2016.


I'm not paying through the nose to be a Musk guinea pig, nor am I going to walk the streets of Pittsburgh trying to avoid having an Uber car run me over.

Progress will be a killer!



