On the evidence, the central question in AV testing is "can we trust the human to take over fast enough when the automation does something bad?" The answer appears to be "no".
He also parrots Uber's whining about California's "regulation" of AV testing. The California DMV autonomous vehicle testing rules mirror those for human driver learner's permits. You have to get a license. You have to show proof of insurance. You have to report accidents. You have to have a licensed driver on board. DMV can pull your license if you have accidents. You can't carry passengers for hire. You can't drive a heavy truck. No surprises there.
DMV does require reports of minor accidents and of disconnects, and they publish that data. That's what irks the companies with bad automatic driving systems - public disclosure that they suck.
Absolutely. This sort of hypothetical-spinning mirrors the way concerns about general AI "taking over" mask the problem of dumb AI making disturbingly biased decisions and putting us at risk through the failures of fragile logic.
Autonomous vehicles promise all sorts of paradoxical effects, haven't been demonstrated to be more reliable than humans, etc. A bit of focus on the real near future could benefit these speculators.
It's about whether the benefit of developing AVs as fast as possible outweighs the risk of accidents caused by immature AVs.
If AVs can save 1000 lives a year, then going Uber style and risking 100 deaths to speed up development of AVs by even just 2 months might save more lives than trying to avoid deaths entirely.
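The back-of-envelope arithmetic, using the purely hypothetical numbers above, works out like this:

```python
# Hypothetical figures from the comment above, not real data:
# mature AVs save 1000 lives/year; rushed testing causes 100 deaths
# but ships the technology 2 months sooner.
lives_saved_per_year = 1000
months_earlier = 2
deaths_from_rushed_testing = 100

lives_saved_by_early_deployment = lives_saved_per_year * months_earlier / 12
net_lives = lives_saved_by_early_deployment - deaths_from_rushed_testing

print(round(lives_saved_by_early_deployment))  # 167
print(round(net_lives))  # 67
```

Of course, the whole argument hinges on the assumed inputs: if rushed testing also delays deployment (say, via a months-long testing suspension), the sign of the result flips.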
Edit: genuinely perplexed by the downvoter who apparently believes the Uber "move fast, break things, suspend testing for at least several months after attracting widespread public opposition" approach has been a net positive for their prospects of commercialising self driving cars in the near future?
Heck, throw in the availability heuristic and we as a species react about as strongly to the four deaths caused by AI as to the roughly three million caused by humans in the same period.
Since then, we have seen what the second-tier self-driving cars (Uber, Tesla) are struggling with: they have to ignore static objects along the road (lamp-posts), and they are not great at distinguishing between those and traffic. You can't stop every time there is something near the road that doesn't move, so you have to use the likelihood that said thing is actually static to keep moving forward. That turns into a trade-off between dynamic driving and safety - still not a trolley problem, unless you compare the potential gains of reasonably fast self-driving against the lives of people who look like lampposts.
I don't know how or why Waymo doesn't appear to have the same concerns. It might have to do with the extent or quality of their Lidar, or the progress they made on object recognition, but they indeed do not have an obvious trolley-like problem to solve. If you consider how careful the automated driver has to be, you might see a similar speed-vs-safety trade-off requiring some minimal threshold or ratio. I can't imagine that is nearly as relevant as good maps and a clear understanding of local driving habits and expectations, but there could be some aspect of it, just because that's how any optimisation problem is written.
This is false. "Accident avoidance" also means being constantly aware of your surroundings and slowing the car down before an incident (like a person suddenly getting in front of your car) might happen. This strategy was actually posed as a question on my driver's exam: What should you do if you see a bunch of kids playing on the sidewalk close to the road? Correct answer: You should slow down, because the kids might suddenly hop in front of you. The same goes for bicyclists.
No, it's your last resort if you must slow down. Better drivers learn to coast, down-shift, or tricks such as using the slight uphill grade of a roadway to adjust speed without resorting to braking.
Next time you are driving and have good visibility far ahead in a long line of cars (such as approaching a big hill) watch for the cars with the brake lights coming on constantly. These are the less capable drivers, and once you spot them you will see them making other mistakes. For example, one trick I've learned is to watch for the driver's hand resting on the turn signal stalk in adjacent lanes as I approach. This is a tell that they are intending to cut me off. When I spot it, I can back off to anticipate them, or be more aggressive and accelerate to close the gap they were aiming for.
And to respond to another of Animat's original points: humans aren't trained to brake/swerve in a harder, riskier manner for obstacles that are animate, or to not bother stopping at all for an unidentified object that appears to be a piece of cloth, because unlike AI, this is not something we need to learn. (I agree with his original contention that pure software error and human inability to respond to it is a bigger problem than "trolley problems" with current-generation tech.)
anything that has a slowing or stopping effect.
The comment I replied to by tombrossman used the terms coast and down-shift, both of which are commonly referred to as engine braking.
Slowing down as a result of a premeditated thought (does that kid play with a ball which might become loose? is that kid accompanied by his parents? is that kid really a kid or just a small adult person?) is more than mere braking, that's what I was trying to say.
In Massachusetts, the number of deaths per 100 million miles driven is 0.66. Those are the massholes. By contrast, it was reported that Uber required human intervention once every thirteen miles, while Waymo required human intervention once every 5000+ miles. Those are not directly comparable to fatality statistics, because the interventions also cover things like avoiding disturbing events such as hard braking, but they are still illustrative.
Let those orders of magnitude sink in. The humans are driving in all weather conditions all over the country while self-driving cars are in (generally) sunny relatively controlled areas.
Personally, I have hope that we can eventually beat humans, but these kinds of statistics call for a careful, deliberative introduction of this technology.
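Taken at face value, the gap between the two intervention rates quoted above is easy to quantify:

```python
# Intervention rates as reported in the comment above.
# The Waymo figure is "5000+" miles, so this ratio is a floor.
uber_miles_per_intervention = 13
waymo_miles_per_intervention = 5000

ratio = waymo_miles_per_intervention / uber_miles_per_intervention
print(round(ratio))  # 385
```

In other words, by this (admittedly crude) metric, Waymo's system went at least ~385 times farther between human interventions than Uber's.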
Your stats are not even remotely "illustrative" of anything other than what they say about the presenter.
EDIT: Also, consider what avoiding "hard braking", which was a large reason for human interventions iirc, actually means. It means either that the computer jams on the brakes hard for no reason or that it only realized it needed to brake substantially after a human would have realized the necessity.
EDIT2: Uber had driven 3 million miles at the time of the death, according to the article below. A single incident does not statistics make, but based on the NTSB analysis so far, that version of the software made a systematic error. That puts Uber's score at a very handwavy 33.33 deaths per 100 million miles. I'm sure the number will change with software revisions and just from random chance. https://www.nytimes.com/2018/03/19/technology/uber-driverles...
EDIT3: I should mention that I was initially hesitant to compute this "statistic" because it's so raw and will wildly fluctuate. Interventions are more solid statistically. I don't think it's crazy for someone to look at a heavy box of metal that requires constant supervision to do the right thing and conclude in the absence of (appropriately) extensive testing that it's dangerous.
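The handwavy figure in EDIT2 is just the raw rate rescaled to the standard per-100-million-miles unit used for the human fatality statistic:

```python
# One fatality over roughly 3 million test miles, rescaled to the
# "deaths per 100 million miles" unit used for human drivers
# (0.66 for Massachusetts in the comment above). As EDIT3 notes,
# a single incident makes this number extremely noisy.
fatalities = 1
test_miles = 3_000_000

rate_per_100m_miles = fatalities / test_miles * 100_000_000
print(round(rate_per_100m_miles, 2))  # 33.33
```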
So for instance, if the software is producing buggy output and the car is stopped for debugging, recalibration, etc., that counts as a disengagement. And that is indeed, by far, the most common occurrence. A disengagement is extremely rarely there to actually avoid a crash - let alone your fixation with "hard braking." Some companies, such as Nvidia, specifically clarify that literally 0 of their disengagements (in their case out of 109) had anything to do with external conditions such as road surface, construction, emergencies, etc.
 - https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/disen...
EDIT: I'm looking at Waymo's totals, whom I consider to be a safer player than Uber. The data totals come from your link on their page 2 Table 2: Disengagements by Cause. The time period spans Dec. 2016 to Nov. 2017.
Disengage for a recklessly behaving road user: 1
Disengage for hardware discrepancy: 13
Disengage for unwanted maneuver of the vehicle: 19
Disengage for a perception discrepancy: 16
Disengage for incorrect behavior prediction of other traffic participants: 5
Disengage for a software discrepancy: 9
You'll also notice the paragraph in Waymo's report, which is typical: "Note that, while we have used, where applicable, the causes mentioned in the DMV rule (weather conditions, road surface conditions, construction, emergencies, accidents or collisions), those causes were infrequent in our experience. Far more frequent were the additional causes we have labeled as unwanted maneuver, perception discrepancy, software discrepancy, hardware discrepancy, incorrect behavior prediction, or other road users behaving recklessly."
In other words, they disengaged because of "emergencies" - the category your imminent hard braking would fall under - exactly 0 times.
The sample size is obviously too small to be certain, but that does not look good.
This tagline ignores the fact that every new technology ever invented has killed people. Machines fail. Buildings collapse. Bras and vending machines and popcorn and fire hydrants all kill people. The simple passage of time on large projects kills people.
People are going to die no matter what. If we can reduce the body count then that's a win.
Thank you :)
Is this article a satire?
This is not just an Uber problem - it is pervasive across most manufacturers. The rate at which these AVs are getting better essentially means we won't have "human level" cars for a long time.
As a consequence imo it is misguided to measure AVs against average drivers, because more average drivers on the road is exactly what I would rather not have.
I think the best argument against this is to demand that the person willing to let many die is the first to die.
That's why I am saying that all road testing of autonomous cars should take place in the neighborhoods where the executives and their children live and are on the road. That would align the incentives of the company and the public.
This kind of lumps all the AVs together. There is evidence, though, that Waymos are already safer than human drivers - from 2012:
> Google announced Tuesday that its self-driving cars have completed 300,000 miles of test-drives, under a "wide range of conditions," all without any kind of accident. (The project has seen a few accidents in the past — but only with humans at the wheel.)
> To put that into perspective, the average U.S. driver has one accident roughly every 165,000 miles. https://mashable.com/2012/08/07/google-driverless-cars-safer...
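The quoted comparison can be made concrete: over the same 300,000 miles, an average US driver would statistically be expected to have had nearly two accidents.

```python
# Figures from the quoted 2012 article: 300,000 accident-free test
# miles for Google's cars vs. one accident per ~165,000 miles for
# the average US driver.
google_test_miles = 300_000
avg_miles_per_accident = 165_000

expected_accidents_for_avg_driver = google_test_miles / avg_miles_per_accident
print(round(expected_accidents_for_avg_driver, 1))  # 1.8
```

A sample this small doesn't prove much on its own (an expected count of ~1.8 leaves a decent chance of zero accidents by luck), but it is the comparison the article is gesturing at.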
Since then I think they have logged about 6 million miles and had one semi-at-fault accident, when their car thought a bus would give way but it didn't and hit them. I don't think Waymo would be looking to order 62,000 vans if they were not confident they would be safer than human drivers.
They have also had far more accidents with humans at fault crashing into stationary Waymos than the other way around.
On the other hand, Uber's crash seems like a reckless effort to try to catch up by cutting corners, and Tesla's "it's your fault if you didn't look for a few seconds" attitude seems iffy.
Plenty of lipservice is paid to the potential safety benefits, along with a laundry list of other social benefits, but really it's about money.
By logical extension, autonomous vehicles have to be safer than human drivers, or they just won't fly. Too many accidents invite regulatory scrutiny, time-consuming safety audits, and, worst of all, public backlash.
If you look at the players across the industry they can be broken into two distinct categories: the offensive players, and the defensive players.
The offensive players - namely Waymo, but also Cruise, Aurora Innovation, Drive.ai, Argo.ai, Zoox, and Nutonomy - have impeccable safety records.
But there are two prominent defensive players who both would be better off if AVs were never a thing: Tesla and Uber. For them, screwing it up for everyone is a net benefit to their business models.
Now, if we were actually concerned about safety, we wouldn't be driving, we'd be riding bicycles (or walking or taking the train/bus), and our cities would more closely resemble what they were in the 1930s or earlier. But there's no money in bikes, and everybody hates them.
We have historical experience with an analogous situation. Medical experiments and drug development. People became so focused on "the greater good for humanity" that they did horrible stuff to study participants (see Tuskegee Syphilis Study). Or drugs were not adequately assessed for safety in pregnant patients and severe birth defects occurred (see thalidomide). This is one of the reasons we have Institutional Review Boards to review medical studies and the FDA to review drugs, because people involved in a project like this are so good at fooling themselves.
When developing drugs, pharmaceutical companies aren't allowed to say "This drug will cure X and benefit so many people. We would like to ignore the side effects happening now because the cure will be so beneficial in the future."
We need the equivalent of an Institutional Review Board/FDA for self driving cars.
As part of the submission, the engineers have to document what sensors they are using, what the limitations of those sensors are, what algorithms/machine learning techniques they are using, and the limitations of those. They also need to disclose the engineering tradeoffs and design decisions (for example, not having LIDAR).
Then there need to be tests for self-driving cars for safety on the unhappy path.
What happens if there is a concrete barrier in front of you?
What happens if the lane markers are bad?
What happens if the car in front of you brakes?
What happens if you are driving with the sun facing you?
What happens if the truck in front of you drops something in the road?
What happens if a child darts in front of you?
And so on...
Then based on the testing, certify the car for self-driving only in certain conditions that were shown to be safe based on the independent testing.
After a self driving car is on the road, all data collected by the self driving cars should be shared with the review board so that there can be ongoing monitoring. In addition, users of those cars should be able to report incidents and near misses to the review board.
Progress will be a killer!