
That’s not an incorrect metric, but it’s not a very useful one. Or put differently, you’ll need to be a lot more “perfect” than you think in order to get to an ultimate injury/death rate lower than human drivers. You need to be able to handle all the corner cases humans handle routinely, otherwise you’re going to get catastrophic effects.

For example, right now Teslas cannot detect stationary obstacles. They slam right into them at highway speeds: https://www.caranddriver.com/features/a24511826/safety-featu.... This is not a matter of just tweaking the algorithms to get better and better error rates—it’s a fundamental problem with the system Tesla uses to detect obstacles.

In order to actually get close to the accident rate of human drivers on average, you have to be “perfect” in the sense you have to be able to handle every edge case a human driver is likely to ever run into in their entire lives.




> In order to actually get close to the accident rate of human drivers on average, you have to be “perfect” in the sense you have to be able to handle every edge case a human driver is likely to ever run into in their entire lives.

That's like saying for airbags to be overall better, they have to be better at every single edge case like kids in the front seat or unrestrained passengers. We know for a fact they are not better at those edge cases, yet overall airbags are safer.

Why? Because 99.99999% of driving is not edge cases (that's why they're called edge cases), and as long as you're better for that very vast majority of cases, then you're better overall.


Fun fact: if you can only handle 99.99999% of cases (say on a per mile basis), your system will still fail about 320,000 times per year in the US alone.
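To sanity-check the scale, here's a rough back-of-the-envelope sketch in Python, using the roughly 3.2 trillion vehicle-miles Americans drive per year:

    # "Seven nines" per-mile reliability, scaled to the US fleet.
    us_miles_per_year = 3.2e12         # approximate annual US vehicle-miles traveled
    per_mile_failure = 1 - 0.9999999   # handle 99.99999% of miles -> ~1e-7 failures/mile
    print(us_miles_per_year * per_mile_failure)  # roughly 320,000 failures per year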

You need to handle the “long tail” of exceptional cases. While exceptional cases make up a tiny fraction of driving (on a per mile basis), they arise quite often for any given driver, and more importantly, for drivers in the aggregate. A vehicle stopped in the middle of the road is an edge case. It’s also something that just happened to me today, and in the DC metro area happens hundreds of times per day. Encountering a traffic cop in the middle of the street directing traffic happens to thousands of cars per day in DC. Traffic detours happen thousands of times per day. Road construction, unplanned rerouting, screwed up lane markings, misleading signs, etc.

The basic problem you’re having is that you’re assuming that failure modes for self driving vehicles are basically random. That’s not the case. A particular human might plow into a stopped vehicle because she is not paying attention, but the vast majority of people will not. But a particular Tesla will plow into a stopped vehicle because it doesn’t know how to handle that edge case, and so will every other Tesla on the road. A human driver might blow through a construction worker signaling traffic by hand because they’re texting, but the vast majority will not. But all the Teslas on that stretch of highway will blow through that guy signaling traffic, because they don’t know how to handle that edge case. A human driver might hit a panhandler who jumps into the street because she isn’t paying attention to body language. Every Tesla will do it, every time. Humans can handle all the edge cases most of the time. That means self driving cars must be able to handle all the edge cases, because any edge case they can’t handle, they fail every single time they encounter it.
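To put made-up numbers on that point (the encounter count and lapse rate below are illustrative assumptions, not data): independent human lapses scatter, but a fleet-wide bug fails on every single encounter.

    # Toy model: independent human lapses vs. a correlated fleet bug.
    # All numbers are illustrative assumptions.
    encounters_per_day = 100_000  # assumed daily encounters with one edge case
    human_lapse_rate = 1e-4       # assumed chance a given driver mishandles it
    fleet_bug_rate = 1.0          # a fleet that can't handle the case fails every time

    print(encounters_per_day * human_lapse_rate)  # ~10 failures/day, scattered randomly
    print(encounters_per_day * fleet_bug_rate)    # 100,000 failures/day, every car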


> Fun fact: if you can only handle 99.99999% of cases (say on a per mile basis), your system will still fail about 320,000 times per year in the US alone.

We know how many fatalities there are in the US per year, but I wonder how many crashes there are? How many near misses, or how many times does someone "luck" out and miss death by inches while being completely oblivious to it?

> A vehicle stopped in the middle of the road is an edge case

No it's not. It happens all the time (vehicles waiting to turn across oncoming traffic), and I'm sure training models are already dealing with it.

> But all the Teslas on that stretch of highway will blow through that guy signaling traffic, because they don’t know how to handle that edge case

You really think self-driving cars won't be able to read the "stop" sign a construction worker holds out? I bet they can now.

> A human driver might hit a panhandler who jumps into the street because she isn’t paying attention to body language. Every Tesla will do it, every time.

Again, you really think self-driving cars won't automatically emergency stop when they detect something jump out into their lane? Again, I'd be willing to bet they'll have a much faster reaction time than your average driver.

> Humans can handle all the edge cases most of the time

The number of road deaths per day around the world makes me strongly disagree with that.

It sounds like you have a particular bent against "Tesla", and you're not seeing this for what it is.

They don't have to be perfect, but they do have to continually get better. And they are.


> No it's not. It happens all the time (vehicles waiting to turn across oncoming traffic), and I'm sure training models are already dealing with it.

Yet Tesla released a vehicle with an “auto pilot” that can’t handle that case. Makes me skeptical they’ll ever be able to handle the real edge cases.

> You really think self-driving cars won't be able to read the "stop" sign a construction worker holds out? I bet they can now.

Teslas can’t. And will they be able to read the hand signals of the Verizon worker who didn’t have a stop sign while directing traffic on my commute last week?

> Again, you really think self-driving cars won't automatically emergency stop when they detect something jump out into their lane? Again, I'd be willing to bet they'll have a much faster reaction time than your average driver.

For a human driver, it doesn’t come down to reaction time. The human driver will know to be careful from the panhandler’s body language long before they jump into traffic.

Also, being able to emergency stop isn’t the issue. Designing a system that can emergency stop without generating false positives is the issue. That’s why that Uber killed the lady in Arizona. Uber had to disable the emergency braking because it generated too many false positives.
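A rough base-rate sketch of why the false-positive side dominates (the frame rate and specificity below are assumptions for illustration, not figures from the Uber case):

    # Why even a very specific rare-event detector floods you with false alarms.
    # All rates are illustrative assumptions.
    frames_per_hour = 36_000   # ~10 detection frames per second of driving
    false_pos_rate = 1e-4      # assumed per-frame false alarm rate (99.99% specificity)
    print(frames_per_hour * false_pos_rate)  # ~3.6 phantom braking triggers per hour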

> Humans can handle all the edge cases most of the time

> The number of road deaths per day around the world makes me strongly disagree with that.

Humans drive 3.2 trillion miles every year in the US, in every sort of condition. Statistically, people encounter a lifetime’s worth of edge cases without ever getting into a collision (there is one collision for about every 520,000 miles driven in the US). In order to reach human levels of safety, self driving cars must be able to handle every edge case a human is likely to encounter over an entire lifetime.
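For scale, those two figures together imply roughly six million US collisions a year (a quick check using only the numbers quoted above):

    # Implied annual US collisions from the figures above.
    miles_per_year = 3.2e12        # annual US vehicle-miles traveled
    miles_per_collision = 520_000  # one collision per ~520,000 miles
    print(miles_per_year / miles_per_collision)  # ~6.15 million collisions per year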

> It sounds like you have a particular bent against "Tesla", and you're not seeing this for what it is.

> They don't have to be perfect, but they do have to continually get better. And they are.

I have a bent against techno-optimism. Engineering is really hard, and most technology doesn’t pan out. Technology gets “continually better” until you hit a wall, and then it stops, and where it stops may not be advanced enough to do what you need. That happened with aerospace, for example. I grew up during a period of techno-optimism about aerospace, but by the time I actually got my degree in aerospace engineering, I realized that we had hit a plateau. In the 60 years between the 1900s and the 1960s, we went from the Wright Flyer to putting a man in space. But we have hit a plateau since then. When the Boeing engineers were designing the 747 in the 1960s, I don’t think they realized that they were basically at the end of aviation history: that 50+ years later (nearly the same gap as between the Wright Flyer and themselves), the Tokyo to LA flight would take basically the same time as it did in their 747.

The history of technology is the history of plateaus. We never got pervasive nuclear power. We never got supersonic airliners. Voice control of computers is still an absurd joke three generations after it was shown on Star Trek.

It’s 2019. The folks who pioneered automatic memory management and high-level languages in their youth are now octogenarians, or dead. But the “sufficiently smart compiler” or garbage collector never happened. We still write systems software in what is fundamentally a 1960s-era language. The big new trend is languages (like Rust) that require even more human input about object lifetimes.

CPUs have hit a wall. You used to be able to saturate the fastest Ethernet link of the day with one core and an ordinary TCP stack. No longer. Because CPUs have hit a wall, we’re again trading human brain cells for gains: multi-core CPUs that require even more clever programming, vector instruction sets that often require hand rolled assembly, systems like DPDK that bypass the abstraction of sockets and force programmers to think at the packet level. This is all a symptom of the fact that we’ve hit a plateau in key areas of computing.

There is no reason to assume self driving tech will keep getting better until it gets good enough. It may, or it may not. This is real engineering, where the last 10% is 90% of the work, and where that last 10% often proves intractable.


How many of those airbag fatalities are due to drivers not buckling up?

It's called Supplemental Restraint System for a reason.



