Hacker News

> Fun fact: if you can only handle 99.99999% of cases (say on a per mile basis), your system will blow up 32 million times per year in the US alone.

We know how many fatalities there are in the US per year, but I wonder how many crashes there are? How many near misses, or how many times does someone "luck" out and miss death by inches while being completely oblivious to it?

> A vehicle stopped in the middle of the road is an edge case

No it's not. It happens all the time (vehicles waiting to turn across oncoming traffic), and I'm sure training models are already dealing with it.

> But all the Teslas on that stretch of highway will blow through that guy signaling traffic, because they don’t know how to handle that edge case

You really think self-driving cars won't be able to read the "stop" sign a construction worker holds out? I bet they can now.

> A human driver might hit a panhandler who jumps into the street because she isn’t paying attention to body language. Every Tesla will do it, every time.

Again, you really think self-driving cars won't automatically emergency stop when they detect something jump out into their lane? Again, I'd be willing to bet they'll have a much faster reaction time than your average driver.

> Humans can handle all the edge cases most of the time

The number of road deaths per day around the world makes me strongly disagree with that.

It sounds like you have a particular bent against "Tesla", and you're not seeing this for what it is.

They don't have to be perfect, but they do have to continually get better. And they are.




> No it's not. It happens all the time (vehicles waiting to turn across oncoming traffic), and I'm sure training models are already dealing with it.

Yet Tesla released a vehicle with an “auto pilot” that can’t handle that case. Makes me skeptical they’ll ever be able to handle the real edge cases.

> You really think self-driving cars won't be able to read the "stop" sign a construction worker holds out? I bet they can now.

Teslas can’t. And will they be able to read the hand signals of the Verizon worker who didn’t have a stop sign while directing traffic on my commute last week?

> Again, you really think self-driving cars won't automatically emergency stop when they detect something jump out into their lane? Again, I'd be willing to bet they'll have a much faster reaction time than your average driver.

For a human driver, it doesn’t come down to reaction time. The human driver will know to be careful from the panhandler’s body language long before they jump into traffic.

Also, being able to emergency stop isn’t the issue. Designing a system that can emergency stop while not generating false positives is the issue. That’s why that Uber killed the lady in Arizona: Uber had to disable the emergency braking because it generated too many false positives.

> Humans can handle all the edge cases most of the time
>
> The number of road deaths per day around the world makes me strongly disagree with that.

Humans drive 3.2 trillion miles every year in the US, in every sort of condition. Statistically, people encounter a lifetime’s worth of edge cases without ever getting into a collision (there is one collision for about every 520,000 miles driven in the US). In order to reach human levels of safety, self-driving cars must be able to handle every edge case a human is likely to encounter over an entire lifetime.
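A quick back-of-the-envelope check of those figures (the per-year and per-collision numbers are from the paragraph above; the lifetime-mileage assumption is my own illustration, not from the comment):

```python
# Back-of-the-envelope check of the collision-rate claim above.
MILES_PER_YEAR_US = 3.2e12      # total US vehicle-miles per year (from the comment)
MILES_PER_COLLISION = 520_000   # roughly one collision per this many miles (from the comment)
LIFETIME_MILES = 13_500 * 50    # assumed: ~13,500 miles/year over 50 driving years

collisions_per_year = MILES_PER_YEAR_US / MILES_PER_COLLISION
expected_lifetime_collisions = LIFETIME_MILES / MILES_PER_COLLISION

print(f"{collisions_per_year:,.0f} collisions per year")                    # ~6.2 million
print(f"{expected_lifetime_collisions:.1f} expected collisions per driver lifetime")
```

Under those assumptions a typical driver expects on the order of one collision over a whole driving lifetime, so many drivers see every edge case of a lifetime and still never crash.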

> It sounds like you have a particular bent against "Tesla", and you're not seeing this for what it is.
>
> They don't have to be perfect, but they do have to continually get better. And they are.

I have a bent against techno optimism. Engineering is really hard, and most technology doesn’t pan out. Technology gets “continually better” until you hit a wall, and then it stops, and where it stops may not be advanced enough to do what you need. That happened with aerospace, for example. I grew up during a period of techno optimism about aerospace, but by the time I actually got my degree in aerospace engineering, I realized that we had hit a plateau. In the 60 years between the 1900s and the 1960s, we went from the Wright Flyer to putting a man in space. But we have hit a plateau since then. When the Boeing engineers were designing the 747 in the 1960s, I don’t think they realized that they were basically at the end of aviation history: that 50+ years later (nearly the same gap as between the Wright Flyer and themselves), the Tokyo-to-LA flight would take basically the same time as it did in their 747.

The history of technology is the history of plateaus. We never got pervasive nuclear power. We never got supersonic airliners. Voice control of computers is still an absurd joke three generations after it was shown on Star Trek.

It’s 2019. The folks who pioneered automatic memory management and high-level languages in their youth are now octogenarians, or dead. But the “sufficiently smart compiler” or garbage collector never happened. We still write systems software in what is fundamentally a 1960s-era language. The big new trend is languages (like Rust) that require even more human input about object lifetimes.

CPUs have hit a wall. You used to be able to saturate the fastest Ethernet link of the day with one core and an ordinary TCP stack. No longer. Because CPUs have hit a wall, we’re again trading human brain cells for gains: multi-core CPUs that require even more clever programming, vector instruction sets that often require hand rolled assembly, systems like DPDK that bypass the abstraction of sockets and force programmers to think at the packet level. This is all a symptom of the fact that we’ve hit a plateau in key areas of computing.

There is no reason to assume self driving tech will keep getting better until it gets good enough. It may, or it may not. This is real engineering, where the last 10% is 90% of the work, and where that last 10% often proves intractable.



