
> It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable.

Even if we optimistically assume no "gotchas" in the statistics [0], distilling performance down to a casualty/injury/accident rate can still be dangerously reductive when the systems have a different distribution of failure modes which do/don't mesh with our other systems and defenses.

A quick thought experiment to prove the point: Imagine a system which, compared to human drivers, had only half the rate of accidents... but many of the accidents it does have happen because it unpredictably jumps the sidewalk curb and kills a targeted pedestrian.

The raw numbers are encouraging, but they represent a risk profile that clashes horribly with our other systems of road design and car design, and with the incidents humans expect and are capable of preventing or recovering from.

[0] Ex: Automation is only being used on certain subsets of all travel, ones that are "easier" miles or circumstances than the whole gamut a human would handle.
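
To make [0] concrete, here's a toy calculation (all numbers invented) of how restricting automation to the "easier" miles can yield a lower overall per-mile rate even if the system is worse than humans on every road type it actually drives:

    # Toy illustration with made-up numbers: a fleet that only drives "easy"
    # highway miles can report a lower per-mile crash rate than humans even
    # though it is worse than humans on every road type it actually drives.

    human_rates = {"highway": 1.0, "city": 4.0}  # crashes per million miles (invented)
    human_mix   = {"highway": 0.6, "city": 0.4}  # share of human miles by road type

    av_rates = {"highway": 1.5, "city": 6.0}     # AV is worse on both road types...
    av_mix   = {"highway": 1.0, "city": 0.0}     # ...but only drives highway miles

    human_overall = sum(human_rates[k] * human_mix[k] for k in human_rates)
    av_overall    = sum(av_rates[k] * av_mix[k] for k in av_rates)

    print(f"human: {human_overall:.2f} crashes per million miles")  # 2.20
    print(f"AV:    {av_overall:.2f} crashes per million miles")     # 1.50, looks "safer"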



Re: gotchas: an even easier one is that the Tesla FSD statistics don't include cases where the car does something unsafe and the driver intervenes and takes control, averting a crash.

How often does that happen? We have no idea. Tesla can certainly tell when a driver intervenes, but they can't count every occurrence as safety-related, because a driver might take control for all sorts of reasons.

This is why we can make stronger statements about the safety of Waymo. Their software was only tested by people trained and paid to test it, who also recorded every time they had to intervene for safety reasons, even if there was no crash. That's a metric they could track and improve.
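
As a rough sensitivity check on that point (all figures invented), even a small fraction of unrecorded interventions counting as averted crashes would swamp a reported crash rate:

    # Toy sensitivity check with invented numbers: how much would the crash
    # rate change if some fraction of driver interventions were actually
    # averted crashes that never appear in the statistics?

    miles = 10_000_000        # miles driven under automation (hypothetical)
    reported_crashes = 10     # crashes that made it into the statistics
    interventions = 5_000     # driver takeovers, reasons unknown

    per_million = 1_000_000 / miles
    print(f"reported: {reported_crashes * per_million:.1f} crashes per million miles")

    for frac in (0.001, 0.01, 0.05):  # share of interventions that averted a crash
        adjusted = (reported_crashes + frac * interventions) * per_million
        print(f"if {frac:.1%} averted a crash: {adjusted:.1f} per million miles")

That prints 1.0 reported versus 1.5, 6.0, and 26.0 adjusted: the unknown intervention count matters more than the headline number.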



