
Haven’t we proved autonomous cars to be safer than human drivers? Trucking is different of course, but just wanted to point that out


I don't really have anything to back this up, so take it with a massive grain of salt, but I seem to keep seeing discussion of Teslas hitting bikes (both cyclists and motorcycles). Most of it centers on there being enough doubt about whether Autopilot was engaged, whether the driver was actually monitoring as they should have been, whether the cyclist "should" have been in the road, etc., for the incident to "count" as evidence of being less safe. I can only imagine there are impassioned interpretations of the data going in multiple directions.

A Verge article [0] has some discussion on this with regard to a motorcyclist that follows this pattern, and FortNine also covers this in a characteristically well-made video [1].

There's also the problem of autonomous (electric) cars being too quiet when they first begin moving, but that's easy to solve.

So I'm not entirely sure about their safety being "proven", but that's largely because the problem domain is enormous and the data somewhat open to interpretation. Overall "proved" seems too certain a word as of now, without narrowly defining the scope.

[0] https://www.theverge.com/2022/7/27/23280461/tesla-autopilot-...

[1] https://www.youtube.com/watch?v=yRdzIs4FJJg


Waymo/Cruise are the more relevant entities for this conversation (although neither is actively pursuing trucking right now). Tesla hasn't even legally registered FSD as a self-driving product in California. Once they do that and reduce their disengagement rates by roughly three orders of magnitude, they will be more relevant to the conversation about how safe driverless vehicles are.

Not to discount the obvious potential safety issues, just noting that Tesla is taking a very different approach that is currently pretty far from being able to complete trips consistently, let alone be measured up against human drivers.


No, is the short answer. Of course self-driving continues to be trialled in SF and elsewhere, but my understanding is that the safety data (where available) does not cover a wide range of road conditions, for example, and has other statistical limitations when one attempts to compare it with human drivers.

Certainly there are good reasons why SFFD is not keen on Waymo and Cruise.

Self-driving (and AI in general) has a tendency to go catastrophically wrong for reasons that are impossible to determine (imagine if each hallucination were a potential traffic accident), in a way that is hard to predict and essentially open-ended in terms of consequence. Humans do very stupid things far less often than fairly stupid things; AI offers no such correlation.


> Self-driving [AI] has a tendency to go catastrophically wrong for impossible to determine reasons

I don't think that's true. I live in SF and am not aware of a single catastrophe owing to self-driving cars, let alone enough such catastrophes to form a statistical sample from which we could reliably infer tendencies.


No, we did not. That one seems to be an unsupported claim by the companies pushing that tech.



