
I don't understand how this is possible. I recently rode in a Tesla with FSD engaged. In a span of 10 minutes it tried to run a red light, failed to change lanes, and failed to take a freeway offramp. These were all vanilla scenarios with perfect San Diego weather and well-lit, well-marked roads and signs.

Occam's razor, in a situation where a hype man has promised step-function improvements for well over a decade, is that this is not an honest assessment, or at least not an assessment based on honest data.




I don't own a Tesla, but my understanding is that Autopilot is basically marketing speak for what everyone else calls lane assist and adaptive cruise control, while FSD is when the car is supposed to be able to drive for you. In other words, this report is saying that human drivers plus automated safety features are safer than human drivers alone, and only in conditions where the Tesla will let someone enable Autopilot in the first place.


There are a few hundred thousand users at this point. If this stuff were so dangerous, there should statistically be some evidence of that in the form of lots of accidents involving these cars.

Instead, we have Tesla suggesting that these cars are actually safer. Of course, Tesla is very biased and you'd be well advised to take that with a grain of salt. There have been some incidents, but overall not a whole lot of bad stuff seems to be happening, despite people insisting that this is super dangerous, isn't working, and cannot possibly be working for reasons imagined or real.

Meanwhile, other manufacturers now have proper self-driving cars on the roads without a driver. Those too have had some incidents, but not a whole lot of fatal ones. There are lots of naysayers, and yet the doom they foretell just doesn't seem to materialize.



