
I don't understand this anti-autonomy cheerleading. It's like people on HN live in a parallel universe where there have been a bunch of deaths from cars running Autopilot, whereas in the world I live in, it seems to be somewhat safer than a human alone. Like, people can mess up either way, but they seem to be less likely to do so when the car is also looking out for them. What am I missing?


You have to compare the death rate with Autopilot engaged to the death rate of people driving Teslas without Autopilot. Musk tried to compare it against the universe of all drivers (Teslas, kids driving crappy cars, etc.), which is a completely false comparison.

So the reason it was a big deal is that one fatality is a big number here: Tesla drivers are generally a pretty safe bunch. Statistically, if Autopilot hadn't been engaged, that death would not have occurred. Autopilot makes Tesla drivers less safe, not more safe.
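To make that concrete, here is a toy rate comparison in Python. Every number below is invented purely for illustration; the point is only which baseline you divide by.

    # Toy illustration of the base-rate argument above. All figures are
    # made up; only the structure of the comparison matters.

    def deaths_per_100m_miles(deaths, miles_driven):
        """Fatality rate normalized to 100 million vehicle miles."""
        return deaths / (miles_driven / 1e8)

    # Hypothetical inputs, NOT real data.
    rate_autopilot    = deaths_per_100m_miles(1, 1.3e8)       # Teslas, Autopilot on
    rate_tesla_manual = deaths_per_100m_miles(1, 4.0e8)       # Teslas, Autopilot off
    rate_all_drivers  = deaths_per_100m_miles(35000, 3.2e12)  # whole fleet, all cars

    # The fair comparison is rate_autopilot vs. rate_tesla_manual: same cars,
    # similar drivers. Comparing against the whole fleet mixes in old cars and
    # risky drivers, which flatters Autopilot.
    print(rate_autopilot, rate_tesla_manual, rate_all_drivers)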

Also, the government is doing the self-driving industry a huge favor. These fatalities could screw over the whole industry if they get out of hand. Musk is giving self-driving a bad name.


Two deaths using Tesla's autopilot. [1]

[1] https://www.youtube.com/watch?v=fc0yYJ8-Dyo


Why was this not widely reported?


The dash cam video was only released last week, in conjunction with a lawsuit. Now it's on all the mainstream news outlets, from the Wall Street Journal to the New York Times to Fox News.

This is yet another "Tesla hit slow/stopped vehicle on left of expressway" accident. There are now three of those known, two with video, one fatal. Watch the video. The vehicle is tracking the lane very accurately. Either the driver is very attentive or the lane following system has control. Then, with no slowdown whatsoever, the vehicle plows into a stopped or slow-moving street sweeper.

Here's one of the other crashes in that situation.[1] This was slower, so it wasn't lethal. There's another one where a Tesla on autopilot sideswiped a vehicle stopped at the left side of an expressway.

[1] https://www.youtube.com/watch?v=qQkx-4pFjus


Jeez, that's pretty bad. This seems like the most basic case that autopilot is supposed to solve, and yet it still crashes?


IMHO, an autonomous car should use at least two autopilot systems from independent vendors, to avoid a single point of failure.


https://en.wikipedia.org/wiki/Segal%27s_law

"A man with a watch knows what time it is. A man with two watches is never sure."


Two is not enough for voting. If you have two clocks, you don't know what time it is.


It's enough to catch errors, alert the human, and brake.
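A minimal sketch of that idea (all names and thresholds below are invented for illustration): with two channels you can't vote on who is right, but you can treat disagreement itself as a fault and fail safe.

    from dataclasses import dataclass

    DISAGREE_M = 0.5  # tolerated difference in planned lateral offset, meters

    @dataclass
    class Plan:
        lateral_offset_m: float  # where this channel wants the car in the lane
        target_speed_mps: float

    def alert_driver():
        print("TAKE OVER NOW")  # stand-in for a chime plus dashboard warning

    def begin_safe_slowdown():
        print("gentle deceleration, hazards on")  # stand-in for an actuator command

    def crosscheck(a: Plan, b: Plan) -> bool:
        """Flag disagreement between two independent autopilot channels."""
        if abs(a.lateral_offset_m - b.lateral_offset_m) > DISAGREE_M:
            alert_driver()
            begin_safe_slowdown()
            return False
        return True

    # Channel B sees the stopped street sweeper, channel A doesn't:
    crosscheck(Plan(0.0, 30.0), Plan(1.8, 5.0))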


Sounds like we need three, à la Minority Report.


Dangerous. What if they end up with different interpretations? Which one is actually correct?


Brake.


If you are on a highway at full speed, sudden braking can lead to an accident (folks behind you not reacting in time).


So don't brake suddenly unless it's an actual emergency. In other cases, just slow down gradually to a full stop and alert the human driver in the process, giving them more time to react.

IMHO, automatic emergency braking should be mandatory for every new car with a top speed greater than 60 km/h.
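A rough sketch of that graduated policy (the thresholds are invented; real values depend on speed and the vehicle): hard braking only when the time-to-collision is critically short, a gentle slowdown plus a driver alert otherwise.

    EMERGENCY_TTC_S = 1.5  # below this time-to-collision, brake hard
    WARNING_TTC_S = 6.0    # below this, slow down gently and warn the driver

    def braking_policy(time_to_collision_s: float) -> str:
        if time_to_collision_s < EMERGENCY_TTC_S:
            return "full emergency braking"  # rear-end risk accepted to avoid impact
        if time_to_collision_s < WARNING_TTC_S:
            return "gentle deceleration + driver alert"  # cars behind get time to react
        return "no action"

    for ttc in (8.0, 4.0, 1.0):
        print(ttc, "->", braking_policy(ttc))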


Last I heard, nobody has been able to verify whether the car was actually in Autopilot mode. However, the automatic emergency braking also clearly failed, if the police report stating that no attempt to stop was made is accurate.


With regard to safety, it's equally important to report doubtful cases, as they may be a sign of something occurring.


It's a classic single point of failure: a fault in just one subsystem leads to a crash.


Surely the car keeps a log?


In the most famous Tesla case, the driver was a huge advocate and YouTuber who intentionally pushed Autopilot to its limits rather than acting as a good copilot.


It's not anti-autonomy.

It's the offence that an engineer feels about something being marketed as something it's not.

Tesla is fooling the public. The opinion among the general public, who don't drive Teslas, is that automated driving is already here and Tesla is leading the way.


In your reply to a well-argued post, you offer no similarly well-argued refutation of its points, just an emotional appeal based on generalities (I say 'emotional' because you take an explicit 'it is us against them, you are either for us or against us' position.) What you are missing is that opposition to simplistic arguments and false dichotomies where safety is an issue is not opposition to autonomy. What you are missing is that one can look forward to autonomy while advocating reasonable caution.


Autonomy will be great. We aren't there yet. Tesla is/was deceptively marketing their capabilities so they can risk their customers' safety with the opposite of informed consent (misled consent? [0]). They are doing it in order to collect data that will get them to real autonomy first. That's fucked up and greedy. The other comment demonstrated that it is in fact risking their customers' lives. Autopilot is safer than some random human alone, but not safer than a comparable human alone.

[0] You can say they tell you to keep your hands on the wheel and all that, but they themselves manufactured/fanned a ton of hype to the contrary. It's like arguing that you should have paid more attention to the EULA.


I don't see any anti-autonomy in that post?

He's definitely anti "disguising Level 2 as autonomy", though.



