Third fatal Tesla Autopilot crash renews questions about system (reuters.com)
182 points by SolaceQuantum on May 17, 2019 | 329 comments



I suggest people watch the actual Tesla Autopilot in action https://www.youtube.com/watch?v=sk0eZRVw9x4 before making definitive judgements. I watched this video and counted at least 3 situations where the car would've had a minor crash, and one situation where the crash could have been head-on, as the car couldn't detect wavy lanes properly without road markings.

The technology Tesla is using in Autopilot is pretty cool, but the feature definitely has the wrong name. It should be called something like Driver Assist Plus or similar.

Oh, and there is no way in hell they'll have functioning robotaxis outside predetermined geographic areas in one year's time. I'll print this post and eat it if they do!


The entire video is Autopilot in city driving. Autopilot, at this time, is only for "driving on dry, straight roads, such as highways and freeways. It should not be used on city streets."


So the wipers are on due to rain, but the "autopilot" is not smart enough to know that it's raining? Why can people activate it if it knows that these are not the right conditions for it to work?


This crash occurred in circumstances that met all these conditions: the road was a divided highway, straight and flat. The surrounding landscape is almost featureless, and there are no overhead light gantries or signs. The crash occurred two minutes before the start of civil twilight, but the weather was dry and clear - in other words, almost the perfect conditions for spotting a truck right in front. So there are problems with its performance even in its natural environment.


Completely agree, but if users followed Tesla's instructions about AP, we would not be here.

There is precedent that "it's up to the user" is not sufficient as a societal standard for what is legally allowed (for example, seat belt laws). It is reasonable to ask: "Should we allow such a feature in cars on our roads?" For example, what if the car had struck a cyclist rather than a truck and the fatality went the other way?


What precedent? If "it's up to the user" is not sufficient, then all vehicles should be speed-capped at the legal speed limit and prohibited from moving if seat belts are not on. Clearly that's not the case. Unfortunately, freedom includes the freedom to do dumb shit. Some people do dumb shit and are responsible for the consequences.


> Unfortunately, freedom includes the freedom to do dumb shit.

You are not legally free to do all dumb shit. For example, you are not free in many states to ride around without a seatbelt on. Car makers are not free to make cars without seatbelts.

That's the precedent.


That's exactly my point. Seat belts are required by law. Obeying the speed limit is required by law. But users are not forced to obey these laws by restrictions placed on the vehicles by manufacturers. If the user disobeys the law, they are responsible. If a Tesla driver disobeys Tesla's Autopilot warning, they are responsible. Are you proposing that we need a law that makes Autopilot illegal to use on city streets? Or highways as well?


Actually he spends a minute at around t=4:00 talking about how the car had just saved him from a head on collision. That’s pretty sweet.

At 9:58 there’s a behavior I saw a few times two or three versions back, where the Tesla didn’t want to make room for a car cutting you off. Since about 2019.8 it will back off to make space as soon as it sees a blinker.

The biggest challenge this generation of AP has is crossing intersections that are on a curve and have no lane markings. It’s not supposed to be able to do that, and not surprisingly it doesn’t do it so well.

But as a counterpoint, MA was nice enough to grind off all the lane markings on the Mass Pike westbound out of Boston a couple of weeks ago and, I have no idea how, but AP keeps in lane nicely with just the grinding marks remaining on the road.


"It should be called something like Driver Assist Plus etc."

It seems to me that a feature which randomly veers into objects is not in fact a form of assistance, but a hindrance which makes driving more difficult than baseline.

It's been said before, I'm sure, but telling people to keep their hands on the wheel doesn't solve the problem of determining when to take over or resist the algorithm, since you don't know what it will do next and the time between normal driving and disaster is generally very small. A crash could be caused by the combination of actions, just like when two people both try to avoid each other in a hall, but collide anyway.


Oh, that was fascinating. I've never seen Tesla Autopilot in action before. It's ... really, really underwhelming. I wouldn't ever take my hands off the wheel.


Underwhelming if you had inflated expectations based on a misconstrued idea of what it is.

It’s pretty good on freeways. It was never intended for small streets.

And yes you’re supposed to keep your hands on the wheel.


I had nothing except Elon's PR to go on. Well, I didn't really have any expectations. I don't spend my days thinking "Oh, Tesla Autopilot's such an amazing self-driving function". I've seen plenty of discussions about how limited it is and I'm quite aware that it's far from any sort of self-driving solution. It's not like discussions about it are rare on HN. But still, seeing it is very different from reading discussions about it.


And experiencing it is very different from seeing a video of some cherry-picked road segments.


Define "functioning".


"Doesn't kill people in case where it shouldn't that they then end up calling << autopilot is not an autopilot ! everybody knows that, except when we say otherwise when selling it or to our investors, also the name is just a coincidence ! >>" ?

I think the point the parent was making was that Tesla has been playing fast and loose with what they define as full autopilot versus mere driving assistance, especially by changing their wording depending on who they're facing, but they've mostly been able to get away with it because so far it's still enhanced driving assistance, which is a gray area despite all the existing rules.

But a robotaxi is much clearer: at no point can you make it a requirement, or a way out, that whoever was taking a ride should have been driving the car 100% of the time at 100% attention. And that's basically how they got out of every single one of their major crashes.


Speed bumps: interesting, I never thought of that before.

Potentially quite difficult to detect them if the car doesn't have them marked in its GPS map, no?


I suggest you drive with autopilot first hand before you think you understand anything about it. It DOES require human supervision and intervention; that is an expected part of the deal... and if you judge it in a different light, that is on you.

As far as taxis, Elon said they would have supervision drivers at first. This fact continues to escape the notice of people misquoting him and claiming he “promised” things that he didn’t.


Downvotes, is that all you’ve got? Can’t explain yourself because you’ve got no good response?


"immediately removed his hands from the wheel."

Tesla keeps saying this, but as far as I know, they can't actually detect that (because it's a torque sensor, it can only tell whether they are turning the wheel).

Does anyone have different info?

You can even find plenty of reports of a light touch on the wheel still causing it to issue warnings that hands are not on the wheel.

They never have let one of the lawsuits get past discovery (AFAIK) before settling, and this would almost certainly come out there if true.


I rented a Tesla Model S recently with my friends to try it out, doing SF to Monterey and back.

When I was driving, holding my hands on the wheel the whole time, the car kept giving me notices to put my hands back on the wheel. After the third time it refused to re-engage the autopilot.

I guess my grip wasn’t strong enough for it to notice.

On the way back my friend did a few short stretches letting go of the wheel completely (still fully alert and hands an inch away), and the car failed to notice.


It's based on wheel torque, not grip or anything. People have bypassed the check by wedging an orange in one of the gaps in the steering wheel (so its weight applies a constant torque to the wheel).

(This is based on /r/teslamotors, so it could be out of date or wrong. >.> )


The orange seems to be an urban legend; there are several YouTube videos showing it not working. Perhaps it used to work. Tesla have also recently updated the software so that it requires more frequent applications of turning force. For a while after that the car kept disabling Autosteer because it thought I wasn't attentive enough.


One of my friends has a Model S and attaches a wrist weight to the wheel when he's on long interstate trips.


I hate when I see replies from Tesla stating that so-and-so didn't have their hands on the wheel for X number of seconds/minutes. It's bollocks. As a Model S owner, for a few weeks I didn't realise it was sensing turning pressure rather than grip pressure. So I had numerous warnings about my hands not being detected, even though they were constantly on the steering wheel.


True, but usually just the weight of your arms resting through your hands is enough to apply micro-turns to the wheel.

I believe the detection relies on torque + angular position (the Model S manual says it will sense resistance to wheel motion as well as motion itself), and the latter should be more sensitive, because those mechanisms are usually built as encoder-motor-high-ratio-transmission-load, so a small turn on the load side gets multiplied 10-100 times by the transmission before being sensed by the encoder.

I guess the lack of precision comes from Tesla wanting to avoid false positives


So the sensor is completely useless on a straight road.


Nah. Autosteer is constantly applying counter-torque unless you exceed the takeover limit, deactivating autosteer. Someone calculated the required force and it's something like 2.5 lbs. My commute has lots of straight freeway and resting my hand on the wheel works fine.
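
Just to make the mechanism concrete, here's a toy sketch of what a torque-only presence check like that might look like (all the thresholds and timings are made up for illustration, not Tesla's actual values):

    import time

    TORQUE_THRESHOLD_NM = 0.3   # hypothetical: minimum torque counted as "hands detected"
    TAKEOVER_LIMIT_NM = 3.0     # hypothetical: torque above this deactivates autosteer
    WARNING_AFTER_S = 30.0      # hypothetical: seconds without torque before nagging

    class HandsOnWheelMonitor:
        """Toy model of a torque-only check: it never sees grip, only how hard
        the wheel is being turned or resisted. That's why a light grip can still
        trigger warnings and a wedged weight can defeat it."""

        def __init__(self):
            self.last_torque_seen = time.monotonic()

        def update(self, driver_torque_nm):
            now = time.monotonic()
            if abs(driver_torque_nm) >= TAKEOVER_LIMIT_NM:
                return "disengage"           # driver overpowered autosteer
            if abs(driver_torque_nm) >= TORQUE_THRESHOLD_NM:
                self.last_torque_seen = now  # counts as hands on wheel
                return "ok"
            if now - self.last_torque_seen > WARNING_AFTER_S:
                return "warn"                # escalate to visual/audible nags
            return "ok"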


Suboptimal for sure.


There's a tiny in-cabin camera above the rear view mirror.


If Tesla backed down on the Autopilot branding and called the feature what it is, fancy cruise control (Cruise+ maybe?), they wouldn’t have to fight a PR battle every time something like this happens. It’s way too late for that now though, unfortunately.


Imagine a driver driving down a parkway at speed and thinking "I'm on a parkway, so I should shift the car into Park."

That's how dumb this argument is.

If a driver ignores the requirements of their driver license AND the vehicle's clearly worded warnings, then the driver is entirely responsible. Claiming that the driver is not at fault because they interpreted the word "autopilot" to mean "I can disregard my responsibility as a driver and ignore the vehicle's very clear warnings" is deeply absurd.


I would be more sympathetic to your argument if Tesla marketed Autopilot as a driver assistance feature. But that's not at all how they market it. When you go to Tesla's Autopilot page, you immediately see a video of a Tesla driving itself- the user doesn't touch the steering wheel for minutes as it navigates complicated scenarios: https://www.tesla.com/autopilot

Tesla's own Autopilot page video has this disclaimer in bold at the very start of the video: "THE PERSON IN THE DRIVER'S SEAT IS ONLY THERE FOR LEGAL REASONS. HE IS NOT DOING ANYTHING. THE CAR IS DRIVING ITSELF". Then there are multiple videos of Elon Musk driving hands-free on Autopilot for extended periods of time, like this one:

https://youtu.be/MO0vdNNzwxk?t=123

So Tesla can bury whatever carefully worded legalese they want in their terms and conditions, but they know that these videos will get shared and watched millions of times, and that they strongly convey the message that "Teslas drive themselves on Autopilot". These videos set the expectation of how Autopilot should function, and I don't blame Autopilot users for not understanding this.


So the driver is not there to take over if the system does something dangerous? But that's what Tesla says every driver killed in one of these crashes was not doing, and why they were at fault. This is an unambiguous case of Tesla sending a mixed message, and completely unacceptable.


You really have to be intentionally unsympathetic to come to this conclusion. On the AP page you linked, you literally scrolled past this description:

> All new Tesla cars come standard with advanced hardware capable of providing Autopilot features today, and full self-driving capabilities in the future

What's more, when you activate AP in a Tesla, a message appears to keep your hands on the wheel and be ready to take over at any time.


While the driver is ultimately responsible, your analogy makes no sense. Auto(matic) means pretty much the same thing in most languages: something that works without human intervention. An automatic gearbox shifts automatically, auto-shutdown shuts down automatically, and the autopilot pilots automatically. Or that’s what many people expect: even if confined only to motorways (as opposed to everywhere), it will be able to work by itself.

The park-parkway analogy is about as good as cruise control vs. cruise ship. It’s a coincidence of words that nobody will ever confuse.


> That's how dumb this argument is.

So, no problem changing the name then - because in your opinion the name isn't affecting purchaser or driver expectations.


That isn’t something you can extrapolate from what I wrote, but yes, of course they could change the name with minimal real consequences.

They could name it Jeff.


>Claiming that the driver is not at fault because they interpreted the word "autopilot" to mean "I can disregard my responsibility as a driver and ignore the vehicle's very clear warnings" is deeply absurd

Boy are you going to be in for a shock when the investigations start. "Autopilot", with the way Elon has promoted it, shouldn't slam into stationary objects.


What investigations?

Has Tesla ever said that Autopilot will stop for stationary objects?

Has Tesla ever said that Autopilot is intended to allow the driver to take his focus off the road?

I do think that Tesla can do more to emphasise that Autopilot isn't a synonym for full self driving, but that isn't an excuse for people who are supposed to be acting in accordance with their driver license.


... we all love Tesla but seriously?

We have a serious problem if "autopilot" cannot avoid an object in plain sight. Everyone interprets "autopilot" to mean self-driving capabilities. Some people understand that the performance of this system is tied to the conditions of the road. If you can't handle the most perfect road conditions, then you shouldn't call it autopilot. It is gross negligence. Tesla can add whatever warnings they want -- they will not win that case in the long run. Either they will lose in the court of law or in the court of public opinion.

I wonder how Tesla feels about lidar now.


I don't see what lidar has to do with any of this.

No car on sale today has lidar, and yet plenty of them perform speed-adaptive cruise control with varying degrees of steering assistance, from none to hands-off-wheel. They use radar. Tesla cars also use radar. The question is whether the various software algorithms that interpret sensor information are being too quick to dismiss the apparent presence of stationary objects on the road.

My understanding is that most—if not all—adaptive cruise control and autonomous emergency braking systems are unreliable when it comes to reacting to stationary objects at highway speed.


> Has Tesla ever said that Autopilot is intended to allow the driver to take his focus off the road?

What the hell is the point of it then?


I’d recommend reading some actual reviews of AutoPilot by the hundreds of thousands of people who use it every day, watching some YouTube videos, or reading some auto review sites that explain it.

You keep your attention on the road. AutoPilot “takes the strain” off micromanaging the vehicle in the lane while you do the executive planning.

It’s a supremely comfortable way to drive. But you are still responsible for driving the damn car.


IMO quick emergency stopping for surprise obstacles falls under "micromanagement" not "executive planning".


Okay, but with this autopilot you can be dead if your attention slips for 8 seconds, even when nobody else does anything wrong. Are we really convinced that Teslas are safer than humans when it fails in such fashion?


This is true of driving in any case. 8 seconds is an eternity at 70mph on the highway. You have to look out the window. You can not watch a movie, or check your phone, or fall asleep.

You can, for example, adjust the radio, or take a sip of coffee.


The vehicle's clearly worded warnings can be at odds with the reality of how they market their vehicles. If you don't believe me, read your car's manual some day. They go to the point of pretty much warning you to never drive.

If a warning makes a feature less than advertised, or is impractical in general, the company has still come short of their duty. The problem with Autopilot is that basic human nature will make you less attentive if it's taking that level of control. You can't warn that away.


I have a couple cars with TACC (tesla and not-tesla) and sit in bumper-to-bumper commute traffic daily. I will admit I sometimes let my mind wander because I trust the system to stop & go correctly in this kind of traffic scenario. However, if the system stopped working and rear-ended the person in front of me with no warning, I would be 100% ready to take the blame -- I'm behind the wheel, thus I am responsible, barring some kind of freak malfunction (pedals don't respond to input or something crazy)


The question of whether the driver should have had their hands on the wheel (clearly they should have) is distinct from the question of whether the Autopilot should have avoided this crash.


I agree, the issue in this case—and indeed with most Autopilot-related incidents—wasn't anything to do with the driver taking his hands off the wheel, it was the driver taking his eyes off the road.

The warning message says "Be prepared to take over at any time." That is clearly the exact opposite of "You may now focus your attention to that laptop on the passenger seat."

Yes it is a concern that Autopilot is failing in some circumstances. But whatever Tesla says or does, the road rules say that the human driver is always responsible in cases like this.


In fact, the driver should have had his eyes on the road, in order to have the degree of attentiveness that Tesla always claims was missing whenever this sort of crash occurs. Tesla cars are unable to detect whether the driver is doing that, and if they did, it would make clear that this technology is nowhere near as complete as Tesla claims.


They can drive a car automatically but not tell if a driver is awake and alert and looking at the road?

Seems that would be easier to me.


Would you buy a car that had a camera pointed at you constantly, recording for use in an accident investigation, uploading periodic short videos to Tesla for use in their training data?


Absolutely yes, because I don't drive like a stupid idiot—and if I ever did I'm not going to deny responsibility.

However I don't see why having a camera pointed at my face necessarily means that Tesla have to willingly provide information to insurance companies. As long as it is only available to law enforcement after being granted a court order, I'm fine with that.

(Note: I do not live in the USA. I trust the law enforcement in my country. My answer would be very different if I were a US resident. I've heard enough about law enforcement in the US to know that I'd never live there, and if I did, I wouldn't ever trust the police.)


Firstly, no one has a right to drive distracted, any more than they have the right to drive drunk. Secondly, monitoring the attentiveness of a driver, as a backstop for weaknesses in the automation, does not require recording. Thirdly, this might be a good point to remind ourselves that this is, according to Tesla, a beta-testing program. The goal should be to get to the point where constant driver attentiveness is not necessary, but until that is achieved, additional safety measures are justified.


It is ironic because autopilots on airplanes are actually really simple, but somehow when the term is applied to cars people expect abilities that airplane autopilots have never historically had.


From Wikipedia's "Autopilot" page:

>In 1947 a US Air Force C-54 made a transatlantic flight, including takeoff and landing, completely under the control of an autopilot.

Airplane autopilots are already at the fully autonomous stage. They have the advantage of air traffic control removing obstacles from their path, but IMO this just means ATC is part of the autopilot system. An autopilot is something that gets you from point A to point B with no input from the pilot/driver, and if it's not capable of doing that reliably then it's not an autopilot.


Er, you made a jump from "there were some flights that were done 100% on autopilot" to "airplane autopilots are completely reliable".


Airplane autopilots require human intervention only in rare emergency scenarios. Tesla's "autopilot" requires human intervention as standard. I don't think it deserves the name.


Define ‘rare’... I remember reading plane autopilot ‘disengagements’ aren’t actually rare or uncommon but happen 1000s of times every day worldwide.

Also, this is completely ignoring that a) autopilot doesn’t work at all without lots of infrastructure on the ground (try landing on autopilot when the airport ILS is down or absent, for example), and b) there’s a non-trivial set of flight conditions in which autopilot cannot be used at all.

People really should stop comparing plane autopilot to autonomous cars; the technologies share almost no similarities whatsoever. Plane autopilot doesn’t do any object recognition, AI, machine learning, adapting to the environment, etc. At the very best it will tell you when there’s another plane flying too close (based on an explicit location sent from the other plane or the ground), or when you’re about to run into the ground. That’s it. Everything else the autopilot does is just setting the flight controls to follow a preset path and ensuring the plane stays within the flight envelope.


I think a normal person’s interpretation of the “autopilot” term, even when it comes to planes, has nothing to do with the technicalities but rather with an overview of what it can do: fly the plane for hours with little to no intervention. A plane never needs to avoid lane dividers, though. And when the plane’s AP disengages, pilots have more than fractions of a second to avoid a crash and possibly death.

I think this is what people expect from AP now, because that’s how it’s marketed: to do the heavy lifting 90+% of the time, and when it fails and disengages it should give plenty of time to react.

Going forward AP will start to mean more and more “fully self driving”. Because that’s what some manufacturers claim it can do short of pesky legislation. So no supervision, no disengagement, just “piloting automatically”. And everyone pretty much has similar expectations from “automatically”.


> People really should stop comparing plane autopilot to autonomous cars

Agreed, but it's Tesla doing that for marketing purposes that's the cause of these comparisons, not the people objecting to Tesla doing that for marketing purposes. Aircraft autopilot systems are relatively simple systems with a lot of instrument support and redundancy that relieve pilots of the need to hold a yoke in normal flight (with the latter bit being what the public tends to focus on). Tesla offers extremely complex software which makes more decisions based on less straightforward inputs and relies entirely on the driver holding the controls at all times in case it makes a bad decision.


> Airplane autopilots require human intervention only in rare emergency scenarios

That is not even remotely true. Pilots must babysit the autopilot and do. When flying on autopilot you still constantly check instruments to make sure that the plane is going where it should, how it should, as fast as it should. They sometimes have faults, and they have caused crashes. Pilot training is VERY clear that AP can never be relied on.

E.g.: if you tell your AP to climb at 2,000 ft/min, many will happily stall you trying to maintain that climb well above the altitude where your engine(s) provide(s) enough power to do so; if you tell your AP to descend to 3,000 ft above sea level, most will happily fly you into a mountain that is in your way and rises to 4,000 feet above sea level (in a better-equipped plane, TAWS will beep at you to tell you about this, but most APs will press on); if you tell your AP to maintain altitude, many will happily overspeed the aircraft in a large updraft, nosing down to avoid the climb.


I think a good comparison here is TCAS [0] - installed on every airliner today, it's a system that could be summarized as 'you're going to hit another plane, do XYZ NOW to avoid death'. It doesn't take the actions for you - just tells you what to do, even if the aircraft was operating under autopilot at the time.

And yet autopilot in cars is expected to intervene in those cases and far more. (Of course, cars are in those situations far more often, which is partly why the expectations are different.)

0: https://en.wikipedia.org/wiki/Traffic_collision_avoidance_sy...


You mean it had collision detection and avoidance?


Most autopilots (especially on smaller planes) will happily crash you into buildings without a second thought. If we had a similar central control for cars, like the ATC that planes can rely on, this would rarely happen.


I'd describe it as less simple than adaptive cruise control in the clouds: they can put planes in holding patterns.


When your traffic lane is mostly a straight line in 3D and measured in thousands of feet rather than just a few feet, autopilot mostly just needs to maintain heading and speed.

You can get autopilots for boats as well, and they can even do collision avoidance using radar, but that is unobstructed 2D space with no real lanes in open water, and lanes measured in hundreds of feet or more in channels.

Cars operate in such close proximity, on obstructed, curved paths, that a safe "autopilot" is orders of magnitude more complex, which should be obvious. There is just no comparison, due to the space constraints cars must actively navigate.


Traditional autopilot is fancy cruise control for the airplane, so the name is actually pretty apt.


I don't think this argument counts; drivers are not airplane pilots and don't know the airplane lingo.


Also, that's not the public perception of what "autopilot" means.


Meh, there were plenty of similar stories when cruise control became a thing: people stuck travelling at high speed, people crashing too fast into things, etc.


Cruise control is just a simple control loop. AP provides much more functionality (both for convenience and safety). That would be similar to calling an automobile a carriage+.


> they wouldn’t have to have a PR battle every time something like this happens

Perhaps a PR battle is better than no PR at all.


Maybe I misunderstood, but why was the vehicle doing 68mph on a 55mph road? Is that the driver's fault or Autopilot's?

Perhaps they should rename it to assisted driving or something like that. I think I saw something about someone sleeping while on autopilot. There's only so much parenting the manufacturer of the car can do. The ethics of all this are much more complex of course - but perhaps the messaging needs to be clearer that autopilot is more akin to advanced cruise control as opposed to you can go to sleep, the car can drive itself.


> perhaps the messaging needs to be clearer that autopilot is more akin to advanced cruise control as opposed to you can go to sleep, the car can drive itself.

You mean like every other car company, and the exact opposite of tesla's entire marketing strategy?


> the exact opposite of tesla's entire marketing strategy

Citation needed. Do you have any examples of Tesla’s marketing strategy indicating that “you can sleep at the wheel”? (Not what they predict of the future of the system, but about its current state.)

The term autopilot comes from the aviation industry. Correct me if I am wrong, but I am pretty sure a pilot cannot go to sleep while autopilot is engaged either.


Full self driving hardware.

Elon Musk takes his hands off the wheel. https://www.businessinsider.co.za/elon-musk-breaks-tesla-aut...

Tesla very much makes it sound like you just need to sit back and enjoy a coffee while your car drives. Many imagine that the car will give them 5-10 seconds of heads-up before takeover is needed.


Musk was demonstrating auto braking. I don’t see him in that video or anywhere else promoting careless driving or sleeping at the wheel.

> Tesla very much makes it sound like you just need to sit back and enjoy a coffee while your car drives

Again citation needed. Although this might sound like a fact in the HN forums, it is far from the reality.


Citation needed?

Coast to coast autonomous drive, cross country summon mode, full self driving on highways from entry to exit, the fsd fake demo video with "driver is there only for legal reasons", million robotaxis that earn you 30k$ a year from 2020, lidar is a fools errand etc etc.

You need more? Here's Elon Musk in 2016 proclaiming self-driving to be easy and solved, showing an entire lack of understanding of what driving is like and why computers fail at it. https://youtu.be/wsixsRI-Sz4?t=1h17m57s


Let’s go over them one by one:

> Coast to coast autonomous drive, cross country summon mode

They have not claimed that they have achieved that yet.

> full self driving on highways from entry to exit

This is from the Tesla website when I search for the above: “The currently enabled features require active driver supervision and do not make the vehicle autonomous” [1]

> the fsd fake demo video with "driver is there only for legal reasons"

Based on what evidence do you claim it is “fake”? That video is to demonstrate FSD and show the future value of the system. It does not say here is how you should operate AP.

> million robotaxis that earn you 30k$ a year from 2020

Oh shit, is it 2020 already?

> lidar is a fools errand

How is that even relevant to this thread?

> You need more?

More than the zero citations you have provided so far? Yes. Yes I do need more.

> Here's Elon Musk in 2016 proclaiming self-driving to be easy and solved, showing an entire lack of understanding of what driving is like and why computers fail at it.

Again, not relevant here since we are not talking about future promises. The discussion is about how Tesla encourages its drivers to use the car today. If they said they were working on a flying car, would anyone try to drive their car off a cliff?

Again, if you think any of the above are not accurate please provide citations for your claim. (I’m starting to think folks around here don’t know what citation means)

[1] https://www.tesla.com/support/full-self-driving-capability-t...


First off, you need to remember that Tesla has made Musk's twitter feed an official announcement platform, so literally any promise, claim, or offhanded remark he makes there can (and often does) get treated as fact by people and by media.

Second, regarding your link, it also states you can

>[...] summon your car to come find you.

How can it do this but _also_ "require active driver supervision"? This statement implies that it can, without you in the car, drive to come find you, and it _definitely_ implies the vehicle is autonomous.

Yes, they also say that, like you said, but these are conflicting statements, and it's VERY easy, between this, the average Joe's understanding of autonomous cars, and Elon's behavior on Twitter, to see why so many people think that the Tesla is fully autonomous, and thus get into accidents like they have by simply not paying attention. You can link stuff from Tesla all you want showing how they do a good job of covering their ass, but if people are still doing this, then Tesla isn't doing a good job of making this clear. They simply hide it inside the legal documents so that when people _do_ treat the car as autonomous, they can't be sued for it.


> ... treated as fact by people and media

Sure it is a fact that Elon Musk predicts they will have robotaxis in a year. It is not however a fact that Elon Musk advises the drivers to drive carelessly.

> How can it do this but __also__ "require active driver supervision"? This statement implies ....

Yes it does. In a parking lot. At parking lot speed. While you are not in the car.

> and it _definitely_ implies the vehicle is autonomous.

If by autonomous you mean it can automatically come to you in a parking lot, then yes. If by autonomous you mean you can sleep behind the wheel while it is driving in the streets, then it _definitely_ does not imply that.

> But these are conflicting statements...

No they are not. One is about operation in a parking lot. One is about future functionality/value of the product. And one is about the current functionality of the product.

> people think that Tesla is fully autonomous and thus get into accidents like this

Citation needed. I'd say it is more likely that they are not using the car as intended, not that they mistakenly think their car is fully autonomous.

> ... covering their ass

Not sure what your point is here. So they are damned if they do damned if they don't?

> but if people are still doing this then Tesla isn't doing a good job of making this clear

People don't always use things as designed. This is not exclusive to Tesla cars.

> They simply hide it inside the legal documents

Not true. It is on the first page when you search for Tesla FSD. And I am pretty sure the warning is shown to the users at different steps before AP can be activated.


The start of their Autopilot marketing video [0] clearly states "The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself."

[0] https://www.tesla.com/autopilot


Your citation doesn't support your point at all. It’s not an Autopilot marketing video. It’s showing how the future could look. It’s probably using a then-current development version of some FSD software.

What does the header say right above that video? In a really big headline font, that’s really hard to miss?

“Future of Driving”


> perhaps the messaging needs to be clearer that autopilot is more akin to advanced cruise control as opposed to you can go to sleep, the car can drive itself

Tesla (and more specifically, Elon) seem to continually go out of their way to make that less clear, not more.


Less clear in the marketing materials, but when you actually use the car it's very clear.


Ahh, very useful when people die. It was their fault for falling for Tesla's lies.


I think this is a tired argument. Every time you turn it on it says please be ready to take over at any time. It gives constant messaging that your hands need to be on the wheel. No one driving it should think that you can fall asleep.

A better angle would be that some people who were gonna fall asleep anyways had their lives saved.


I think that's a question of culture. You agree to an EULA every time you use anything. Any time you call up customer service "Your calls are recorded for quality assurance purposes". All of the new party drugs you can buy have "Not for human consumption".

Warning labels are not an effective way of communicating to people what the limits on a product are. Especially so when the company's CEO heavily implies that those warning labels are exclusively there for regulatory reasons.

On the one hand you have a boilerplate warning; on the other hand you have the multi-billionaire genius CEO of Tesla doing interviews with 60 Minutes, making a point of taking his hands off the wheel completely and bragging about it. [1]

>A better angle would be that, some people who were gonna fall asleep anyways, had their lives saved.

I don't buy this, people are more likely to not pay attention when they think they've been told they can not pay attention.

[1] - https://www.cbsnews.com/video/tesla-ceo-elon-musk-the-2018-6...


>Warning labels are not an effective way of communicating to people what the limits on a product are. Especially so when the company's CEO heavily implies that those warning labels are exclusively there for regulatory reasons.

emphasis added


It seems like more people will fall asleep due to a false sense of security than would have fallen asleep without autopilot turned on.


"Some people who were going to drink and drive anyways had their lives saved. We should be encouraging them to drink and drive, and do it in a Tesla!"


> but perhaps the messaging needs to be clearer that autopilot is more akin to advanced cruise control

Maybe. Here's the thing. The feature, by its very nature, does not punish inattention like normal driving does. If you let go of the wheel and focus on your phone (even using your knee to hold the wheel), chances are the longer you are distracted, the more likely you are to have a minor wakeup that says "don't do that." Maybe your car starts to hit the bumps between lanes. Maybe your peripheral vision picks up something and you get an adrenaline jolt as you refocus attention on the road.

AP is not like that. 99.9% of the time it does a pretty good job on the expressway. (95% if you are in traffic and have defensive driving tendencies where you are projecting ahead 10-15 seconds and moving lanes accordingly.) So you literally can look at your phone for 15 seconds, look up, and find your car at a reasonable distance from other cars and properly in the lane. All of the normal "don't do that" feedback loops for inattention are removed.

But then there's that unusual case, like this one, where AP gets it wrong and the consequences are extreme. Is this different from normal inattention? I think yes because normal inattention often leaves you feeling "I just took a risk" due to the natural, and usually small, consequences.

My point is this: It may not be that the labeling of the feature is dangerous. It may be that the feature itself, by its very nature of removing the less serious consequences of inattention, is dangerous.


"My point is this: It may not be that the labeling of the feature is dangerous. It may be that the feature itself, by its very nature of removing the less serious consequences of inattention, is dangerous."

This reminds me of sports car reviews where they talk about how the combination of extremely grippy modern tires and traction/stability control mean that there is no warning before losing control, whereas older cars had gradual feedback.


Is it a question of marketing/naming or of feature design?

Ultimately, I'm skeptical these are a good idea regardless of messaging.

I can see the appeal. Drivers get to experience cool new tech. Tesla/bmw/etc get a ramp that lets them develop autonomy gradually.

But at least in my uneducated opinion, autonomy-with-human-on-standby is a bad idea. Uncanny valley and whatnot.


Nearly 1.25 million people die in road crashes each year, on average 3,287 deaths a day

A human in the loop is a terrible idea. We love doing make-up, texting, driving drunk, etc. We love road rage and silly, egotistical lane changes or blocking, and we reduce the efficiency of the roadways. It was fine at the speed of horses, but not anymore. Especially with smartphones in the loop.

Partial autonomy and then transitioning to full autonomy is the future of transportation.

A car should be more like an elevator in functionality. Press a button, and take me there.


It should, sure. But full autonomy does not exist right now and is as hypothetical as general AI. Elon promised a coast-to-coast road trip across the US all the way back in 2016. If it was at all possible, wouldn’t Tesla have hit at least that milestone by now? They couldn’t even drive the Autonomy Day demos more than 10 minutes without needing human intervention. And sure, Waymo has good tech that is far beyond Tesla’s, but those cars are very tightly controlled and geofenced and driving with lidar. Still a long way from full autonomous driving.


> If it was at all possible wouldn’t Tesla have hit at least that milestone by now?

What? Because Elon wasn't able to achieve a milestone you consider it not possible at all? That's just absurd.

> Still a long way from full autonomous driving.

Yeah, still a long way from full autonomous driving, but having that training data makes it that much closer. These cars have driven more miles than they ever could on test roads in a century.


Tesla’s self driving strategy is to throw more data at a neural network until it learns to drive. That’s as out of touch with reality as a person thinking they can learn to fly if they just flap their arms enough times.


I think most people agree with a big chunk of that.

The question is, does the path to full autonomy necessarily lie through partial autonomy? And related to that - is partial autonomy actually more dangerous than no autonomy at all?

Personally I think that full autonomy might be easier to hit from scratch on prepared roads, with sensors built into the environment, and the roads/lanes not shared with pedestrians/animals/human-driven cars.

I would consider making the equivalent of e.g. the London low emissions zone into an autonomous only zone, with the roads, environment and static cameras providing assistance to the cars, possibly centrally guided, and extending it out from there.


The driver sets the speed, just like any cruise control. It’s controlled through the right hand scroll wheel on the steering wheel.


> It’s controlled through the right hand scroll wheel on the steering wheel.

Not on my 2015 S 80D. There it is a small stalk on the left of the steering column. Up to go faster, down to go slower. Double-pull toward the driver to enable Autosteer.


As an owner and autonomous driving fan I keep asking the same question: will autonomous cars be legislated to always obey the law, or will that still be a judgment left to the driver if the car can be driven either way?

I am surprised there are not laws requiring cars to alert you to the fact that you're speeding, for the systems that can read speed limit signs in addition to having current maps.


>>I am surprised there are not laws requiring cars to alert you to the fact that you're speeding, for the systems that can read speed limit signs in addition to having current maps.

Well, there are systems that do this. Many new cars have speed limit sign recognition built in and they will display the current limit on the dashboard (and sometimes show you an icon indicating that the vehicle thinks you are speeding).

The reason this is not more aggressive is that it's only seemingly a simple problem - in reality it's far, far from it. In my experience driving around the EU in a Mercedes CLS (brand new 2019 model) with adaptive cruise control and speed limit sign recognition: while amazing when it worked, it worked maybe 50% of the time. Issues I noticed:

1) it kept detecting speed limit signs on roads parallel to the one I was on [0]

2) it kept detecting speed limit signs on side roads - leading to actually dangerous situations, because I was on a 50mph road, going past a 20mph side street, and the car would suddenly hit the brakes because it detected a 20mph speed limit. That's 10000% unacceptable in a production vehicle. [1]

3) it kept reading speed limit signs on the slip roads leaving the motorway - so you're doing 70mph on the motorway, and it suddenly reads the 50mph speed limit that's for the motorway exit. Same problem as with point 2.

4) In situations where there are no speed limit signs, it would use the map data - and even though this was a brand new, March 2019 vehicle, it had no idea that a stretch of motorway next to my house is not limited to 50mph anymore, and hasn't been for the last 2 years.

In practice, I had to keep overriding the speed limit chosen by the car constantly, and after a few days I switched it fully off, as it led to me nearly shitting my pants a few times when the car hit the brakes for seemingly no reason whatsoever.

And then I have concerns about countries where the speed limit is only valid up to the nearest intersection - and the definition of an intersection varies. There are paved roads merging with the main road which are not intersections, because the side road leads to private property, and then there are dirt tracks which are intersections, because technically it's a public road open to motor traffic. There are no signs to indicate either - the driver is just "supposed to know". It's idiotic, I know. And in some countries (Poland) the speed limit can change as often as 4-5 times in the span of a single kilometre. What's even worse is that there are situations where a sign can actually increase(!!!!) the speed limit set by a certain zone (built-up zones have a 50km/h speed limit, but this can be raised by a regular speed limit sign - and then yes, it goes back down(!!!) to 50km/h after an intersection).

All of this stuff would be incredibly hard to code into a system like this, and it's one of the reasons why I suspect the proposal made by EU to have automatic speed limiters built into cars will either fail, or people will just keep disabling it altogether, making the whole idea pointless.

[0] Here it would think the side road has a speed limit of 50mph, because it would read the wrong sign:

https://www.google.pl/maps/@54.9929676,-1.5788994,3a,75y,78....

[1] Here for example the 20mph sign is placed at an angle to the road, so the car would read it going past it - and set the adaptive cruise control to 20mph which is incorrect for this street:

https://www.google.pl/maps/@55.0074774,-1.615172,3a,41.3y,90...

Same here:

https://www.google.pl/maps/@55.0071611,-1.6214135,3a,75y,240...
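
Circling back to failure modes 1-3 above (parallel roads, side streets, slip roads): they all come down to deciding whether a detected sign actually applies to the road you're on. A toy illustration of that kind of relevance filter, with every field and threshold invented for the example:

    from dataclasses import dataclass

    @dataclass
    class SignDetection:
        limit_mph: int
        lateral_offset_m: float   # distance of the sign from the centre of my lane
        facing_angle_deg: float   # angle between the sign face and my direction of travel
        heading_delta_deg: float  # difference between the sign's road heading and mine

    def is_relevant(sign, max_offset_m=6.0, max_facing_deg=25.0, max_heading_deg=20.0):
        """Reject detections that probably belong to a parallel road, a side
        street, or a slip road. Thresholds are made up for illustration."""
        if sign.lateral_offset_m > max_offset_m:      # likely a parallel road
            return False
        if sign.facing_angle_deg > max_facing_deg:    # angled sign for a side street
            return False
        if sign.heading_delta_deg > max_heading_deg:  # slip road diverging from mine
            return False
        return True

    # A 20mph sign angled 40 degrees away from my road gets ignored.
    print(is_relevant(SignDetection(20, 3.0, 40.0, 5.0)))  # False

Of course, estimating those offsets and angles reliably from a camera is exactly the hard part, which is why these systems still get it wrong.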


Not to mention that "speed limit recognition" is a huge waste of resources in the era of GPS. If countries want that, they should just keep an up-to-date database of roads with their speed limits, and update that appropriately. Alternatively, companies could drive cars around (like Google does for its Maps), run speed limit recognition, do manual adjustments, and upload that to their online maps.
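
A minimal sketch of what that database-first approach could look like, with camera recognition only as a fallback (the sample rows, function names, and matching radius are all invented for illustration):

    from math import radians, sin, cos, asin, sqrt

    # Invented sample data: (lat, lon of a road segment reference point, limit in km/h).
    SPEED_LIMIT_DB = [
        (52.2297, 21.0122, 50),
        (52.4064, 16.9252, 140),
    ]

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two WGS84 points."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 6371000 * 2 * asin(sqrt(a))

    def speed_limit(lat, lon, camera_reading=None, max_match_m=30):
        """Prefer the curated map; fall back to the camera if no segment is close enough."""
        best = min(SPEED_LIMIT_DB, key=lambda row: haversine_m(lat, lon, row[0], row[1]))
        if haversine_m(lat, lon, best[0], best[1]) <= max_match_m:
            return best[2]
        return camera_reading  # may be None: "limit unknown"

The hard parts, of course, are keeping the database current and dealing with GPS error near overlapping or parallel roads.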


Well.....I wouldn't trust that either.

My sister has an insurance policy with a tracker (because she's quite young and there was no other option for her) and one day we noticed that her insurance score had fallen through the floor: hugely negative points for... speeding. Of course the website doesn't tell you where or when, just that you're being penalized for being naughty. OK, so we call them up, and they say they will send us the exact coordinates where the "offence" happened. OK, cool - and it turns out that they thought she was driving 70mph on a road running next to the motorway - which had a 20mph speed limit. Which was literally physically impossible, as the previous GPS points were showing her on the motorway and there was no direct exit to that smaller 20mph road. So the GPS was off by maybe 10-20 metres at most, but the insurer automatically penalized her massively for speeding. What's stopping the same system from failing when used for automatic speed limit adjustment? Especially in some extremely dense cities where you literally have roads of varying speed limits overlapping each other.

The insurer did correct that score penalty manually after I complained. But it's still a joke that it could make that mistake in the first place, especially since they can literally cancel your insurance due to speeding.


Unfortunately, you really can't solve these kinds of questions with the full on deterministic approach. The world is big, speed limits change, databases have to be updated.

Google Maps was (dangerously, in the winter) directing people onto an old road into the mountains after a new one had been built. It took national news and several months of attempts to get them to update it.

The "just build a big, very accurate database" approach is the one I would absolutely guarantee will never work.


Google also for a long time didn’t properly account for Rock Creek Parkway in DC going from one-way south, to bidirectional, to one-way north depending on the time of day. Also, it still doesn’t realize that the street I live on is a pedestrian street with car barriers at every other intersection. (You have to drive down the correct subdivision road from the main road, instead of just getting into the subdivision and expecting to be able to drive down the pedestrian road to the right house.) So Uber drivers will constantly call me saying they are stuck and can’t figure out how to get to my house. (The road was turned into a pedestrian road maybe 5-7 years ago now.)


Roads which have restrictions changing on an hourly/daily basis Google doesn't understand, but there will be a warning in the directions ('this route contains private or restricted roads').

If there is something out of date, hit the report button and it's typically acknowledged immediately, confirmed in a few hours, and public within a few days.


Google recognizes it now, but it took years. (For a major thoroughfare in the nation’s capital.) Just reinforcing OP’s point that you can’t rely on the map being up to date.


You still need to deal with temporary speed limits in construction zones.


And smart motorways


And AP1 does that perfectly.


> and it's one of the reasons why I suspect the proposal made by EU to have automatic speed limiters built into cars will either fail, or people will just keep disabling it altogether, making the whole idea pointless.

If it's a proposal by the government, they could do better: they could mandate not only automatic speed limiters, but also something embedded in the roads like balises (https://en.wikipedia.org/wiki/Balise) to reliably indicate to the car the correct speed limit.


My 2015 S 70D is much better at recognizing speed limits than your Mercedes. It gets them wrong rarely and in fact notices signs that I miss. There is one parallel road where it persistently, incorrectly recognizes the speed limit as belonging to the road I am on, but that's just one out of the many thousands of km of road that I have driven in the car over the last 18 months.


> Many new cars have speed limit sign recognition built in and they will display the current limit on the dashboard

Tesla Model 3 has this feature and, while autopilot is engaged, will decrease the speed of the car when the speed limit decreases.


So, just like that Mercedes CLS I was talking about. It will keep itself in its lane and slow down for any speed limit that it detects. My point in that entire wall of text was that this detection of limits is unreliable, because in real life the signs are sometimes placed in such a way that the cameras detect them as applying to the road you are currently on while in reality they do not.


Eh, not quite. Only the HW1 Model S did sign recognition based on the camera; HW2+ currently just uses a GPS database.


I'm only half joking when I call for a bloodlust dial with a murder level on it.

The core idea is to continue to grant enough agency to the operator that they are held responsible as long as the vehicle faithfully executed the intent.

The tongue-in-cheek bloodsport throttle in the real world would instead be a proactive user choice of how much human guidance and interaction is required (solving the handoff problem: for example, 1 minute automatic, 1 minute manual, or an automatic system that makes you irritatingly press a button every few minutes to stay engaged).

If a user options for "zero engagement" (let's pretend they're drunk or just got out of surgery), then there has to be a visceral experience of the underlying risks; a constant beeping and flashing, some real sense of urgency and warning.

I know it's a luxury product, but cars are machines of carnage where death is never more than a mistake and mere seconds away. That reality should supersede.

This may eventually be removed, but a system like this would honestly solve all the current issues.


>an automatic system that you'll need to irritatingly press a button every few minutes to stay engaged

Just like trains! There is an alarm system just as you describe. After a certain amount of time, there are audio and visual warnings for the engineer. The engineer must engage the switch to "snooze" the alarm, and in the event that the engineer does not engage it, the train automatically cuts power to the throttle and brakes.

I'm not sure flashing lights and a klaxon are what we would want in a car, but maybe beeping akin to lane assist that quickly increases in volume + a light on the dash would be a good start.


It works that way in a Tesla. If the system doesn't detect your hands on the wheel (using rotational force on the steering wheel as a proxy), first a message appears on the screen, then seconds later the screen flashes blue, then a few seconds later the volume on the radio is lowered and the car beeps at you urgently. If you still don't touch the wheel, a few seconds later the hazard lights come on and the car slows to a stop.
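
The escalation described above is essentially a timer-driven ladder, something like the hypothetical sketch below (the step durations are placeholders, not Tesla's actual timings):

    # Hypothetical escalation ladder: seconds without detected hands -> action.
    ESCALATION_STEPS = [
        (10, "show 'hold steering wheel' message"),
        (15, "flash the screen blue"),
        (20, "lower radio volume and beep urgently"),
        (30, "turn on hazard lights and slow to a stop"),
    ]

    def current_action(seconds_hands_off):
        """Return the most severe action reached for a given hands-off duration."""
        action = "none"
        for threshold, step in ESCALATION_STEPS:
            if seconds_hands_off >= threshold:
                action = step
        return action

    # Example: after 22 seconds hands-off the car would be beeping at the driver.
    print(current_action(22))  # lower radio volume and beep urgently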


> I am surprised there are not laws requiring cars to alert you to the fact that you're speeding

Europe is mandating this (combined with limiting) for new cars soon. It will be possible to override 'temporarily'.


It seems inevitable that autonomous vehicles will be required to obey speed limits in the future. The question in my mind is will it be the same speed limits as for humans?


Yes, because:

1) I don't believe we'll go "autonomous cars only" for the foreseeable future. I can see that happening on a limited set of streets (say, strict city centres), but not overall. Mostly because I think requiring everyone to buy an autonomous car is ridiculous, and there are cars being made even right now that I can totally see people driving 20/30/50 years from now. We'll always allow manual cars on the road, just like we still allow horses - it's just become so rare that in practice it's not an issue to have an animal with no safety features on the road next to modern motor vehicles.

2) there will always be users of the road that cannot be made autonomous - motorcycles. Unless you believe that motorcycles will be banned, but I don't think that will happen.

So yes, I think any speed limits of the future will be set for humans first and foremost. I do look forward to those speed limits being revised up, though - cars are far safer and more efficient at higher speeds than they were a few decades ago, and yet speed limits have never been revised to reflect that. The US is particularly bad at this, with empty stretches of straight road having a speed limit of 50mph, for instance. That should have been fixed years ago.


I don't see how that is even a question without 100% autonomy and proven above-human performance. Human drivers on the same roads still have to react, so different speed limits don't seem to make much sense.


Well, seeing as you can get a ticket for going at or under the speed limit in some major cities because it is a hazard, that might be interesting.


I believe the law in NY state uses the phrase "impeding traffic" and doesn't reference the speed limit. So (although I am not a lawyer) I imagine you could violate it whether you were going under, at, or over, the limit.


We already have different speed limits for different users in some highways (for instance, one major highway near where I live has a limit in some places of 110 km/h for cars and 90 km/h for trucks and buses), so there's precedent. But I believe that, at least initially, speed limits for self-driving vehicles will be lower.


The driver sets the speed. Unless Autosteer determines that it is on a two-lane road with no central reservation or other divider, in which case it restricts the speed to the speed limit plus whatever margin the user has set. It never increases the speed beyond what the user has set. At least that's how my 2015 S 70D with AP1 (1.5?) works. And it's pretty reliable. I think I have driven about 30k km in the last 18 months and I must have had Autosteer on for close to half of that. It misbehaves a little on the kinds of roads that Tesla says it is unsuited for, but still, the car and I drive better together than either would alone.

Where anyone who actually drives a Tesla would get the idea that you could sleep while the car drives I can't imagine. The configuration screen where you enable it makes it perfectly clear that this is a driver assistance function that requires the driver to be able to take control. And every time you enable the function with the steering column stalk it reminds you in the dash to keep your hands on the wheel.

Yachtsmen sleep while on autopilot, but that kind of autopilot is considerably less capable than Tesla's and is only safe while in the open sea away from busy shipping lanes; ditto aircraft. As far as I know, commercial aircraft require at least one pilot to be awake and in the seat in the cockpit at all times, whether or not the autopilot is on.


>Perhaps they should rename it to assisted driving or something like that.

Only a few short weeks ago Elon proclaimed Tesla would have 1 million fully autonomous, value-appreciating robo-taxis driving the country within a year. Way too late for them to admit their technology is nothing but "assisted driving".


It doesn't seem to have been mentioned here that this happened over a year ago, in March 2018.

I believe that the time between each crash is lengthening, even as the number of cars with AP balloons. This would speak to the regular improvement.



No, 2018 is the model of the car. Commenter above misread the first sentence.

Accident happened on March 1, 2019, and the car was a 2018 Model 3.

> Tesla Inc’s Autopilot system was engaged during a fatal March 1 crash of a 2018 Model 3 in Delray Beach...


Or that people are taking the “hands on wheel” recommendation seriously now that they know the AP might kill them.

Since Tesla doesn't say how many times drivers had to intervene to avoid a crash, we can have no relevant statistic.


How would they know?


The same way they know the AP is safe over x billions of miles and saved lives of course.

Both Tesla and Musk do everything in their power to overhype the AP at any cost (for drivers). Between insisting it's overall safer than human drivers and Musk suggesting the driver supervision is only needed due to pesky regulations, we can assume they already have all the needed supporting data. This means they know who was responsible for saving the situation each time. Which makes them not publishing any kind of data (beyond a report that was already refuted [0]) that much more telling.

> The consequences of the public not using Autopilot, because of an inaccurate belief that it is less safe, would be extremely severe. [1]

Supported by comparing the number of deaths between a fleet of new, modern cars driving almost exclusively on freeways in good(ish) weather and a fleet of random cars even decades old, in every possible road, traffic, and weather conditions.

In the meantime, Tesla's AP is the only such system that has killed multiple drivers; infinitely more than any other system, if we want to make a statistic of it. Don't you think it's reasonable to assume that's because its capabilities are overhyped to the point where drivers are so confident they actually put their lives on the line?

This isn't trying to say the AP is bad, just that Tesla and Musk created unrealistic expectations at the expense of people's lives.

[0] https://arstechnica.com/cars/2018/05/sorry-elon-musk-theres-...

[1] https://www.tesla.com/blog/update-last-week’s-accident


Every incident is obviously tragic, but statistically, with the number of Teslas on the road, is self-driving roughly on par with the expected number of deadly accidents for manually driven cars? This is still a nascent technology with a lot of promise, and it is naive to expect that there will never be a fatality or issue.


Statistically, Teslas are much, much more dangerous than other new cars in the same price bracket.[1] To be fair, I suspect the driver demographics skew younger, more male and more urban than for most luxury cars, and that has at least as much impact as any actual driving characteristics, but the statistics really don't bear out the "people killed by glitches or systematic flaws in Tesla's software pale in comparison with those saved by it" line of defence.

I find it truly remarkable how on the one hand HN can rage for days over Boeing introducing software with flaws that killed people and on the other hand when it's Tesla's software flaws killing people in a similarly repeatable manner, the general consensus veers towards deaths from poor software implementation not mattering nearly as much as the potential of the technology...

[1]https://medium.com/@MidwesternHedgi/teslas-driver-fatality-r...


>I find it truly remarkable how on the one hand HN can rage for days over Boeing introducing software with flaws that killed people and on the other hand when it's Tesla's software flaws killing people in a similarly repeatable manner

In the Boeing crashes, the pilots were aware of the problem and trying to mitigate it, but the system prevented that and ended up crashing the plane.

If the Tesla had kept accelerating and steered straight into the truck after the driver disengaged Autopilot because he saw the truck, this would be a fair comparison. We don't know whether that happened, but it seems highly unlikely.


Sure, the two are not exactly the same, as the MAXes appear to have crashed because some pilots weren't trained/knowledgeable enough to understand how to address a software malfunction, whereas the Teslas appear to have crashed because some drivers weren't trained/skilful enough to react quickly enough to a software malfunction.

I'm not convinced that distinction makes a tendency to accelerate into static objects a comparatively acceptable teething problem for the autopilot software though, even less so for people who support Tesla's stated aim of full autonomy asap...


>I find it truly remarkable how on the one hand HN can rage for days over Boeing introducing software with flaws that killed people and on the other hand when it's Tesla's software flaws killing people in a similarly repeatable manner, the general consensus veers towards deaths from poor software implementation not mattering nearly as much as the potential of the technology...

Tesla is a virtuous startup. Boeing is an evil defense contractor. What do you expect.


They may or may not be evil, but Tesla is a defense contractor, too. https://dod.defense.gov/News/Contracts/Contract-View/Article...:

”Tesla Industries Inc, New Castle, Delaware, has been awarded a maximum $17,056,347 firm-fixed-price contract for battery power supplies.”


The Medium article referenced points out the difficulties in getting accurate, up-to-date information out of the NHTSA (National Highway Traffic Safety Administration) database. The survey described in the article includes data about the Model 3.

Also, they're shorting Tesla. They say that their position on Tesla is due to their research (as opposed to the outcome of their research being driven by their attitude), and that's fair. Nonetheless, I'd be as skeptical of their claim that Teslas are significantly more dangerous than average as I am of Musk's claim that full autonomy will be available and safe before 2020.


>I find it truly remarkable how on the one hand HN can rage for days over Boeing introducing software with flaws that killed people and on the other hand when it's Tesla's software flaws killing people in a similarly repeatable manner, the general consensus veers towards deaths from poor software implementation not mattering nearly as much as the potential of the technology

And you can replace Boeing with banks, car companies, oil companies or any other legacy company. Fraud, worker exploitation, shoddy product...these things are acceptable to some here, as long as the company has come from Silicon Valley.


Are statistics available on airbag deployments?

It seems the numbers of deaths are low enough to make stats difficult, but 'car crashed badly enough to make the airbag deploy' might have more data, and might also help separate out the effects of autopilot vs crumple zone design.


If you read this article, definitely read the top-rated comments too.


> Statistically, Teslas are much, much more dangerous than other new cars in the same price bracket.[1] To be fair [...]

To be fair, how many of the other new cars in the same price bracket and category class (midsize sedan, large sedan) can be ordered with factory 700+hp and sub-3.3s 0-100km/h times?

I haven't seen numbers but I'd suspect that the vast majority of Tesla fatalities are due to the fact that the cars are stupidly much faster than anything new Tesla drivers will have driven before.


The Model S with the performance characteristics you mention starts around $90k (or $110k with the Ludicrous mode option). At this level you can start getting into 911s, BMW M4/M5, or the highest-trim American muscle cars. These cars have comparable acceleration figures to the performance Model S, and the sports cars will thrash it in any situation that doesn't involve 0-60 in a straight line.

Edit: I reread your post and realized you are talking about midsize sedans and larger. The Mercedes-AMG E63 is a better counterexample. Most of the cars I mentioned don't have 700hp either, but you don't need it when you start with a 500-1,000 lb weight advantage.


Part of the discrepancy is also that I'm from the Australian car market and our prices for fast cars are insane compared with the U.S. so my baseline is skewed. :/

Also, a P3D is much cheaper for similar performance (https://forums.tesla.com/forum/forums/p3d-base-price), and the 3.3s 0-100 mentioned was for a P3D+, not an S100D.


oh okay, I shouldn't assume everyone can buy cars at the US price. still if you move down to BMW m3 or amg c63, price and performance are comparable to p3d+ (though the Tesla does have a 0.3 - 0.7 lead in straight line acceleration).

I think your point still has some merit though. it could be that fast and quiet EVs don't provide as much sensory feedback to say "hey guy, you're actually going really fast". it could also be that people who are not usually interested in performance ICE vehicles might go for a performance EV for the novelty.


tbf I tend to agree and nearly put "sportier" as another reason why Tesla might have more driver errors than, say, a 5 series BMW (through more aggressive drivers choosing Tesla, or it being more tempting to manually drive too fast) but wanted to avoid a completely tangential debate on the extent to which Tesla was responsible for that (unlike young men on busy exurban freeway commutes having more accidents than the average driver regardless of car type, which definitely can't be blamed on anyone at Tesla). But the key point is that not only is gleaning statistically significant and properly controlled info from accident data hard, but also a lot of aggregate stats don't paint the rosy picture Tesla's PR figures do


One of the 4 fatalities investigated did happen after the Tesla's driver was trying to outrun a police car... Not exactly Tesla's fault!


Statistics don't include situations where Tesla went wrong and the driver manually intervened to avoid an accident. One is just human drivers and the other is a team effort between human drivers and AI, if the results are the same then that's quite condemning of the AI.


At the moment the relevant choice is between human and human+AI, not human vs only AI. And I believe human+AI outperforms in what rough statistics we do have.


We'll have 'full self-driving' in a year, so option 3 is on the table very soon, according to Tesla's marketing.

I assume that this is, once again, mostly PR in terms of how autonomous the AI will be, but many people may buy it and then get blamed when accidents happen.


Of course. But there obviously is a real issue with overhanging obstacles, since there have been multiple of these crashes now.

Do we really accept a known flaw just because it's still better than the alternative?


.. yes? I mean, it depends on what you mean by "accept", but yes we choose the option that has the least risk of killing people, even if it has "issues", because those issues are already encapsulated in the relative risk estimates.

Airline flying is safer than driving even though planes have known issues too.

Really struggling to find a logical point in here. Maybe some nuance I'm missing?


Interesting that you would bring up airplanes. Because when the Max 8 is suspected to have a systemic problem which may repeatably cause crashes, the entire worldwide fleet of them gets grounded. No one makes the argument that a Max 8 with a faulty AOA sensor is still safer than driving your car so whatever.


> Do we really accept a known flaw just because it's still better than the alternative?

Would you really choose an overall-worse alternative because the better option has a single known flaw?

If you're that worried about Autopilot, just don't buy or use it. The rest of the car is still best in class.

Edit: Be less inflammatory.


Honestly? Yes. We take the best option available at current time, and simultaneously work on improvements.


So... human drivers.


Adding Lidar is an alternative - but Tesla/Musk keep ragging on it as their cars keep slamming into objects


Elon needs to turn it down a notch on marketing the autopilot system as a full self-driving option or he risks becoming seen as the Theranos of automobiles


Tesla PR on this incident: "our data shows that, when used properly by an attentive driver who is prepared to take control at all times, drivers supported by Autopilot are safer than those operating without assistance" [1]

Elon: Let's call it "Autopilot," also "Full self-driving capability is there" [2], and "We already have full-self driving capability on highways" [3]

Tesla's driver assistance software is extremely cool, but the Elon-driven hype is ridiculous, especially when compared to how Tesla PR has to describe it.

[1] https://arstechnica.com/cars/2019/05/feds-autopilot-was-acti... [2] https://www.theverge.com/2019/1/30/18204427/tesla-autopilot-... [3] https://www.teslarati.com/tesla-elon-musk-full-self-driving-...


And of space exploration, transit tunnels, lithium batteries, and solar panels!


He already is the Theranos of solar panels. He promised a fake product (the solar roof) to bail himself and his family out of a failed company.


Except that they have actually installed solar roof on a few dozen houses as test cases while they continue to refine their product.

Theranos never had a functioning product; they used traditional equipment secretly behind the scenes.

Is this house secretly using solar panels?

https://youtu.be/zMR10vbo3A0

Tesla wants to iterate on Solar Roof a few more times to increase yield and performance before they install it on 10,000 houses with a Lifetime glass warranty and a 30 year weatherization warranty.


The value proposition is fake, not the technology. Solar roof tiles have been commercially available since 2005; they are just too expensive.

Musk promised that their roof tiles will be cheaper than regular roofs. I have seen zero evidence of that. The roofs on the houses you mention cost WAY more than regular roofs.


Cheaper than regular slate roofs. Not cheaper than asphalt roofs. Geez.

And that’s before factoring in the energy generation.

It’s always going to be a premium product for people who want solar but also want a roof that looks amazing;

> “The key is to make solar look good,” said Musk at the time. “We want you to call your neighbors over and say, ‘Check out this sweet roof.’”


Precisely. Musk is a con artist, none of his businesses are or ever will be profitable. He exists to facilitate the flow of money from middle class to the powerful few.


This is just plain libel. SpaceX is profitable. Tesla has been profitable on a quarterly basis.


What does he have to do with Panasonic's batteries?


Panasonic is a joint owner of the battery plant.


But the battery tech is theirs. Tesla does some impressive stuff with cooling and power management, sure, but not batteries themselves.


Terrible analogy given that Tesla has created literally the best automobiles ever, and Theranos created nothing.


> literally the best automobiles ever

I'd call that a huge exaggeration.


At the Model 3 price point? I own a TM3 and there’s no question in my mind that it’s far and above the best.

That’s my opinion, but factually it is the best selling luxury car in the US.


Which, no matter how much you might like it, does not make it anything approaching a luxury car, no matter how Elon tries to spin it. Even the Model S doesn't quite cut it as a luxury car, unless you consider a $30K Hyundai a luxury car. But then, the Hyundai is much better built...


I always wonder if the Autopilot software has an exception that lets it drive through white rectangular objects in the middle of the road, because tunnel exits can look like a white rectangle to a low dynamic range camera.


I suspect the real problem will turn out to be extremely deep and difficult: human drivers drive not only with great pattern recognition around shapes, but also with an actual understanding of the idea of a truck, the idea of a tunnel, and the idea of a sign. There is a substantial chance that full self-driving will turn out to be a fully general, human-level AI problem.

That said, even the existing partial self-driving is extremely useful. I would be hard-pressed to consider another brand of vehicle until the others catch up with this.


Good point. The difference in dynamic range between eyes and cameras is HUGE. Maybe partially an illusion because the point we happen to be looking at is recognizable, but in any case it works in practice.


> The NTSB’s preliminary report said the driver engaged Autopilot about 10 seconds before crashing into a semitrailer, and the system did not detect the driver’s hands on the wheel for fewer than eight seconds before the crash.

> Tesla said in a statement that after the driver engaged the system he “immediately removed his hands from the wheel. Autopilot had not been used at any other time during that drive.”

It was obvious from the crash that autopilot was engaged and the driver was distracted. It’s heartwrenching to think the system had only just been engaged mere seconds earlier.

It seems like the driver had something happen in that 10 seconds and thought they could attend to it with their eyes off the road.

I hope one day they get the system to the point where it is intended to be used like that.


I’ve read many times Tesla owners saying the warnings pop up even if you have your hands on the wheel if you have a light touch. Has Tesla fixed this issue yet?


On the Model 3 at least I believe the sensor is based on torque. You have to apply turning pressure to the wheel. Just enough so it knows you’re there, not too much to disengage AP.

I find jiggling the volume wheel or the cruise speed wheel on the steering wheel up/down a click is more reliable because that won’t ever disengage AP.

I have no idea what they mean “did not detect hands on the wheel” because it doesn’t detect your hands on the wheel at all. It detects steering input on the wheel.


> I have no idea what they mean “did not detect hands on the wheel” because it doesn’t detect your hands on the wheel at all. It detects steering input on the wheel.

Wow! Really?! That sounds like one hell of a lie on Tesla's part... cynical part of me wonders if they deliberately avoided trying to detect hands for liability reasons?


Hands on wheel doesn't really mean anything. You could be asleep. Applying a slight torque to the wheel or nudging one of the buttons requires conscious action, unless you deliberately hack it. (from this thread it sounds like there are some tricks that work, if you don't particularly value your life or the lives of others on the road.)

That said, Tesla's PR often seems misleading at best.


Yea, they should give you the Valve Index controllers instead of the wheel to be absolutely sure.


This is still how it works today. The wheel must sense a certain amount of turning resistance or it will say there are no hands on the wheel even if there are. Short of installing new sensors in new steering wheels, all they can do is adjust that sensitivity level, which they might not be able to do without risking false positive detections.


It's actually the same for ACC in my VW Touran. Steering input is needed to clear the warning. After some time it will even stop on its own without any input :)


I don't understand why a person would say his cars will be fully self driving within a year when they still run into semi trailers. I guess it's because he wants people to give him money. It's sad from all directions.

And let me pre-empt the replies by saying yes, what matters in safety is the statistics, not a single data point. And let me pre-empt the replies to the replies by saying yes, but the statistics on self-driving cars are not at all encouraging.

I suspect Autopilot's running-into-semis rate is worse than humans', for three reasons:

(1) Tesla has been cagey about their Autopilot safety stats. If Autopilot was clearly safer, I think they'd be less cagey. Absence of evidence is evidence of absence. This is just speculation, of course.

What concrete data they have shared is summarized here: https://www.tesla.com/VehicleSafetyReport

Teslas with Autopilot on crash roughly once every 3 million miles. To their credit, that's really good! However, they don't say: (a) how often Teslas crash on freeways with Autopilot off, or (b) how often drivers have to take control to prevent Autopilot from crashing.

(2) The fatal crash in 2016 was also with a semi trailer. 2 of the 4+ fatal crashes sharing this failure mode suggests its rate is especially high for their vision-based system. Not a good look.

(3) Pretty much all self-driving companies have disengagement rates way worse than human crash rates. Waymo is leading the pack - by far - at something like once every 10,000 miles in California. There are tons of caveats in interpreting these numbers, and they should probably be treated as an upper bound, since it's at the discretion of the company to only self-report disengagements that would have led to crashes, I believe. But human crash rates are something like once every 300,000 miles. So even lidar-based systems are an order of magnitude away from humans, by one super rough measure.
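To put rough numbers on that gap (taking the figures above at face value, and treating every disengagement as a would-be crash, which is the pessimistic reading):

    # Back-of-envelope comparison using the figures cited above.
    waymo_miles_per_disengagement = 10_000   # approx. California reporting
    human_miles_per_crash = 300_000          # rough average cited above

    print(human_miles_per_crash / waymo_miles_per_disengagement)  # 30.0
    # i.e. roughly a 30x gap, "an order of magnitude", under these assumptions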

Lastly, a small correction to the title: This is the third known US autopilot crash. I believe there was a suspected autopilot crash in China some years back as well, and I'm not sure about other countries.


This is the third known FATAL US autopilot crash. There are other non-fatal crashes. Many with video.

Fatal crash in China: [1] Rammed into street sweeper at full highway speed.

Non-fatal crash in California.[2] Rammed into fire truck stopped on freeway.

Tesla's system does not detect big solid objects in front of the vehicle. It's "recognize rear of car", then "brake for car". No recognition, no braking. Machine learning at its finest.

I know, I've posted this before, but I keep seeing denial.

[1] https://www.youtube.com/watch?v=fc0yYJ8-Dyo

[2] https://abcnews.go.com/Technology/utah-driver-slammed-tesla-...


I use Waze during my daily commute and it regularly gives me a warning about a stopped vehicle on the road or on the side of the road; would a crowdsourced system like that help Autopilot, so they could change their algorithms slightly to detect stopped vehicles better? Or just reduce speed? Or alert the user and disable Autopilot well before the location?

I'm aware that would be creating an exception for an exceptional situation and not really a root cause solution, but still.


I'd be extremely extremely hesitant to throw on more dependencies for life-death scenario handling.


"Unfortunately my son didn't pay his Verizon bill on time, so they cut him off and since his car couldn't get updates it rammed him into a fire truck".


My experience with machine learning and vision systems amounts to one lab assignment a few years back, so forgive me if this is a stupid question - but why is there not a 'complementary' hard coded system built in that can override cases like this? I can think of no reason as to why stopping a car driving full speed into an inanimate object would be a thing that a human can't code. Surely there must be something in place - an "if(hurtlingTowardsWall){ignoreAI; stopCarQuick;}".
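Spelled out a little (purely illustrative; as the replies below point out, the hard part is not the override itself but reliably knowing there is a wall to hurtle towards):

    # Naive "hard-coded" emergency override, assuming a reliable detector
    # already reports the distance and closing speed of an obstacle ahead,
    # which is exactly the part that is hard.
    def emergency_override(distance_m, closing_speed_mps, min_ttc_s=2.0):
        if closing_speed_mps <= 0:
            return "continue"                 # not closing on anything
        time_to_collision = distance_m / closing_speed_mps
        if time_to_collision < min_ttc_s:
            return "ignore planner, full brake"
        return "continue"

    print(emergency_override(distance_m=30, closing_speed_mps=30))  # full brake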


How do you know there's a stopped object in front of you?


Don't the Tesla have forward facing radar?


Traditional radar is bad at spotting stationary or slow moving objects, and to avoid false positives at every traffic sign or other stationary object above or next to the road, any radar-based object detection has to be tuned conservatively or the car would be flipping out all the time.

That’s why other companies use lidar, which is much better at object detection, but has its own problems (like fog or heavy rain, for example)


Stationary objects are very detectable with radar as they have relative motion to a moving vehicle. Also objects with no relative motion are only tricky for CW doppler radar.

The issue is mostly your last point. The radar sensors used have low resolution and would have trouble distinguishing a turning semi from an overhead sign.


Parallax via optical cameras. Like the human eye or Subaru's Eyesight.


Apparently many or all of the current self-driving cars don't do well at avoiding fixed (i.e. stopped) physical objects. It must mean that, somewhere in the way they detect things, they have to discount stopped objects, and "moving objects" like cars are detected in a different way. I'd love to hear more background about why this is the way it is.

I agree that when a Tesla can't handle this kind of situation, it really challenges the idea that we'll have total self-driving in a year. There must be a lot of software under testing that could be more reliable than what they ship today, yet the issue is still there. The last part (5 or 10%) of regular software development is really hard, and it should be even harder for self-driving software that has never yet been approved or proven safe enough.


As far as I remember, it is because radar gives you too many points that are usually false positives if you don't discard stopped objects (which include signs, guardrails, anything on the edge of the road (which you'll be driving right at in a corner, after all)). This is something that ACC in all radar-using cars does, AFAIK. AEB works differently in that stopped objects are tracked, but it only works well at slower speeds. If I had to guess, it's because they have to track a potential stationary object over time to determine that, yes, it's an object that concerns us, it's in the way, and it's not just noise.

In the case of the first trailer crash, it apparently looked similar to an overhead sign to the sensors; also, approaching a stopped object would be AEB territory rather than ACC, and AEB doesn't work well at higher speeds.
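A toy version of the filtering being described (invented thresholds, not any vendor's actual logic), just to show why a stopped truck and an overhead sign get thrown away by the same rule:

    # Discard radar returns that are stationary relative to the ground,
    # because signs, guardrails and roadside clutter all look like that.
    def acc_targets(ego_speed_mps, radar_returns):
        """radar_returns: list of (range_m, closing_speed_mps) detections."""
        targets = []
        for rng, closing in radar_returns:
            ground_speed = ego_speed_mps - closing   # ~0 => object is stationary
            if abs(ground_speed) < 1.0:              # treat as clutter...
                continue                             # ...including a stopped truck
            targets.append((rng, closing))
        return targets

    # At 30 m/s, a stopped truck 80 m ahead closes at 30 m/s and is filtered
    # out exactly like a sign; the slower-moving car at 50 m is kept.
    print(acc_targets(30.0, [(80.0, 30.0), (50.0, 5.0)]))  # [(50.0, 5.0)]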


The first trailer crash wasn't just that it looked like a sign to the vision system. It's that the radar saw it and alerted, but the AEB was configured to do nothing unless both the radar and the vision system alerted. Since it was a white trailer against a bright sky, the vision system couldn't see it.

Tesla later pushed a software fix for the AEB to trigger even if only the radar alerted.

https://electrek.co/2016/07/01/understanding-fatal-tesla-acc...


Step 1: identify where stopped objects are relative to the sensor

Step 2: if the stopped object is not immediately in front of the car, carry on; otherwise, ANALYZE IMMEDIATELY, because a stopped object is right in front of the car.

It's not rocket science, and it's something so ridiculously basic that Tesla's failure to do something this simple borders on gross criminal negligence. It's something their competitors already do; in fact Waymo's problem is that it's too sensitive to stopped objects in front of the vehicle, hence the Waymos constantly stopping and starting or driving very slowly... but that's the better and safer way to do it, especially when you're testing on real streets with third-party vehicles.


It's an issue that needs to be fixed but it's not extremely simple.

The car always has still-standing objects in front of it (the road, anything around the road, barriers, anything on the opposite side of barriers, things in the direction of the car where the road is turning, etc.).

The car has to know that the road doesn't slope up over the truck or curve around it. I'm guessing that's where the split-barrier failures occur.

It really needs to create a real-time 3d map of the entire environment (including what's visible from eye-height, not only front of hood) and map a route through that.


The car always has still-standing objects in front of it (the road, anything around the road, barriers, anything on the opposite side of barriers, things in the direction of the car where the road is turning, etc.).

Right, none of which are directly in front of the car when the objects and the car are plotted in 3D space. Easy (but not trivial) to do with LIDAR, but not yet doable with purely visual systems like "Autopilot." Which is my point, and which is the reason every other self-driving car company uses a combination of sensors to get data about the car's surroundings.


To my understanding, this is because they don't have a real model of the world. They don't understand that the road sign right ahead of you poses no danger, because you're actually in the middle of a curve and your trajectory won't actually intersect it. Or that driving under a bridge is fine, even though you're pointing straight at it.

Once they are sophisticated enough to have an actual world model (something which, to my knowledge, only Waymo actually has ATM) they can stop ignoring static objects. Until then, they would keep slamming the brakes every 30 seconds for all the stuff they don't understand.


They probably bet on having the DNN figure this out from the data.

Of course this seems like a really hard thing. Probably with enough layers and with the right learning tasks it might be able to do it eventually. But the probability for noise and various learning artifacts is enormous, when you want to learn a sort of model of everything road related.


Autopilot is not self driving. The human is still driving the vehicle and the human is still responsible for stopping the car from hitting things.


Well, then what is it doing? Crashing your vehicle into semis when your attention lapses for a second?

Then it should be properly marketed as autocrasher, not autopilot.

An autopilot that requires you to keep your hands on the wheel and pay attention at all times is missing the auto part of the autopilot (reminder that auto means self). So such a system can't be called an autopilot.


It's not too bad if you compare it with the meaning of the original word as used in an aviation context: a relatively primitive device that automatically keeps the trajectory of the aircraft (regardless of what is ahead on that trajectory). But it does sound misleading when presented to the general public (as opposed to highly trained professionals, which is the case in the aviation context).


It's auto pilot where "pilot" means the person steering a ship. There is still executive oversight required, as in a captain or at least a navigator. It's not "autodrive".


sigh Piloting here means "controlling a vehicle", i.e. driving. As in "Formula 1 pilot". Do you think "Formula 1 pilots" are sailors?


It's analogous to the autopilot of an airplane: it holds a heading, speed, and altitude. An airplane on autopilot can fly straight and level right into another airplane. For some reason (partly Tesla's), people look at Tesla's Autopilot and think it does everything reliably. Tesla used a misleading name, but people should understand it the same way they understand a plane's autopilot.


Tesla calling their driving assistance "autopilot" is a marketing move. I don't see why we should accept their definition. Autopilot to me means that the human pilot can remove attention from immediate steering tasks. This is not true for the Tesla "autopilot".


AP is like a student driver: perfectly safe as long as the instructor is there to take over at very short notice. So saying AP is safe because of X crashes in Y millions of miles is pretty irrelevant. AP only drives with supervision, and when that supervision is missing, it crashes.

How many times did the driver have to intervene to avoid a crash? Tesla provides no data about this. But next time you hear "AP is so safe it only crashed 3 times", just remember that student drivers also almost never crash. As long as it needs constant attention, not accounting for this makes the statistics irrelevant.


I predict we won't have fully autonomous ("total") self-driving in the next 25 years. To think it will be out in the next year or even in the reasonably expected lifespan of currently shipping Teslas is utter folly, IMO.


When I see long haul trucks driving along highways with humans only taking over in the populated areas I'll believe full self driving may only be a decade away. Tackling the full self driving consumer market when there's lower hanging fruit makes no sense.


They could wipe out Über.


Based on what?

It seems this is a classic estimation problem, and we need performance data points to fit a curve. Do we have anything? Do we even know if this is a problem where we have a chance at accelerated compounding (i.e., an exponential improvement rate)?


Based on the fact that we don't have acceptable performance on clear, sunny, well-marked roads today, I don't expect that we're within a year of handling snow-covered roads, road detours, human-flagged construction sites, etc.

I think the self-driving car efforts are dramatically underestimating the effort required to close those gaps to zero, at least in the hype machine but perhaps also in the daily working cycle. (That's all well and good; I'm sure that any audacious engineering program should discount some of the hardest parts, lest everyone get discouraged and never begin.)


> they must have to discount stopped objects

Well, there are just a lot of them, outside the lane markers, and those never kill you. And while the systems surely try to classify parked cars as well, detection quality isn't under much scrutiny. Then when that one exception is sitting in the lane, good luck, your life depends on a second class feature.

Anecdotal evidence suggests that a similar perception vulnerability exists in long-distance cyclists who spend many hours on the open road: lanekeeping and pothole/junk avoidance get more and more delegated to subconscious processes which work with eerie (and enticing) reliability. But that "mental autopilot", tuned to the open road, knows the back of cars only as something that pulls away from you; the simple model knows them as something not worth "raising an interrupt" over. Similar learned routines certainly exist with highway driving as well, but when driving there isn't a class of objects which is routinely there but almost universally not a concern, like the backs of cars are for a cyclist. Reports of a cyclist hitting a car parked on the open road always earn a lot of ridicule, but it's not quite as surprising as it seems at first glance (the vast majority certainly succeed at "disengaging" in time, but the exceptions make the news; same thing in every field).


> Teslas with Autopilot on crash roughly once every 3 million miles.

I also wonder if this statistic includes situations where the autopilot had disengaged immediately beforehand and the human had failed to respond.


If they had named "auto pilot" "auto driver" it would at least be possible to have a sane discussion about the extent of the gap between the name and the actual performance. Since they named it "auto pilot" we cannot have that discussion, because "auto pilot" is a term that applies to aircraft, not to road vehicles, and you cannot expect your average driver to know what the autopilot in an aircraft can and cannot do. Presumably laypeople will assume that an autopilot in a plane is more complex than one in a car, because they perceive aircraft as more difficult to operate than cars (and rightly so).


> Pretty much all self-driving companies have disengagement rates way worse than humans.

What’s the human disengagement rate?


When you fall asleep at the wheel? No idea…


Exactly. The parent casually asserts a difference in a measurement that doesn’t apply to humans. We can’t usually disengage to a safety driver. The comparison is not meaningful.


Parent comment compares human crash rate (1/300,000 miles) to self-driving disengage rate (1/10,000 miles). With the assumption that the self-driving system would have crashed had the safety driver not intervened (a reasonably safe assumption, given the purpose of having a safety driver in the first place), this seems like a meaningful apples-to-apples comparison to me.


I don't think that the general human crash rate is appropriate to compare. It makes more sense to compare, say, a professional bus driver.

As it happens, my uncle drove for a living and I recall he had a plaque for a million miles without an accident.


Or "Jesus, take the wheel!"


Perhaps because the first step to achieving anything is to believe firmly in it.


What about those non-autopilot vehicles?

- Nearly 1.25 million people die in road crashes each year, on average 3,287 deaths a day

- An additional 20-50 million are injured or disabled

From https://www.asirt.org/safe-travel/road-safety-facts/


This idea gets brought up every time Autopilot is discussed here, but it's not really relevant. If Autopilot were safer than human drivers there would be an argument for allowing it, but there's no evidence that it is. Sure, Tesla claims that it is, based on very specific, cherry-picked internal metrics, but independent analysis shows there's just not enough data to say. Also, the class of errors that Autopilot makes is completely different from a human's. Most human car crashes are preventable, caused either by driving under the influence or by distracted driving. I would much rather be in a crash with a drunk driver where liability was clear than have my car drive underneath a tractor trailer in the middle of a clear day just because I hit some neural network edge case.


> I would much rather be in a crash with a drunk driver where liability was clear than have my car drive underneath a tractor trailer in the middle of a clear day just because I hit some neural network edge case.

Preferring to die by human error instead of shitty software doesn't make logical sense to me and feels like you're trying to justify something irrational. Autopilot either drives more safely than me or it doesn't. And I wouldn't let a drunk friend drive me either.

> but independent analysis shows there’s just not enough data to say

Can you link a trustworthy analysis? I can't remember reading one that wasn't from a highly polarized source.


>Preferring to die by human error instead of shitty software doesn't make logical sense to me and feels like your trying to justify something irrational.

The poster, camjohnson26, did not say die, the poster said crash.

Camjohnson26's point as I understood it was to compare two situations:

1. A crash that occurs because one of the drivers was extremely drunk,

2. A crash that occurs because the autopilot in one of the vehicles did not take the correct action.

In the first case most people and the legal system will likely conclude that the drunk driver was at fault. In the second case it will be more difficult to determine who was at fault. This second case may add legal, insurance and ethical complications that camjohnson26 wishes to avoid. For instance what if your insurance decides you are at fault for activating the autopilot under unsafe conditions?


> what if your insurance decides you are at fault for activating the autopilot under unsafe conditions?

Seems reasonable to me. The driver is always responsible.


Which driver? How do you determine it was actually the autopilot that caused the crash? What if it wasn't the autopilot?


>Preferring to die by human error instead of shitty software doesn't make logical sense to me

It certainly does if both Autopilot Teslas and standard vehicles are resulting in deaths, but the law will provide a remedy to your survivors and heirs in one instance and actually turn around and hold you liable in the other.

After all, the post was about liability and clearly had nothing to do with how the poster prefers to die, which you somehow read into it.


This makes a lot of sense, and is actually seen a lot in the wild. Having control feels good for us, and I would argue that's probably why there is such a visceral response to drunk drivers---they're not in control. You can empathize with a mistake, but something just sort of happening with no real impact or control over it feels wrong.

That being said, I'm on board with the self driving cars, we should approach it with a logical head and really try to figure out what we should do, not what feels good.


Average rates appear to be 12.5 fatalities per billion miles, and it appears Tesla Autopilot is at 3 over more than a billion miles with the system engaged. That's a quarter of the usual fatality rate, for anyone counting. Saying "there's no evidence that it is" isn't true; the above statements (and supporting facts) are evidence.


”That's a quarter of the usual fatality rate”

I beg to differ. https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...:

”Comparing motorways (controlled-access, divided highways) […] the United States had 3.38 road fatalities per 1 billion vehicle-km on its Interstate-type highways, often called freeways”

I think that number better reflects where Tesla’s autopilot is used. Also, if you remove motorcycle deaths from that number, I can’t see how the Tesla could be significantly safer, let alone four times less deadly.
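For what it's worth, converting the units so the two figures are comparable (a rough back-of-envelope using only the numbers quoted above):

    # Quoted US freeway rate: 3.38 fatalities per billion vehicle-km.
    KM_PER_MILE = 1.609344
    per_billion_miles = 3.38 * KM_PER_MILE
    print(round(per_billion_miles, 2))   # ~5.44 per billion vehicle-miles
    # vs. roughly 3 per billion Autopilot miles claimed upthread:
    # nowhere near a 4x gap, before any of the other corrections below.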

I think you should also correct for vehicle age (older cars and lighter cars are less safe). You could (and, IMO, should) also correct for vehicle size/price, as Tesla drivers won’t be interested in whether their car is safer than other cars on the road, but in whether it is safer than what they could have bought instead.

And of course, there’s the elephant in the room. I expect that Tesla drivers prevented at least half the deadly accidents that would have occurred if Autopilot did all the highway driving.


People keep saying (in defense of it) that the Tesla Autopilot does not drive the car for you. As such, it makes no sense to compare its fatality rate to the fatality rate of humans who are driving cars. It seems arguable to me that all those Autopilot fatalities were excess fatalities.


You aren’t controlling for other factors. Luxury cars and new cars have lower rates.


>What about those non-autopilot vehicles?

They don't make fraudulent misrepresentations to the public regarding their technology's capability to self-drive, which people rely on, leading to fatal accidents.

But when those car manufacturers are at fault for the accidents vis-a-vis some manufacturing defect...they or their suppliers are liable.


Driving is dangerous and many drivers suck at it. But if you want a relevant comparison only consider similar circumstances and driving conditions (motorway or city traffic? blizzard or sunny?). Then normalize for the number of vehicles of each type and their age/capability (20 year old clunkers with failing brakes aside). And in the end count in the times AP made a mistake and the driver corrected it (Tesla has the info, only they know why they refuse to make it public).


> - Nearly 1.25 million people die in road crashes each year, on average 3,287 deaths a day

20% of those are from motorcycles, where collisions are far more dangerous. Of the remaining crashes, a number happen in bad weather conditions (where Autopilot does not function yet) or in cars that are old and unsafe. The number of fatal accidents for luxury cars with modern safety ratings in good weather conditions is much, much smaller.

This data is all available from the NHTSA at https://cdan.nhtsa.gov/tsftables/tsfar.htm , though it might take a bit of finagling to get the queries you want out of these tables. If there is a better source, please let me know.


If I were made god emperor of some country I'd switch everyone to autonomous cars as soon as they were safer than normal driving. But sadly I'm not, and both Tesla and the government are going to be constrained by public opinion. And one important aspect of that is that people are more accepting of danger when they feel like they're in control than when they aren't.

So for a successful rollout they need to be much safer than human drivers. This is something that looks achievable. But if you try to push them out too early you might get a backlash that delays progress and costs more lives in the long run through slower adoption when they're ready.


Why not just make driving a car as serious as piloting a plane? Planes are so much safer not because they are easier to fly, but solely because we take flying seriously. If driver's licenses were granted and enforced like pilot's licenses... automobile fatalities would plummet!


There's something like 200,000 Teslas on the road vs. over 250,000,000 "non-autopilot" vehicles. You can't make a straight up comparison with numbers like that without weighting them accordingly.

Though I'm not sure you could compare them really at all in this specific case regarding the autopilot. I'm not sure how many of the traditional vehicle fatalities involve someone flipping on their cruise control and taking their hands off the wheel and ignoring the road in the same manner that the autopilot fatalities seem to have occurred.


There are way more non-autopilot vehicles than autopilot


Do those non-autopilot vehicles exclusively drive on the highway? Highway driving might be more deadly but accidents are less likely to happen in the first place due to lane separation, large curve radius, no stationary obstacles, no pedestrians, lower relative velocity, etc.

It's like trying to compare Rottweilers to Pit Bulls. Both are dangerous but one of them is 20x more dangerous.


which one?


Okay, so do the numbers: roughly 3.3 trillion vehicle miles per year in the US, at about 1 fatality per 100 million miles.

At that average rate, 3 fatalities corresponds to about 0.3 billion miles, so if Autopilot has driven roughly that far it might be at the same level. Definitely not orders of magnitude different.
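Spelled out with the round numbers above (nothing more than the same arithmetic made explicit):

    # Rough parity check using the figures above.
    avg_miles_per_fatality = 100e6          # ~1 fatality per 100 million miles
    autopilot_fatalities = 3
    parity_miles = autopilot_fatalities * avg_miles_per_fatality
    print(parity_miles / 1e9)               # 0.3 billion miles: if Autopilot has
                                            # logged about that, it's at the average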


Literal whataboutism.

Automated systems are held to a higher standard. Tesla's Autopilot apparently has trouble spotting trailers and decapitates the occupants. Cite all the accident figures you want; it won't stop the reporting about Autopilot wrecks.


I drove a new car with adaptive cruise control this morning. Having never experienced this tech before, I found it pretty amazing. It makes the cruise control in my own vehicles seem crude and dangerous by comparison. I venture to guess that if standard cruise control were introduced today, it would be rejected by the public as too dangerous. Is there any data on how many accidents occur with standard (non-adaptive) cruise control activated?


Part of the safety of standard cruise control is the fact that it is clearly crude and dangerous - it keeps the driver engaged.


Good point. It was my first time, and as a result I was very engaged. But I can imagine how, as I got used to and came to depend on the feature, I could lose some of that attentiveness.

It was my first experience with lane departure warnings as well which was pretty neat but soon became a small distraction. I could see myself turning that off, if it was my daily driver.

The best feature of adaptive cruise control is how it keeps you a safe distance from the car in front (about 4 seconds) - whereas if I was using my own judgement, the spacing would probably be halved.


The things that actual “autopilot” does in aviation are also crude and dangerous: hold this heading forever, change flight level without regard to ground level, etc.


I find it amusing that in marketing Musk is always like the “hands on policy is just there to appease the regulators.” But after every fatal crash, the first thing in the press release is “the driver took his hands off the wheel!”


Would you have a link on the first one? Would be nice to keep around for later!


This is precisely their messaging on the embedded vimeo video on https://www.tesla.com/autopilot

"The person in the driver's seat is only there for legal reasons"

https://i.imgur.com/74XHPiL.png

In fact, the entire video shows the driver with their hands off the wheel.


This is misleading. That video is demonstrating Tesla's "full self driving" feature, which is not yet launched. You can future-proof your car by paying for it, but Tesla is clear that FSD is still forthcoming, and that their current cars are not fully autonomous.

I think you can rightfully give them shit about calling the current gen "Autopilot", and think they're full of shit for claiming FSD is as close as it is, but this video is not intended to suggest you can operate the existing car in that manner.


You should look at things from the perspective of someone not familiar with Telsa's product line. Sure, if you own or interact with Telsas, the feature set of autopilot vs FSD is clear.

But, if you're a newbie, you just Googled "tesla autopilot" and clicked on the first result. You see that Tesla has the "future of driving"; that must be Autopilot, right? You scroll down and watch the video. There's a notice that the person in the driver's seat is only there for legal reasons, and a guy being driven around hands-free.

The messaging is painfully there: Autopilot does not require hands, and the feature set of FSD vs Autopilot is conflated in a very misleading way on that page in general. If you scroll down (albeit under a heading that describes FSD), you see "The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver's seat."

Tesla's autopilot landing page is clearly designed to mislead unsuspecting consumers into believing that autopilot is a self-driving capability.


How does one demonstrate a car driving itself (at any level) in an online video if there is someone in the driver's seat with their hands on the wheel?

To the viewer, that would just look like a person driving a car.


>To the viewer, that would just look like a person driving a car.

...which is exactly how current autopilot systems are meant to be operated, if you listen to the post-crash analysis.

Which, yes, defies the whole point of having these systems entirely.

Which is why they would be hard to sell.

Which would be right.


Every single manufacturer demos their cars with hands off the wheel in YouTube videos explaining their lane keeping system.


They also label it as a lane-keeping system, not a self-driving capability.

The difference in labeling is what has already killed several people.


Literally every person that has been killed has been aware that their Tesla is not a fully autonomous vehicle, and has been aware of Tesla's rule that your hands should remain on the wheel at all times.


Why can you give them shit for calling it autopilot?

The only other context where we currently use "autopilot" is in aircraft, and Tesla autopilot demonstrates a significantly higher level of autonomy than the autopilot of your average aircraft.


When an aircraft pilot engages autopilot she can disengage herself and focus on other tasks. She must still monitor the trajectory but not continuously.


Only if other assumptions are made which have nothing to do with the interaction between the pilot and the aircraft.

Namely, that the skies are clear.


That's not an assumption but something IFR relies on: you clear your path with air traffic control before engaging the autopilot. Collision avoidance is not part of the autopilot in aircraft. And the pilot decides on the path.

In conclusion, there are few similarities between an aircraft autopilot and driving assistance in cars.


Tell that to the Boeing 737 MAX pilots...


It was not an autopilot that crashed the two jets! Even if it had been, that would not change the fact that autopilots are a solved problem for IFR. The operating conditions where they can be used are clear, regulations are in place, and the risks are known.

I'd actually compare the Boeing "assistance" to the Tesla "assistance" as sometimes the craft will decide to steer into solid objects based on misinterpreted sensors. But the comparison is a stretch.


The hands on policy is reiterated constantly by Tesla, and Musk.

In a recent release note adding the ability to turn off lane-change confirmations when in 'Navigate on AutoPilot', they have the following;

> With confirmation turned off, lane changes will only be made once we've confirmed that the driver's hands are on the wheel. Disabling lane confirmation does not abdicate the driver of their responsibility to keep their hands on the wheel, be engaged at all times and carefully monitor their car's surroundings. [1]

Here's what Elon actually thinks;

> [C]an’t make system too annoying or people won’t use it, negatively affecting safety, but also can’t allow people to get too complacent or safety again suffers. Latest update should have a positive effect on latter issue especially. [2]

Fundamentally, as a TM3 driver, you just absolutely must be looking out the damn window when using AP. Hands on or off the wheel is a proxy for "are you taking responsibility for your vehicle", which first and foremost means looking at where the car is going.

I'd like to see Tesla add gaze tracking on the driver-facing camera, even if it doesn't work when you're wearing sunglasses. If it is able to track your gaze, then it should use that, and fall back to wheel torque detection as a secondary signal. I couldn't find a sample video of what that camera captures, but on the PCB it is apparently labeled "SELFIE".

[1] - https://imgur.com/a/gxePQ75#6cklbYQ

[2] - https://twitter.com/elonmusk/status/1005879049493725186
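To make the gaze-plus-torque idea above concrete, something along these lines, purely hypothetically (it assumes the cabin camera can report a gaze-confidence signal, which today's firmware may not expose at all):

    # Hypothetical attention check: prefer gaze tracking when the cabin camera
    # has a confident read, fall back to the existing wheel-torque proxy.
    def driver_attentive(gaze_on_road, gaze_confidence, wheel_torque_nm,
                         min_confidence=0.8, min_torque_nm=0.3):
        if gaze_confidence >= min_confidence:         # e.g. no sunglasses
            return gaze_on_road
        return abs(wheel_torque_nm) >= min_torque_nm  # torque proxy, as today

    # Sunglasses on (low confidence), light torque on the wheel -> counts as attentive:
    print(driver_attentive(gaze_on_road=False, gaze_confidence=0.2, wheel_torque_nm=0.4))  # True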


Not 6 months ago, Elon Musk went on 60 Minutes and showed the world you can drive a Tesla hands free. A bunch of disclaimers does not make that OK. In the real world, people more or less ignore the fine print.

https://www.wired.com/story/elon-musk-tesla-autopilot-60-min...


Having your hands off the wheel for the purpose of a demonstration is required if you want to demonstrate to a spectator that the car is actually driving itself.


What exactly is your point? If you can't demonstrate your product while following your own safety procedures, that doesn't give you the right to break them.


Do you also complain when [literally any other automaker] shows off their new car doing donuts or skidding around corners because that is "unsafe use"?

Promotional material is not instructional material, and should not be treated as such.


If James Hackett were flooring a Mustang around a corner on public roads during a 60 Minutes interview while boasting about their new traction control I would be in a fit of rage.

The "you're being unfair to Tesla because you aren't complaining about other automakers" meme has to die, because Tesla often does what no other automaker dares. This leads to their biggest triumphs (and there are many), and their worst mistakes. I guess that's what happens when you optimize for decisions per unit of time, right?


> The "you're being unfair to Tesla because you aren't complaining about other automakers" meme has to die

I too am tired of people misdirecting conversations. Maybe it's our attention deficit culture, or maybe it's whataboutism spread by masters of the game such as China.

Why does everything need to be on the same level? I can criticize Tesla for one thing, and another company for another.


Quoting from an old comment of mine;

Every other other Level 2 system advertisement I could find shows the driver with their hands off the wheel, at least to some extent;

[1] - Nissan - https://youtu.be/HkL67DgleQY

[2] - Ford - https://youtu.be/Z_9218dTXXY

[3] - Audi - https://youtu.be/nUlK6fpveXg

[4] - BMW - https://youtu.be/xsQvq4WlUYU

https://news.ycombinator.com/item?id=19290518#19292141


You seem to be having trouble figuring out what the actual issue is.

Tesla can't market its system in a way that indicates hands-free use is OK, then turn around and blame the driver when they crash driving like that. Like Elon!

Other car companies need to be held to the exact same standard.

Audi says you can take your hands off in traffic jam assist in its ad, the car monitors alertness and tells you when to take over.

If a customer takes their hands off as directed and the car suddenly accelerates into the car in front of them, Audi is at fault. If they then turn around and say the customer isn't reading the fine print, then they've crossed the line into being a bad actor, just like Tesla.


Are we being distracted here? The issue in this case, and indeed with most Autopilot-related incidents, wasn't the driver taking his hands off the wheel; it was the driver taking his eyes off the road.


I've never seen Tesla marketing like that.


Somewhat related... in the Netherlands tonight, the police tried to stop a man who had fallen asleep drunk while driving on the highway in his Tesla with Autopilot enabled:

(translated from Dutch to English)

https://translate.google.com/translate?hl=en&sl=auto&tl=en&u...


How many Teslas are on the road, how many have been involved in fatal crashes, and for how many of those crashes was autopilot turned on?


How many times would the AP have crashed the car but the driver intervened? Are these instances showing up in any statistic?

A student driver never crashes because the instructor is there to take over. Would you say student drivers are good drivers based on this?


And how many times has AP avoided accidents that the driver wouldn't have? The stats aren't that simple any more, are they.

There are plenty more news articles, especially in local news, about fatalities involving non-Autopilot cars, but they're not as interesting, so they don't get shared as often.


Such stats are never simple. But when important data is withheld they definitely become irrelevant. They become more of a PR tool.

Statistically speaking 100% of people are able to pilot a plane with their eyes closed and not crash it... as long as a qualified pilot is ready to take over at any time. Such a stat is only valuable if you want to show the standby pilot is useful.

What would you base your assumption that AP saved even 1 life on when that data is not provided? All we know is that it crashed a number of times exclusively when it wasn’t supervised as required. The pragmatic conclusion is that it works because it’s supervised. I imagine Tesla would release more data if it supported the performance of the AP.
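
To make that concrete, here is a toy Python calculation showing why the crash count under supervision, on its own, can't tell you how the system would do unsupervised. Every number below is invented purely for illustration.

    miles_driven = 1_000_000
    crashes_on_ap = 1                     # crashes that actually happened under supervision
    interventions_that_were_saves = 10    # assumed: takeovers that genuinely prevented a crash

    supervised_rate = crashes_on_ap / miles_driven
    unsupervised_rate = (crashes_on_ap + interventions_that_were_saves) / miles_driven

    print(f"crashes per million miles, supervised:             {supervised_rate * 1e6:.0f}")
    print(f"crashes per million miles, unsupervised (assumed): {unsupervised_rate * 1e6:.0f}")

Without knowing how many takeovers were genuinely crash-avoiding, neither figure can be pinned down from the data Tesla publishes, which is exactly the problem.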


Here is a cool video [0] showing a Model 3 dodging the car in front of it after it got rear-ended, which caused it to jut forward. The owner says it wasn't him but Autopilot that did it.

[0]: https://www.reddit.com/r/teslamotors/comments/bl5pa9/tesla_m...


They report numbers quarterly:

https://www.tesla.com/VehicleSafetyReport


Where? No report is linked from that page, just a marketing paragraph without raw data.


Tesla won't release comprehensive data because they don't want you to know.


I think it is an amazing achievement by Tesla that these hit pieces cannot find more than a handful of incidents going back three years while there are half a million Teslas out there. Their quarterly safety statistics are a great strategy to counter the propaganda that "Tesla cars are unsafe" or that "Tesla cars catch fire more often".


If only Teslas killed fewer people than cars that sell in many millions. But alas...


As the owner of a TM3, I have never believed any of the promises about the Autopilot and Full Self-Driving features. I did buy the upgrade to FSD when it became available for two thousand dollars, as a hedge against resale value.

I do believe their camera-focused approach will work. However, if the YouTube videos showing all the cameras in action, and the footage from the available dashcam, are representative of the color richness, there may not be enough differentiation to properly see all objects in some lighting conditions.


You'll soon get an option to reclaim those $2,000. Or the option to join a class-action suit seeking the same.


It doesn't renew the questions. The questions started before there was ever a crash and continued without stopping!


Does Autopilot beep annoyingly when you take your hands off the wheel, just like any car does when you forget to fasten your seat belt?

And if not, why not?


I wonder how many people see this headline but don't realize this happened over a year ago, when they also saw a similar headline for the same accident. Combine that with constant headlines about Tesla vehicle fires, financial troubles, production problems, and Elon's tweets, and you end up with a terribly misinformed, skeptical public, hesitant to embrace the technology and company the world desperately needs.


Sorry, but why does the world need any company, let alone desperately? Let alone an enterprise as questionable as Tesla, led by a dishonest sociopath?

Unless Elon is going to release, Real Soon Now, super-special spectacles that let you really see those reptiloid short-sellers from Nibiru conspiring in their reptiloid lairs against all that is good (for Elon)? Maybe bundle them with those flamethrowers?


Was there a whitelisted billboard in the vicinity?


Insane how negative even this community is about this tech. It's coming regardless of regulation, because who's going to tell Tesla no once they add traffic light and stop sign detection? But still.

Actually, don't they already do those things in beta for some users?


I've said it before.

I'm negative because Tesla has a track record of killing people willingly.

AP1's entire sensor suite was in other cars at the same time it was in Teslas.

Other companies rightfully used those sensors for Lane Keep Assist, Adaptive Cruise Control and Automatic Emergency Braking.

Lane Keep Assist does EXACTLY what it says on the tin. It ASSISTS in keeping a lane. It will not automatically center in a lane. It will not let you let go of the wheel.

It WILL save you in 99% of situations where one may naturally be distracted, like drowsy driving.

Adaptive Cruise Control, does EXACTLY what it says on the tin. It is cruise control... that adapts to traffic conditions. It will not stop the car by itself. It will not lull you into a false sense of safety by trying to claim it detects stopped objects. It will not give you the confidence to stare at your phone in traffic.

Automatic Emergency Braking, does EXACTLY what it says on the tin. It's a backup that will only activate in a real emergency. It will precharge the brakes opportunistically, but it won't activate until the last moment possible. It instead alerts the driver BEFORE action is taken.

AEB from Mobileye alerted me to an overturned box truck on an interstate in pitch darkness and heavy rain before I could visually confirm it.
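
For the curious, a minimal Python sketch of the warn-then-precharge-then-brake escalation described above, keyed off time-to-collision. The thresholds are invented for illustration and are not Mobileye's or any automaker's actual tuning.

    def aeb_action(distance_m: float, closing_speed_mps: float) -> str:
        # Decide the AEB stage from a simple time-to-collision estimate.
        if closing_speed_mps <= 0:
            return "none"                       # not closing on the obstacle
        ttc = distance_m / closing_speed_mps    # seconds until impact at current speeds

        if ttc > 2.6:
            return "none"
        if ttc > 1.6:
            return "forward collision warning"  # alert the driver before acting
        if ttc > 0.8:
            return "precharge brakes"           # cut actuation latency, no braking yet
        return "full automatic braking"         # last-possible-moment intervention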

-

Now you'll notice that if you just take these building blocks and make them engage continuously, you can lie and say you have a rudimentary self-driving car. I say lie because this level of autonomy is not what people have in mind when they say "self-driving car".

And of course no one would do that.

And certainly no one would do that and then call it AUTOPILOT.

Except Tesla did. And people died.

And they're still keeping up the charade.

And that's (part) of why I went from a Tesla fanboy to a staunch hater of Tesla and what they represent.


If a driver uses a system based on information from advertisements instead of reading the manual that comes with it, they might not be suited to drive any car either, because using a safety-critical system without knowing its limitations is reckless and irresponsible.


If that's the problem you took away from everything I said, Tesla should extend you an offer.

They've fostered this culture of setting up a fundamentally inadequate system, getting the user to use it with criminally optimistic language, then blaming the user when the inadequacies, which they knew about, result in death.

Before the accident Musk will say:

"Your hands are only on the wheels for regulatory reasons"

To imply your hands are not on the wheel for safety reasons.

After the accident Tesla will say:

"Their hands were not on the wheel as they should have been to safely use our product"

Rinse and repeat.

-

Honestly what's impressive is how many people on here condone this.

Imagine a plane's autopilot suddenly banked into a mountain. It wouldn't matter if both pilots had been sitting in the back of the plane chatting up passengers, no one would hesitate to ground hundreds of planes until the cause was fixed.

The NTSB would not release a report saying "As long as autopilot kills fewer people than manual flying, it's fine!"

Yet every time this happens apologists come pouring out of the woodwork to parrot this stuff. It's honestly getting a little disturbing.


"totally not a cult"


>If a driver uses a system based on information from advertisements

They would be just like all the consumers who expect that misleading advertising is illegal.

Nobody is reading the small print, especially the "accept this long text to use your thing" kind. Not you, not me, not anyone, really; and everyone knows it, including the courts. In fact, for that reason EULAs aren't even enforceable in all jurisdictions (like Germany, IIRC).

And putting critical information into the small print that you have to accept just to use the device is plain sleazy.


If there is a ruling that actually exonerated a driver from their duty to be familiar with the functions of their car, based on prior knowledge of (misleading?) advertisements, then I'd really like to see it. It would be surprising, at the least.


It'd be surprising that you're not expected to somehow reverse engineer your car to find out how the features really work? Is that a joke?

You think laypeople are supposed to take apart their Teslas, find out that AP1 was using radar, and then research radar to learn that it can't be used to detect static objects because of confusion with things like overhead signs?

What's next, you'll be expected to reverse engineer ML models to find out your self-driving car doesn't recognize red bicycles at sunset?


[T]o be familiar with the functions of their car means that they should have had a look at the manual.

Edit: You might wanna read the comment guidelines by the way. Snarky comments are actually discouraged in favour of more thoughtful and substantive ones -> https://news.ycombinator.com/newsguidelines.html


That's obviously not OK either: you can't advertise the car as being able to do whatever you want, then not deliver what you advertised because the manual says otherwise.

That's not a car-specific concept; that's the definition of false advertising.

And I’m perfectly familiar with the guidelines, a comment can be thoughtful, substantive, and contain a little well-earned snark.


It's 2019. Adding traffic light and stop sign detection means they are almost half a decade behind their competitors.


[flagged]


Ah yes, the famously low-risk/low-reward environment of a tech startup...


Startup folk don't seem to be a dominant presence on HN these days. More people here seem to be employees in medium-large companies, and many are not in tech at all.

It's probably mostly that the HN audience has grown a lot since the early "Startup News" days, whilst startup founders/employees haven’t grown by the same numbers, so much of the growth has come from other quarters.


This. Running a startup forces you to see risk/reward in a manner you can’t unsee.


Boeing should buy Tesla, they seem to be of the same mind as it pertains to safety.


These cars have self-driven how many millions of miles, and people lose their minds over a handful of deaths? I mean, don't get me wrong - it's death and I'm an atheist. It's a big deal. But how many deaths did people incur in the first years of cars? Hell, you could die just from the crank on the front kicking back and crushing your skull - and that was just to turn the thing on.

People take risks when new tech comes out. If they wanna be the guinea pig, then let them do it. Everyone sees Musk as two-faced and playing both sides, but honestly - if you're an adult paying for a Tesla, then I'm fine with it being in your hands to decide whether you're willing to roll the dice. Marketing or not, no one should be stupid enough to think ANY device has a 0% chance of killing them, let alone a computer driving.


Musk does play both sides. After numerous crashes, Tesla blames the driver for not driving. However, it's always a good time to tweet about FSD being just around the corner. Watch the video at https://www.tesla.com/autopilot. It immediately says the driver is only in the driver's seat for legal reasons. It should say he's there to wrest control from the car whenever it encounters one of the dozens of edge cases it fails to handle, before it accelerates into a concrete barrier. It's probably more dangerous than a human driver, because if they had stats showing otherwise they would have promoted them. Early drivers took risks on the first cars because moving away from horses revolutionized society. People turn on Autopilot because they'd prefer to look at their phone instead of driving.


I've told this story here before, but I've had an oncoming Tesla change lanes onto my side of the road and drive right at me. Fortunately there was time and space for me to stop and the Tesla driver to take control again. It could have been very different.

Why should I "roll the dice" with me and my family?

And no, they weren't overtaking, etc - if you need more details I can provide them.


I'd like more details if you're willing.


Driving down an 80 km/h straight back road, single lane each way, with a car and a truck following me. The lanes are free of snow, but there is old ice/snow packed on the center line and some small drifts of snow, so the white lines are obscured in places. Two teenagers in the back of my car.

Coming towards me is a Tesla model 3, followed by a Peugeot.

The Tesla neatly lane-changes into my lane and drives straight at me. Closing speed is around 160 km/h. I hit the brakes and come to a halt. The Tesla driver seems to take control at some point, as the car first heads back toward his side but doesn't change back into his lane (maybe due to the Peugeot), then moves across my lane to the narrow shoulder on my side of the road, braking fairly heavily (as am I).

We both reduce speed quickly enough that we aren't going to crash. The vehicles behind me have also done so. As we pull level with him (facing the wrong way, on my shoulder), I can't see the driver, as I am in an SUV and he is low down and on the opposite side of the vehicles, of course. He has stopped. I continue. The car behind me stops to check on him.

The teenagers in my car saw the whole thing and were exclaiming about it.

I guess maybe his car was seeking the white line and couldn't find it in the snow, so it hunted into my side. I don't know though.


> Everyone sees musk as two faced and playing both sides but honestly - if you're an adult paying for a tesla then I'm fine with it being in your hands to decide if you're willing to roll the dice.

It also has a chance of killing me, a non-Tesla driver. Where's my informed consent?


It comes down to statistics. We deal with road-raging, texting, and drunk drivers on a daily basis. We know our risks each time we leave the house. Safety is exponentially better than what the past generation had to deal with.


If the statistics showed that Autopilot was safer than humans driving modern luxury sport sedans on the highway, then Elon would be blasting that out. So now drivers have to share the road with Level 2 self-driving cars sold to laypeople as something more.


Right, thinking like that is why we have laws against misleading advertising.



