The aviation industry has long experience with aircraft automation leading pilots into trouble, and so does the NTSB. Here's a former head of the NTSB commenting on the 2013 Asiana crash at SFO: “How do we design this whole human-machine system to work better so we don’t lose the professionalism in the humans who are doing this?” Crash investigators are keenly aware of how a poor human-machine interface can lead pilots into a bad situation. That is the kind of thinking needed to make these kinds of systems work.
It's not about blame. Most air crashes have multiple causes, because the single points of failure were fixed long ago. That is somewhat different from the traffic-law-enforcement approach. The NTSB's previous investigation of a Tesla collision resulted in Tesla changing its system to enforce the hands-on-wheel requirement. That may not be enough.
Playing hardball with the NTSB is not going to work. The NTSB rarely uses it, but they have broad authority in investigations. They can get subpoenas. They can inspect Tesla's facilities, hardware, and software: "An officer or employee of the National Transportation Safety Board ... during reasonable hours, may inspect any record, process, control, or facility related to an accident investigation under this chapter." The NTSB even has the authority to publicly disclose corporate trade secrets if necessary. Aviation companies routinely cooperate, knowing this, and the NTSB seldom has to compel disclosure.
Incidentally, lying to the NTSB is a federal crime. The CEO of Platinum Jet Services tried to cover up safety violations that resulted in a crash. He became Federal Prisoner #77960-004 from 2007 to 2013.
The question is whether repeated incidents could cause the NTSB to suggest removing the Autopilot feature entirely (in its current state).
Your comment suggests that Tesla's claims about the driver having his hands off the wheel for x seconds before a crash could be wrong. If the sensors report that the driver's hands are off the wheel when they are actually on it, then the logged data about how long the hands were off the wheel should be considered suspect.
I hope that's being investigated.
Personally, I'm having trouble believing the driver who died in the recent accident ignored the warnings for six seconds before the head-on collision, especially when he knew Autopilot didn't work well at that section of road.
I'm not familiar with how the system works. If the warning engages, is there a guarantee that the driver will have to take over shortly? Or are there scenarios when the warning turns off by itself and so the driver could have been waiting to see if the car would correct?
[Edit: lolc and Vik1ng pointed out that the warning isn't related to unsafe conditions as I implied. It's used whenever the sensors think the driver's hands are off the wheel.]
Who cares if he had 5 seconds to see the barrier if he only had 0.5 seconds to realize that the car had gone into casual murder mode as it veered into the barrier? They make it sound like it had to have been driver error, using facts that are irrelevant to the question at hand. Your assumption that Tesla claimed he didn't have his hands on the wheel at the time of the accident is exactly what I'm talking about: Tesla's PR release is filled with weasel words designed to create exactly that impression.
I'm not even trying to suggest that Autopilot is less safe than an alert human driver, but one thing is clear: we certainly can't trust Tesla to determine how safe it is.
This video is quite relevant:
A fully alert driver trying (and succeeding) to reproduce this behaviour. Observe how long the barrier is visible, how long it takes him to react, and how close the car comes to hitting the barrier.
Musk has commented publicly (though somewhat obliquely) about this flaw before. He indicated that the company is building a map of reference data so that false positives can be filtered out automatically and real hazards can be found.
This is part of why trying to second guess a driver or autopilot is insane; a crashing and non-crashing car are almost identical, except for the crash.
It's foolish to oversell the capabilities of AP through marketing, as it's fairly easy to conflate reliability with observed behavior.
As long as the system mimics enough of the capabilities of full autonomy, and is oversold as such, customers will underestimate the risks and become victims of an unfinished product.
If you use this to market your car without any caveats, don't be dismissive of people who might read into it that your car is capable of self-driving, when in reality it's clearly nothing of the sort.
Owners see marketing occasionally. Owners see the reminder that they have to stay alert 100% of the time each time they turn on Autopilot. Owners get the alert that they need to put their hands back on the wheel. And so forth.
If you've got data about owners being confused, please share.
Meanwhile, next time an owner offers anecdata about what they personally have learned about Autopilot's capabilities, perhaps you could respond to that, instead of telling the owner that they're being dismissive.
You are wrong: "the driver’s hands were not detected on the wheel for six seconds prior to the collision."
Perhaps you shouldn't attack Tesla for using "weasel words" in their press release if you don't take the time to read it.
Edit: You seem to have edited your post without any disclaimer. Your edits were not visible when I responded. Most people on HN edit their comments with an "Edit: I was wrong about ____" rather than ninja-editing the mistaken claim away. This helps make discussions easier to follow and limits misunderstandings.
Originally, when this was posted, there was concern over whether "prior to the collision" referred to the six seconds before the collision or to earlier in the drive. In any case, Tesla is implying that the victim wasn't driving responsibly, and no data has been released to the public that would back up such a claim.
So, if the statement is actually carrying more information about the reliability of the hand-detection system when it sounds like it should be a claim about the driver's hands (the subject of the passive-voice sentence), it is misleading.
Tesla seems to think they know exactly what happened after only reviewing data collected from the car. They see what they want to see and have become blind to any other possibility. I think that's terrifying. I couldn't trust an automaker with that kind of arrogance.
Dumb question: if I have to constantly stay vigilant with 1-second reaction time because at any time my assisted-driving car might try to kill me (exactly how changes every time the software updates) -- isn't it less effort to just drive my car myself, so I mostly just have to worry about the drivers around me?
There were no warnings at that time; Tesla just worded it that way to mislead people. The statement only refers to warnings earlier in the drive.
This seems pretty clear:
"The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision."
Autopilot is a well-known word that suggests a certain level of autonomy, and a car that might kill you if you take your attention away for 6 seconds simply does not qualify for that name. Tesla changing the name from Autopilot to something better resembling reality would send a strong message to people to be more careful about using it while their attention is elsewhere.
Pilots are level 3 and upwards according to this classification. Level 2 systems are called assistants. If Tesla adhered to the industry convention, they would call their system an "assistant" and not a pilot.
The driver is licensed to operate the motor vehicle. It is their responsibility to be familiar with the safe operation of the vehicle.
> how to handle autopilot and its common errors and how to correct them?
Do you believe that anything additional was required here other than keeping the car in its lane?
Also, note that the beep is the driver taking over from autopilot.
But blaming this on the use of the term "Autopilot" is idiotic and discussion centering around it is silly when there are much more important questions to answer about the design of autopilot, other driver assistance systems, and the future of self-driving cars.
99% Invisible has a really interesting story about this topic:
Airplane autopilots really are solving a much simpler problem than even a level 3 autonomous car has to solve.
The most advanced autopilots can automatically take off and land, but that requires substantial ground equipment at each airport.
You say that it would take a massive investment, but realistically, existing signage, reflectors, and road markings probably cost more than the equivalent for an autonomous vehicle would.
The exciting thing about self-driving cars this time was that they worked on existing roads, making them financially feasible.
Perhaps final stages of ILS approach? That'd be the exception, but pilots are trained and certified for this - and definitely not allowed to snooze off then.
(I'm not a pilot, but have spent a fair amount of time and money on various PC flight sims; edit: and used to drive a Mazda 3 with a very good radar cruise control and AEB, before ditching it for a bicycle because getting old and fat).
GA autopilots may be more limited, but that’s not what the general public is thinking of when you say autopilot.
This is far from a defense of Tesla. On the contrary, the public's ignorance about what capabilities are implied by 'autopilot' suggests to me that Tesla should not be using the word.
I once had the privilege to sit in the cockpit during an entire flight. That was in the late nineties and probably would be completely out of the question nowadays.
The weather and visibility were shitty, and what was really interesting was that they initiated the landing automatically, but when the runway became visible (at about 400 meters) the first officer took over and landed the plane manually.
Two other things I learned were the sheer number of manuals they keep in that very cramped space (maybe maintained electronically nowadays), and that if you think they just key a couple of sequences into the flight management system and then play solitaire, you would be very wrong.
Both pilots were permanently active and on high alert during the entire one-hour flight. That may be a bit more relaxed on long-distance flights, but people who believe it's like driving a bus are really, really wrong.
It was an amazing and very interesting experience.
Tesla is claiming that if the driver's hands are not on the wheel, then they are not responsible for what their "autopilot" does.
Autopilots in commercial aircraft - the kind you'd fly from San Francisco to New York - are capable of far more than dumb cruise control, including navigation and landing.
If 'maintain pitch and direction' were enough to be 'autopilot', then my SO's '97 Toyota Avalon had 'autopilot'.
If I name something warp drive, are you then entitled to believe that you will now be able to travel faster than light? Will you blame me for naming the product incorrectly if you travel at a speed less than c?
Well, you are driving a car from a PayPal mafia founder. Why does everyone forget that Musk started out by ignoring banking regulations and quasi-legally laundering money, while later holding everyone's funds under arbitrarily enforced rules and completely ignoring user complaints? (bye HN karma :)
Seriously, why is it that everyone hates PayPal's ethics but loves Musk and holds him up as holy?
It is fairly easy to find out what PayPal used to be like when Musk ran it - just ask some older eBay sellers.
Not unique to PayPal. Deposit a check of an amount that's unusual for your "profile" at Chase and it'll sit on the money for three weeks.
When I said 'first new thing in aerospace since the 70s' I meant shipped. Things that never shipped don't exist.
The Shuttle I suppose could count but it was a mixed bag and in some ways a step backward from the Saturn V.
Seems like people have flipped from mindless Musk worship to being haters all of a sudden.
Tesla may fail but they validated the market for high-end electric cars. Before Tesla the popular narrative was that EVs are impractical, slow, have poor range, and can't possibly ever compete with ICEs. People argued that modern transportation was absolutely and fundamentally inseparable from oil, even spinning this into popular peak oil doomsday narratives.
The biggest threat to Tesla today is that, now that they've validated the market, larger and more experienced car companies are jumping on the EV bandwagon.
Perhaps the Delta and Atlas rockets were amazing innovations. They did great work on the recent Mars missions. I don't recall getting particularly excited for many Mars launches, though (the missions were cool).
Those boosters landing together, though, made me feel like a little kid watching a shuttle launch. But those boosters, man. Maybe that's not innovation, but that landing sure felt like a big moment to me.
I think the parent poster is trying to say, for a long time it felt like small refinements.
The funniest part of Tesla's statement is the fact that it basically renders "Autopilot" worthless. Or, at best, turns it into nothing more than an Adaptive Cruise Control system with an extremely misleading (and dangerous) name.
Tesla has been all too happy to let people believe that Autopilot is what the name implies. They only make a real effort to correct that misconception when something bad happens. Then they finally say "It's not actually autopilot, you're still supposed to keep your hands on the wheel and your eyes on the road at all times", to which we collectively shrug and say "Well then what the hell is the point of Autopilot? And why did you give it such a misleading name?"
You know what I really want? I want a vehicle to save me when I screw up. I love skiing, but it's a dangerous sport, and the most dangerous part is driving home. You're in the mountains in the dark, the road might be icy, and you're tired from a long day of strenuous exercise.
I want to see humans and machines working together. A car that could spot deer on or beside the road would save lives. They can be very difficult to see in the dark, and hitting one could send it through your windshield at highway speed.
Augmenting human ability has a lot of potential. Even just adding sensors that humans don't have would be huge. I bet those deer are way more obvious in infrared.
If you fall asleep while driving, it may prevent you from driving into a barrier or on-coming traffic. It's not meant to take over the act of driving from you, but to provide assistance.
...and be ready to correct it should it try to steer into objects. IMHO that's even worse than plain old "manual" driving --- at least in that case, the car doesn't have a mind of its own and won't suddenly decide to steer itself. It'll keep going in a straight line even if (absolutely not recommended on a public road, but a good way of checking the suspension and steering) you take your hands off the wheel.
To me, it sounds more like driving with Tesla autopilot is like being a driving instructor for a not-too-great learner.
This is literally what Tesla's autopilot system is.
It is adaptive cruise control and lane assist.
They do not claim to be self driving. It is not a level 3 system. It is level 2.
The purpose of autopilot is to have both the features of lane assist and adaptive cruise control.
What's the big deal? Why is this so surprising to you? This is what it is. Go buy a car from Google if you want a self-driving level 3 car.
If you don't want a level 2 system then don't buy it.
The FTC should require them to stop using the term Autopilot. The common understanding of that term makes it misleading.
Words matter and Tesla is trying to have the benefit of the term without the responsibility for what it implies.
I don't think the issue here is Tesla misusing the word. The issue is that the common (non-pilot) understanding of the term is wrong. People piloting heavy machinery have an onus to maintain their currency & proficiency, as well as be ready to correct for faults in their instrumentation and pilot aids.
Autopilot is intended as a tool to reduce pilot workload so you can focus on other aspects of maintaining correct control of the vehicle. (In an aircraft, it assists with aviating, leaving you better able to navigate and communicate.) Instead, what we are seeing with these assists in cars is that people use these pilot aids and then engage in unrelated distractions.
Airplane autopilot: Engage, divert attention for minutes, not die (consistently).
Tesla autopilot: Engage, divert attention for 10 seconds, die (also consistently).
Airplane autopilot != Tesla autopilot. The name is misused.
Also, take a moment to appreciate what you're saying: "It's ok to use this name on cars, because professional pilots know you can't completely rely on plane autopilots. People should know that, and if they die it's their fault."
The video you provided shows a very unusual case. Most autopilots on small aircraft are not really running software, so such "bugs" are virtually non-existent, and large aircraft use only tested-to-death systems that practically never bug out.
Newer autopilots in commercial airliners can do many things (follow GPS tracks or radio-navigation waypoints, control engine throttle, line up and bring the airplane to 50-100 ft over the runway, etc.), but they never do collision avoidance, landings, or terrain mapping.
Airliners also have alarms for low altitude (most have radar altimeters), but those prompt the pilot for action; they're not supposed to avoid anything on their own. Apparently a Tesla doesn't even do that.
The car is warning you if you take your hands off the steering wheel. I don't see how anyone but a moron would value their misconceptions about the term "autopilot" over the clear signals that your hands are required to be on the steering wheel.
I wish people would stop bringing this idiotic point about the term "autopilot" and instead talk about how the different designs of autopilot can encourage or discourage attentiveness in operators.
>I don't think the issue here is Tesla misusing the word. The issue is that the common (non-pilot) understanding of the term is wrong
In fact, because it's well-known that there's a common misunderstanding of the term (as you've acknowledged) but the company chooses to use it anyway, then that's sufficient to represent intentional misuse. They are leveraging this misunderstanding in their branding, then hiding behind the "real" meaning when it's convenient.
Companies test and invest heavily in their branding, which includes a full reasoning of the connotations associated with the words they choose. There is literally no way that Tesla is unaware of people's common misunderstanding.
So, maybe it's clearer if you look at it another way: why choose a word that could create any confusion when there are countless other choices?
>the pilot in command in this video is actively scanning for traffic, he is physically positioned to take control of the aircraft, he is paying attention to instrumentation, and is actively participating on frequency. In other words despite having an autopilot: he is still piloting the aircraft.
If a driver took a similar monitoring posture, there are many situations in which he or she would not have time to react to avoid an accident. There is generally a far greater margin of error, and more time for correction, when an aircraft's autopilot fails. This is why a system that requires such monitoring in an automobile is a fundamentally flawed design. There are too many situations in which there is simply not enough time.
Because drivers are expected to (a) give the system control of the vehicle but (b) recognize its failures and take back control within moments? That's superhuman and, at best, it adds the time to recognize the failure on top of the human's normal reaction time, with potentially devastating consequences.
And, remember, it's beyond "environment monitoring". Drivers must now correct for when the vehicle does not recognize a hazardous situation and also respond when the vehicle itself suddenly creates a hazardous situation (like veering towards a barrier). There is no amount of "environment scanning" that can predict such a malfunction.
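To put rough numbers on that margin (my own back-of-the-envelope arithmetic, not figures from any investigation): at 70 mph a car covers roughly 31 m/s, so even a fairly quick 1.5 seconds to notice the failure and react consumes nearly 50 meters of road:

```latex
d = v\,t \approx 31.3\ \mathrm{m/s} \times 1.5\ \mathrm{s} \approx 47\ \mathrm{m}
```

An aircraft at altitude usually has seconds to minutes of margin before an autopilot fault becomes unrecoverable; a car next to a barrier may have well under 47 meters.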
Pilots may realize that autopilot is a term that can refer to extremely simplistic systems that require constant pilot attention. The public at large that Tesla is selling to, however, equates the term with big airliner autopilots that could land the plane if needed.
Which is a tiny tiny percentage of the general population.
As has been said, it is what the general population understands "autopilot" to mean that matters. The gap between the niche and general understandings should not be something a company inserts itself into to mislead the public about its product's capabilities.
Yes, most people don't know the nuances of what "Autopilot" means in the aviation industry, but trying to educate the general public is a losing proposition vs just telling Tesla to rename the system.
I got into the habit of periodically giving it a good squeeze while resting my hand on it.
What I haven't seen discussed: Are these cars from other manufacturers having similar problems with crashing into things? Are Tesla crashes considered more newsworthy, which is why we hear more about them? Or are drivers of these other cars more attentive? Or do these other cars actually have better technology?
Just curious why the discussion is always Tesla "autopilot" vs Waymo "full autonomy" vs unaided humans, when other manufacturers are also putting level-2 "lane keeping" systems on the road.
Other car manufacturers are more careful than Tesla in what they let get out of the lab. For example, they wouldn’t have to send out a firmware update that, as a new requirement, forces users to keep their hands on the steering wheel (https://thenextweb.com/artificial-intelligence/2016/09/23/te...) because they wouldn’t yet dare deploy one that doesn’t.
And I don’t think their technology is worse than Tesla’s. See for example https://www.technologyreview.com/s/520431/driverless-cars-ar... (2013).
Because, ironically, Tesla and Waymo (and Uber) have been more successful at deliberately drawing media attention. The thing is, once you draw media attention, it tends to stick around even when the story goes someplace you don't want it to.
I trust the intelligence of drivers, but this brings up a good point. Could calling it "autopilot" be perhaps a bit much right now? I'm not claiming it's false advertising kind of bad, but I can see it being better for marketing than giving a reasonable sense of what the feature is capable of.
This is not what I experience. Maybe the sensor in your steering wheel is faulty? For me, the warning goes away as soon as I lightly jiggle the steering wheel (I stop as soon as I encounter any resistance).
.@brand on a public twitter with enough followers to care about usually gets a pretty quick response.
Hell, how many times have YC companies prioritized support issues because a user posted it here on HN? I see it all the time with coinbase and crew, even googlers hanging out seem to be able to get account bans reviewed relatively easily.
I would argue the /r/teslamotors sub would be BETTER than going to Tesla directly. The motorist who died in this crash had contacted Tesla multiple times about this issue, and since the crash plenty of others have made videos of similar behavior at similar median splits across the interstate system.
If the driver had made those complaints public, perhaps others would have corroborated his account and/or recreated the issue with video proof, putting public pressure on Tesla to fix the problem quickly.
When an airplane's autopilot crashes the plane, what do you think is the primary cause?
If it does not detect hands on the wheel it disengages.
The argument they make is that having hands on wheel is, effectively, having the driver in the loop as a part of the safety system ready and able to take over 'instantaneously' if necessary.
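As an aside, that "driver in the loop" scheme amounts to a simple escalation loop. Here is a minimal sketch, assuming invented thresholds, polling rate, and sensor names (Tesla's actual logic is not public):

```python
import time

def alert(kind):
    """Stub: surface a visual or audible warning to the driver."""
    print(f"{kind} warning: put your hands back on the wheel")

# All thresholds are invented for illustration; Tesla's real values
# and escalation logic are not public.
WARN_AFTER_S = 15.0       # visual warning after this much hands-off time
CHIME_AFTER_S = 30.0      # audible warning
DISENGAGE_AFTER_S = 45.0  # give control back to the driver

def monitor(hands_on_wheel):
    """Escalate warnings while no hands are detected, then disengage."""
    last_hands_on = time.monotonic()
    while True:
        if hands_on_wheel():  # e.g. a torque or capacitive sensor reading
            last_hands_on = time.monotonic()
        elapsed = time.monotonic() - last_hands_on
        if elapsed > DISENGAGE_AFTER_S:
            return "DISENGAGE"  # the driver is now the safety system
        elif elapsed > CHIME_AFTER_S:
            alert("audible")
        elif elapsed > WARN_AFTER_S:
            alert("visual")
        time.sleep(0.1)  # 10 Hz polling, also hypothetical
```

The whole argument hinges on `hands_on_wheel()` being trustworthy, which is exactly what's in dispute here.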
As noted in the article, the information being released includes blaming the victim and other PR spin. This doesn't serve the public or further highway safety; it furthers Tesla's commercial goals.
If Tesla's withdrawal hampers the investigation, let's hope the agency uses its subpoena power to get the info it needs to determine the cause of the accident and what Tesla needs to do to prevent similar incidents from happening in the future.
So Tesla files a FOIA request and claims they are the transparent ones. This is a bad attempt at PR.
I disagree. People need to know that Autopilot doesn't eliminate their responsibility for safely driving the vehicle and paying attention. People need to know they HAVE to respect the Autopilot warnings. If the warnings come too often in an area and become too cumbersome, then don't use Autopilot there. Waiting a year for that to be released would not be in the public interest. Neither would a year of unanswered BS from short interests and competitors, considering that Tesla's vehicles overall are incredibly safe.
I think NTSB's action to remove Tesla is fine. Tesla felt they had to release such information (and there's a good argument for that, IMHO), but that is against NTSB's policy as it could compromise the investigation. So be it. Removing them may be best for all.
To tell people that they have to pay attention 100% of the time, you don't need any of the details. You have stated that over and over again; it just doesn't do enough, because a few weeks later people have forgotten all these statements, and new customers, or people who just like the car but don't follow the company, might never have read them.
You could make the same silly linguistic argument about "cruise control."
The problem with this strategy is that literally no one is clamoring for a self-driving car that requires you to have a license, both hands on the wheel, both eyes on the road, and to be alert at all times.
If you have to mention it, fine, but engineer the ways you mention it to have low penetration and low retention.
That is only to weasel out of the legal issues. On the side, they do marketing that pushes the feature as vastly more capable: for example, Elon Musk tweeting videos that show cars traveling by themselves on unmarked roads, and declarations such as "The human behind the wheel is only there for legal reasons; the car is driving itself."
So the actual marketing is based on this unrealistic projection, and only these "statements", which the public will soon forget, are based on its true capability.
Which supports my point. These responses have likely had a bigger influence from lawyers than from PR folks. The goals of those two groups are almost diametrically opposed. The best thing for Tesla from a legal perspective is to downplay the capabilities of Autopilot; the best thing from a marketing and PR perspective is to exaggerate them. In these responses, Tesla is doing the former. They are choosing the legal response over the PR response. It should also be noted that this is probably the best move from a public-safety standpoint. I don't know why people are claiming these responses are PR spin, or how they could possibly sell more cars.
Regarding the rest of your post, I was not talking about Tesla's general marketing. I haven't seen Musk do anything like you claim. It is certainly misleading and irresponsible if he does do that and implies that any current Tesla can drive all by itself.
Please take a look at the very start of the video attached here. It says: "The person in the driver seat is only there for legal reasons. He is not doing anything. The car is driving itself."
Now the next one, where Elon Musk retweeted a video of a Tesla navigating an unmarked road. There is a reddit discussion about the same here.
Both videos also show attentive drivers who are able to jump in at a moment's notice. The first video is a promo video and the second is from a Tesla fan. I wouldn't be surprised if they were specifically edited to showcase the car's features. I can certainly understand how someone might view these videos and think that Autopilot is flawless, but I don't think Tesla is doing anything unethical in them.
I wouldn't be surprised if there's some fear over allowing uncertainty to fester for a year while awaiting the results of the investigation. But I think there's a much bigger risk in being perceived as unwilling to play by the same rules as everyone else.
The problem with this is that Tesla clearly believes that autopilot plus an attentive driver is safer than an attentive driver alone. I think that is most likely true.
It is possible that autopilot plus an inattentive driver is less safe than an attentive driver alone. I think this is plausible, but I don't think there is any real evidence to prove this one way or another.
The question then becomes does Tesla have an obligation to save the people in the second group from themselves and in turn put the people in the first group in greater danger?
That avoids prejudging whether it was primarily a defect with the vehicle, or driver error, or both.
Above all just respect the process. It's important to yield to an impartial entity when people have been hurt or killed.
Any faith the driver had in Tesla's autopilot system would be due to Tesla's marketing. If Tesla believes its self driving capabilities are not quite ready for market, maybe they shouldn't be selling it and calling it an "autopilot".
And Tesla said they'd continue to cooperate with the investigation, so why do you think a subpoena is suddenly needed?
We don't know and that's exactly the reason why NTSB has their disclosure policy in place.
It's to stop biased actors from dominating or setting the narrative early, before impartial actors have had a chance to corroborate what happened.
What if the car malfunctioned and the warnings weren't sounding? What if the warnings were sounding even though Huang had his hands on the wheel (as another poster above indicated has happened to them)?
I simply can't take what Tesla says at face value given the huge vested interest they have in controlling the story here.
> Over a year ago, our first iteration of Autopilot was found by the U.S. government to reduce crash rates by as much as 40%.
Why don't they talk about Autosteer alone instead of Autosteer+AEB (and other features) combined? Why not discuss how many accidents Autopilot could have caused if drivers had not corrected it? Why is there no commentary on whether their active driving system would be better run as a passive driver-error-correction system, as other manufacturers do?
> In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.
Ahh, so a Tesla is safer than a 10-year-old beaten-down car with no airbags driven on country roads; I never expected that. A Tesla is safer than even a motorbike, what a surprise. And just having Autopilot hardware in your car (even disabled) makes you 3.7 times less likely to die. Does that say anything about Autopilot, or are they just touting non-Autopilot premium safety features?
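For what it's worth, the 3.7× figure is nothing more than the ratio of the two quoted rates, so it inherits every confound baked into them:

```latex
\frac{320 \times 10^6\ \mathrm{miles/fatality}}{86 \times 10^6\ \mathrm{miles/fatality}} \approx 3.7
```

Nothing in that division controls for vehicle age, price class, road type, or driver demographics.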
> There are about 1.25 million automotive deaths worldwide. If the current safety level of a Tesla vehicle were to be applied, it would mean about 900,000 lives saved per year.
So, motorbikes in the Himalayas are more deadly than a Tesla on California highways; who would have guessed. What does this say about Autopilot again?
> We expect the safety level of autonomous cars to be 10 times safer than non-autonomous cars.
Elon Musk also expected driverless cars to be everywhere in 2017. And "autonomous" doesn't need to mean hands-off driverless cars; they keep conflating assistive-feature benefits with their hands-off technology.
Of course, none of this data is very meaningful. You have to compare the rates of Autopilot users to non-Autopilot users of similar demographics, and have all crashes recorded, not just fatalities, to get close to comparable samples.
Road fatalities per 100,000 inhabitants per year (2013, WHO):
Seriously, would Tesla's Autopilot lower the crash rate in the UK? In Thailand? I guess it wouldn't change much in European countries, and any good car would do the same in poor countries.
Via https://en.wikipedia.org/wiki/List_of_countries_by_traffic-r..., here is another table (same countries, same order) giving road fatalities per 10⁹ km in 2015:
It's hard to gather statistics about things like that, because a near miss just looks like a normal drive unless the driver records it, which is unusual unless they're driving with a dashcam or have had something happen in the same place multiple times.
Saying clearly that abdicating your responsibility to drive the car can result (and has resulted) in deaths does serve public safety. It's critically important that people driving this section of road (and others like it) know about this as soon as possible.
Call it victim blaming if you want, and certainly there was some spin, but it is unfair to say that this has no safety element.
One such example of this is at https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil...
> Yep, works for 6 months with zero issues. Then one Friday night you get an update. Everything works that weekend, and on your way to work on Monday. Then, 18 minutes into your commute home, it drives straight at a barrier at 60 MPH.
> It's important to remember that it's not like you got the update 5 minutes before this happened. Even worse, you may not know you got an update if you are in a multi-driver household and the other driver installed the update.
The crash rates are for all miles travelled before and after Autopilot installation and are not limited to miles with Autopilot actually engaged.
Both buckets also have plenty of miles with adaptive cruise control on - the data can't be used for a human vs autopilot comparison.
And even taking those considerations into account, there is a lawsuit asking for the data used to reach those findings, which NHTSA and Tesla have both resisted.
Tesla has never proven that Autopilot is safer than a human driver.
That's (a) comparing Teslas to Teslas, not to all vehicles, and (b) not necessarily comparing safety, but crash rates.
However, if you want to compare Autosteer vs. no Autosteer, you do need to compare Teslas to Teslas, and to compare crash rates, not fatalities.
A lot of other car-safety innovations have a failure state in which you're no worse off than you would have been had the innovation not existed (e.g., crumple zones, crash-safe interiors, highway design/signage, ABS, etc.).
Reducing the absolute number of car deaths is always a good thing, and the number of deaths prevented by Autopilot is undoubtedly greater than the number of deaths caused by it, but looking at that comparison makes people uneasy, especially if you can picture yourself on the right-hand side of that equation.
Personally I'm not sure where I stand. I do find it uncomfortable having ever-increasing levels of software complexity in my car.
> Tesla Was Kicked Off Fatal Crash Probe by NTSB
> The National Transportation Safety Board told Tesla Inc. on Wednesday that the carmaker was being removed from the investigation of a fatal accident, prior to the company announcing it had withdrawn from it, according to a person familiar with the discussion.
> NTSB Chairman Robert Sumwalt relayed the decision in a call to Tesla’s Elon Musk that was described as tense by the person because the chief executive officer was unhappy with the safety board’s action. NTSB is expected to make a formal announcement in a release later Thursday, said the person, who spoke on the condition of anonymity.
Technically, that's sorta true. By releasing information and opinions about the ongoing investigation, Tesla decided not to be a part of it.
"You can't fire me, I quit!"
Also laughable from Tesla's statement. "We are committed to transparency as a company, and NTSB was telling us we couldn't release information about AutoPilot, and that was unacceptable to us".
Huh, all those people looking for service manuals may be curious to hear about Tesla's corporate commitment to transparency.
Any 'artificially intelligent' car that can't avoid running straight into something solid right in front of it is dangerously stupid. Elon's age is starting to show.
The issue at play here is far larger than a car having a crash. The issue I was referring to is Tesla's conduct in responding to the crash. That is cultural, and it affects both the process of creating and releasing products and the process of responding to their failures. If your product can kill people, and let's be clear - cars can easily kill people, SDV or not - you need to act like professionals when something happens. Playing the blame game in public and violating the expectations of a body like the NTSB shouldn't garner trust, no matter who is right at a technical level.
EDIT: fixed my abysmal grammar.
So instead, Asimov's Rule#1: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
Call me a sceptic, but stating that with absolute certainty is troubling. As an engineer who knows how a lot of software and hardware systems are actually implemented, 6 seconds is such a minuscule time frame. Data about hand position could be sitting in a write cache somewhere waiting to be flushed: inside the monitoring service, the OS, a RAID controller, or a storage controller, or even de-prioritized by a busy IO scheduler. Then, at the moment of a crash, that data might never make it to storage for recovery.
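Here is a minimal sketch of that failure mode using ordinary file I/O (the log format and class are hypothetical; this is nothing like Tesla's actual code):

```python
import os

class EventLog:
    """Toy event logger illustrating where buffered data can be lost."""

    def __init__(self, path):
        self.f = open(path, "a", buffering=8192)  # user-space buffer

    def log_buffered(self, msg):
        # May sit in the process buffer, then the OS page cache, then
        # the drive's write cache -- any of which can vanish in a crash.
        self.f.write(msg + "\n")

    def log_durable(self, msg):
        self.f.write(msg + "\n")
        self.f.flush()             # drain the user-space buffer
        os.fsync(self.f.fileno())  # ask the OS to push it to the device
        # Even this isn't bulletproof if the drive's own cache lies.
```

Whether the vehicle's logger behaves like `log_buffered` or `log_durable` at the moment of impact is exactly the kind of thing only an independent investigation can establish.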
I'm not saying that this happened, but if the hand-monitoring service is glitching out, it will take a long time to resolve the issue completely, mostly because to a remote observer looking only at the data (without a camera installed in the car) it would simply look like the hands were not on the wheel.
Also, it doesn't help that for Tesla this seems to be a convenient way out of these kinds of crashes: they can just say the driver's hands were not on the wheel, and that's that. Who else even has the data and can decode it? How can we be sure the data was not tampered with?
This needs to be investigated properly, and not just by Tesla; they are a biased party.
It's a geological amount of time to a real-time system, which anything related to controlling a car absolutely will be. "Multiple seconds of unpredictable latency are OK" is an attitude that doesn't get you far when interacting with the physical world, and that's why there is an entire discipline dedicated to dealing with it that has to use special chips (e.g. the ARM R series), special OSes (e.g. VxWorks), and special tools (special IDEs, oscilloscopes, special cameras) to rise to the occasion.
If they let a "latency doesn't matter" system near any of these data streams, that's the scandal.
The logging service that writes it down isn't really critical to the operation of the car. Even the notification has a geological amount of time before it has to alert the driver, compared to many of the other systems that run on a near-self-driving car. I wouldn't be surprised if less critical components shared a non-realtime timeslot between the actual realtime tasks.
Or hardware failure, possibly injecting faults into the system. See Page 14: https://www.hq.nasa.gov/alsj/ApolloLMRadarTND6849.pdf
s/will be/should be/ -- defects in design notwithstanding. Note that the latency involved with the sensing equipment (skin conductivity? pressure sensor?) could be large enough to be significant.
Of course, I don't actually know what the logs look like, but I'd rather err on the side of assuming that the designers of a real-time system for monitoring driver safety took into account the idea that, when the car is involved in a severe crash, they should have as much accurate data recorded as possible, rather than leaving it sitting around for ages in unflushed caches.
The car should be conservative about the threshold it uses for the sensors in determining if there are hands (because it would rather err on the side of telling someone to put their hands back on, than err on the side of thinking someone has their hands on when they don't), but engineers analyzing the logs after a crash wouldn't have to use that same threshold when trying to determine if the driver was in control of the car at the time of the crash.
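A toy illustration of that two-threshold idea (the numbers, names, and sensor model are all invented; this is not Tesla's logic):

```python
# Live system: err toward nagging the driver. Forensics: count weaker
# grip signals too when reconstructing what the driver was doing.
LIVE_THRESHOLD = 0.8
FORENSIC_THRESHOLD = 0.3

def hands_detected(samples, threshold):
    """samples: raw grip/torque readings normalized to [0, 1]."""
    return [s >= threshold for s in samples]

raw = [0.10, 0.45, 0.50, 0.20, 0.90]
print(hands_detected(raw, LIVE_THRESHOLD))      # [F, F, F, F, T] -> keep warning
print(hands_detected(raw, FORENSIC_THRESHOLD))  # [F, T, T, F, T] -> ambiguous
```

The same raw trace can support "hands not detected for six seconds" under one threshold and "light grip present" under another, which is why releasing the conclusion without the raw data settles nothing.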
Plus, it's a pretty common complaint in Tesla forums.
On the other hand, this is effectively an admission by Tesla that there are serious problems with the autopilot system. It should never guide the car into a barrier, no matter where the driver's hands are.
You are vastly overestimating things here.
Critical functionality in a car would rarely if ever involve non-deterministic delays. Furthermore, the OS itself (if there even is one) would be a real-time OS.
But even if we assume such delays are present in a Tesla steering wheel monitoring task, absolutely none of the delays you listed would approach anything near a second...
It might not be safety-design-critical to have a record of the hand-presence sensor with high resolution, merely high enough resolution for safe operation of the vehicle.
Flushing these records to storage is an independent function for the sake of service/support/maintenance and is likely not a regulatory requirement. Though regulatory agencies are bound to be interested in the data.
That may be why they were removed, there's an inherent conflict of interest when only Tesla engineers are able to decipher Tesla's crash data.
with the "last update" (the one that triggered the NTSB reaction):
as it allows to easily identify the "boilerplate" parts, but, more than that, the "first update" closes with:
"Out of respect for the privacy of our customer and his family, we do not plan to share any additional details until we conclude the investigation.
We would like to extend our deepest sympathies to the family and friends of our customer."
So, logically, either the respect for the family vanished or Tesla concluded their investigation (in three days' time).
I don't understand the change. They already cooperated with the NTSB on the first Autopilot death (back in 2016) with a much simpler PR statement that didn't upset the NTSB.
- Original: https://www.tesla.com/blog/what-we-know-about-last-weeks-acc...
- Update: https://www.tesla.com/blog/update-last-week%E2%80%99s-accide...
- Newest: http://abc7news.com/automotive/exclusive-tesla-issues-strong...
None of which addresses the multiple videos of people reproducing the accident at the same and other locations, and none of which takes any responsibility for this likely bug (lane-centering in non-lanes) introduced in an over-the-air update.
Even Reddit's Tesla Motors sub, which used to be a 24/7 Tesla celebration, has been pretty negative about Tesla's handling of this incident, and I'm talking about regular posters and verified Tesla owners.
Mr. Musk seems like he's becoming increasingly erratic and controlling, and there's a record of him being unable to take criticism. This is a pretty intense form of criticism.
Given all that, it seems fair to assume that Musk is under a lot of pressure at the moment; and stressed-out people make bad decisions.
Now Tesla is kicked out of the investigation, so I assume any leverage they had to help with the investigation is now gone.
The investigation will continue, and the NTSB will conduct it fairly; Tesla just won't be involved. I understand both sides: Tesla desperately wants to tell its side publicly, and the NTSB wants all participants to stay quiet until the investigation completes.
Tesla just gave up its opportunity to share its own perspective within the investigation. That could have consequences.
Why should they wait a year before making a public statement?
And releasing it doesn’t prejudice the investigation in any way.
2. As the NTSB noted, part of the reason for these rules is also PR-related - they want to make sure that other parties' decision-making processes on information-sharing aren't warped by the need to get favorable information out in public faster.
3. Most importantly - showing no respect for the procedures of the agency investigating you indicates a mentality that puts you above third-party judgment.
...says nothing about the hands. It only says something about the sensors, until there is conclusive evidence from a secondary source.
> TESLA: "The only way for this accident to have occurred is if Mr. Huang was not paying attention to the road, ..."
Nobody knows what happened. Huang may have had his hands on the wheel, which the system failed to register, and the car made a turn that he wasn't able to correct in time without hitting other vehicles.
The other hand asks: what don't we know that's causing them to act like this?
Moi? I would presume the NTSB is being manipulated somehow, in some way, to slow Tesla down and/or knock them from their perch. Look at their valuation. Certainly there must be long-time auto-industry egos who find it offensive.
Something is drifting sideways but it feels like there are other less obvious forces and influences involved.
And, if there is evidence Tesla has been negligent, the only way they can win is by not being forced to release this evidence, perhaps by attacking the NTSB as being biased against Tesla.
The radar can't distinguish between overhead signs, trees, and things on the side of the road. It only tracks moving things; otherwise there would be too many false positives, and the car would keep braking every few minutes. Other Level 2 autonomous systems have the same limitation.
You might recall that a Tesla hit a stationary fire truck in January. This was caused by a similar limitation in their software. The Tesla was following a car at slow speed; when that car suddenly moved out of the way to avoid hitting the fire truck, Autopilot accelerated to match the set speed and hit the truck. To summarize: if a moving car suddenly moves out of the way, a Tesla will almost always hit whatever is stopped further down the lane. They even call this out in their user manual.
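A toy sketch of the stationary-return filtering described above (invented names and numbers; real radar trackers are far more sophisticated):

```python
EGO_SPEED = 27.0  # m/s, roughly 60 mph

def moving_targets(radar_returns):
    """Keep only returns that appear to move in the world frame."""
    kept = []
    for r in radar_returns:
        ground_speed = EGO_SPEED + r["relative_speed"]
        if abs(ground_speed) > 1.0:  # drop "stationary" clutter:
            kept.append(r)           # signs, bridges... and stopped trucks
    return kept

returns = [
    {"name": "lead car", "relative_speed": -2.0},    # moving, so tracked
    {"name": "fire truck", "relative_speed": -27.0}, # stationary in world frame
]
print([t["name"] for t in moving_targets(returns)])  # ['lead car']
# When the lead car cuts away, nothing is left tracked ahead, so the
# system accelerates back to the set speed -- straight at the filtered truck.
```

The filter exists precisely to avoid phantom braking under every overpass, which is why simply "turning it off" isn't an option for a radar-only stack.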
Tesla is hoping to get its vision capabilities to the point where it can identify obstructions using cameras working in conjunction with the radar, but there is no committed timeline for this.
LIDAR avoids this by using directed lasers to build a 3D outline of all surrounding objects. A system relying on LIDAR would have challenges in rain and falling snow, but under normal conditions (where most Tesla Autopilot deaths have occurred - hitting a semi from the side, hitting the barrier recently) LIDAR would have been able to avoid the accidents entirely.
However, if and when Tesla's vision capabilities are able to detect obstructions, they might be much cheaper and more effective than LIDAR in all kinds of weather conditions. I hope they get there soon. I also wish they would fix their marketing to more accurately reflect the capabilities of the driver-assist system and stop misleading the public.