They identify a subset of vehicles and run an analysis on just those vehicles which seems to determine airbag deployments increased for those cars, and then run a regression that states the more miles driven the less likely an airbag is to deploy. They state this is unexpected but don’t explain what was wrong with their data to cause it.
The headline numbers they do present, in a footnote:
“The summary of Column AX (Miles before Autosteer) is given in this worksheet as 64,788,137. The summary of Column AY (Miles after Autosteer) is stated to be 235,880,377. We counted 86 airbag deployments “before Autosteer” in this Table and 192 deployments “after Autosteer.”
Basic math shows airbag deployments per mile decreasing, from one per ~753k miles to one per ~1.23m miles, right?
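As a sanity check on that arithmetic, here's a quick sketch using only the figures quoted in the footnote above (the calculation itself is my own, not from the article):

```python
# Figures quoted from the footnote (Columns AX/AY and the deployment counts).
miles_before = 64_788_137        # miles before Autosteer
miles_after = 235_880_377        # miles after Autosteer
deployments_before = 86
deployments_after = 192

# Miles driven per airbag deployment in each period.
rate_before = miles_before / deployments_before
rate_after = miles_after / deployments_after

print(round(rate_before))  # ~753,350 miles per deployment
print(round(rate_after))   # ~1,228,544 miles per deployment
```

Taken at face value, that's roughly 40% fewer deployments per mile after Autosteer, which is exactly the headline number TFA argues rests on undercounted "before" miles.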
The gist of TFA is that those “before” Autosteer miles are significantly undercounted. Without understanding more about what the labels “Previous Mileage before Autosteer Install” and “Next Mileage after Autosteer Install” mean, and what it means when they are not reported, it’s hard to say.
But there do seem to be several significant cases where airbag deployment count prior to Autosteer was increased for certain cohorts of vehicles where there was no increase in total mileage before Autosteer. Obviously a car with zero miles can’t have crashed, so this is what calls the entire calculation into question.
I think this is one of the main points of the article. The NHTSA had the same data, and made a very sensational graph, without any clarification on what the headings mean. The authors essentially recapitulated the initial analysis and then figured out its shortcomings.
> They identify a subset of vehicles and run an analysis on just those vehicles which seems to determine airbag deployments increased for those cars
The article clearly explains why that subset was the most important. This was the only subset for which complete data was available.
> and then run a regression that states the more miles driven the less likely an airbag is to deploy. They state this is unexpected but don’t explain what was wrong with their data to cause it.
Actually, they do offer explanations in their discussion. They even refer the reader to the discussion right after they post the table. (i.e., See more on this topic in the Discussion section).
Also, see my initial point; this is not 'their data' so much as what was relied upon by NHTSA to make the 2016 claim about autosteer safety.
> “Previous Mileage before Autosteer Install” and “Next Mileage after Autosteer Install” mean, and what it means when they are not reported, it’s hard to say.
See my first point. However, they show pretty reasonably that those two headings could mean 'recorded mileage right before the update' and 'recorded mileage right after the update'.
> Obviously a car with zero miles can’t have crashed, so this is what calls the entire calculation into question.
This is another one of their points. They state multiple times that, if the data can be trusted, it doesn't support the NHTSA's claim.
The article is actually very careful on all the points you raised, and would merit a close reading before being dismissed.
The author uses the redacted version, doesn’t attempt to find an explanation for anomalies, proceeds to analyse the data based on faulty initial assumptions, and compounds this with a cherry-picked example where the poorly understood numbers best match those faulty assumptions.
When there is no explanation for “airbags deployed before 0km on odometer” there is no point analysing the rest of the data. It’s all bogus, especially the stuff that looks like it makes sense.
The fundamental fault of the author’s analysis is trusting data that has been shown to be untrustworthy. In the same dataset in which some cars deploy airbags before ever being driven, the author assumes the data that matches their model of validity is magically more accurate and thus worthy of study.
The author continues to trust this data because it supports their biases, even while pointing out the errors in the rest of the data. How is this not cherry picking?
The entire study is bogus because it starts with bogus data and attempts to make claims about reality based on a subset of bogus data that can be misinterpreted as less bogus.
It’s like picking the few pages of manuscript from the infinite monkeys room which have meaningful phrases on them and claiming that obviously monkeys are really good writers.
This is covered in the Discussion section, but not with much clarity. I think it makes more sense if you invert it:
cars that have been in a crash accumulate fewer miles than cars that have not been in a crash
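A toy simulation (my own illustration with made-up numbers, not from the article) shows why that inversion matters: even when every car faces an identical per-mile crash risk, the cars that crash end up with fewer odometer miles, simply because the crash truncates their mileage.

```python
import random

random.seed(0)

P_CRASH_PER_MILE = 1 / 500_000   # hypothetical constant hazard, same for all cars
HORIZON_MILES = 100_000          # miles each car would drive if it never crashed
STEP = 1_000                     # simulate in 1,000-mile increments

crashed_miles, clean_miles = [], []
for _ in range(10_000):
    miles = 0
    crashed = False
    while miles < HORIZON_MILES:
        miles += STEP
        if random.random() < P_CRASH_PER_MILE * STEP:
            crashed = True   # crash stops the odometer here
            break
    (crashed_miles if crashed else clean_miles).append(miles)

# Identical per-mile risk, yet crashed cars show fewer accumulated miles.
print(sum(crashed_miles) / len(crashed_miles)
      < sum(clean_miles) / len(clean_miles))   # prints True
```

So lower mileage correlating with crashes doesn't require mileage to be protective; it falls straight out of crashes cutting mileage accumulation short.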
I'm not going to hold my breath waiting to see the data these claims are based on.
I know the data is there to help prove or squash claims like this. I hope it gets out there one day.
I think the biggest benefit, safety-wise, is that it lets me put less mental focus into staying in the lane and spend more time focusing on my surroundings. It's easier for me to keep up with the traffic behind me (like a car that has crept up into my blind spot) and any potential traffic about to cross my path (like a possible red-light runner coming in hot). In heavy traffic it's super helpful; the auto cruise control is very human-like and will build a gap when someone cuts you off instead of suddenly making one by slamming on the brakes.
There are AP users who prefer to use their time checking on that text message. Those people don't help the statistics / human gene pool.
But what we should compare is supervised Autosteer against an assisted human, i.e. a human driver using lane-keeping assist. If Tesla offered such a service, we would be able to compare. Until then, everybody is making unreasonable comparisons, in my opinion.
>Having had a front-row seat to the data, what's your take? Do you think Tesla's Autosteer helps, hinders, or keep accident rates unchanged?
From my subjective viewpoint of the data - Tesla's Autosteer overall helps to reduce accidents. But only when the operator is fully aware of the system's functions and alerts.
Driver assistance systems are wonderful technology and should be standardized like seatbelts, airbags and ABS are. The underlying issue is slow adoption and use, not a lack of functionality. How the alerts and information about impending caution, take-over-required situations, and accidents are presented to the driver is the main hurdle to widespread adoption.
Tesla autosteer has a "Delightfully Counter-Intuitive" (as Musk might say) way of teaching drivers how the system functions and what its abilities and limits are, thereby reducing accident rates - in my naive (data) opinion.
Main point being, driver assistance systems could help reduce accident rates....as long as drivers know how to use them, how they function, and what all their literal bells and whistles mean, to the point where you're always supervising the system (whether you realize it or not).
You can't just throw that second qualifier on there. Whether Autopilot helps reduce accidents is 100% dependent on whether users take it as an excuse to let their attention wander from the road. If the behavior of many Tesla fans on Twitter is any indication, they already think it's fine to treat it like a full self-driving system: https://twitter.com/Scobleizer/status/1092477048482717696
And that's to say nothing of the fact that Elon Musk himself was shown on 60 Minutes letting Autopilot drive, in traffic, with his hands off the wheel and Lesley Stahl in the passenger seat. The user's manual may say to keep your hands on the wheel, and in court when faced with an Autopilot-related lawsuit Tesla always argues that drivers are 100% responsible for the car at all times, but Musk & Tesla are nonetheless happy to irresponsibly encourage the false perception that their cars are self-driving.
IMO constant in-car monitoring of driver attention (i.e. a camera focused directly on the driver's face) is the only way these "kinda-self-driving-but-not-really-when-it-counts" systems can be deployed safely.
The question is why the data is secret. I understand why Tesla does not want to share the training data, but I am referring to the safety records and related statistics.
There is literally an app to track the progress of the hundreds of FOIA lawsuits outstanding at any given moment. The federal government routinely denies FOIA requests and forces applicants to pursue them in court. If attributing this to Trump would help you internalize this reality, then have a look at this. According to MIT, agencies are deliberately not investing in IT systems that would facilitate FOIA.
Now that your objection to a key premise of this story has been comprehensively discredited, perhaps you might make the necessary world view adjustment and reconsider your conclusion.
BTW, the foiaproject links above concern only federal FOIA; similar state laws exist, and state-level FOIA lawsuits are legion. Your governments are actively thwarting FOIA at every level.
Has anyone got a full report on the NHTSA study?
"GOTTA BE THE OIL & GAS INDUSTRY!"
This is really just a way for Tesla and Elon Musk fans to inject a cloud of distracting squid ink into discussions of any remotely damaging info.
If you are looking for some third party validation, Edward Niedermeyer is a journalist who has followed Tesla & Elon Musk closely for years and is taking this study very seriously: https://twitter.com/Tweetermeyer/status/1094318155285946368
Ah, looks like he just protected his tweets. Probably worried about his employer or federal investigators looking into what he’s spending all his time on.
The idea that federal investigators are taking any interest in Tesla shorts is, OTOH, pure Tesla bull fantasy. (Actually, that's not quite true: the SEC's San Francisco office reportedly spoke with one short seller to gather tips about the company.)
Anyone who wants to follow me without tweeting that my skepticism of Autopilot is tantamount to murder, and that "God will judge me" for it (as Omar here did), is welcome to send me a follow request.
At worst it is saying someone at NHTSA did the calculations incorrectly — a bold claim they admit even they can’t say for certain because of redactions.
Someone calculating the data incorrectly doesn’t make AutoPilot any more or less safe than it has been since the NHTSA reviewed the data in 2007. In any case an analysis from 2007 is hardly relevant today given how much the software has changed.
Potentially, but the data is now available and you can repeat the analysis yourself, which is a better position than we were in two days ago.
> a bold claim they admit even they can’t say for certain because of redactions.
I'm not sure where they claim this - their lawsuit actually asks for all communications related to the NHTSA figure in question.
> Someone calculating the data incorrectly doesn’t make AutoPilot any more or less safe than it has been since the NHTSA reviewed the data in 2007.
Note the year in question is 2016.
In fact, we did not know how safe autosteer technologies were - this analysis shows that the report so many of us had relied on was, at best, poorly constructed.
> In any case an analysis from 2007 is hardly relevant today given how much the software has changed.
Again, the correct year is 2016. This could be true - hopefully Tesla will release a response with the data made available for independent analysts to assess the safety of their autosteer technology.