Tesla asked officials to redact info of driver-assistance in use during crashes (businessinsider.com)
126 points by thunderbong on Aug 22, 2023 | 104 comments



I've had the FSD beta for about a month now, and not once has it gotten me anywhere without a disengagement; it has several times very nearly caused a crash. I know it's anecdata, but from all the YouTube videos and press I was really expecting it to be at least half-way decent. At the moment it's absolute rubbish and should not be allowed on the roads.


I have had both experiences. It has literally taken me all the way from Miami to Atlanta at least 6 times without issue, basically only disengaging at heavy road work, and then I turn it back on. I have also used it to go from Miami to Key West and back 3 times with almost no interaction other than road work situations.

That being said, when I visited areas with very little road traffic, like central Florida or country areas in Georgia, it is very hesitant, and I can only assume that's because of the lack of training data. Simple things like left turns become overly hesitant, or it drifts into a middle turning lane thinking it's a new lane because the lines are rough or non-existent.

I love using it for long trips; it has changed my quality of life when it comes to driving. I simply do not agree that calling it 'rubbish' is fair; that's a gross oversimplification of its faults in specific situations.


If you want to try something different for long trips, you can take a look at Mercedes cars.

Not only are they very comfortable and reliable for such trips, but the manufacturer also takes legal responsibility when Level 3 mode is activated, whereas if you use a Tesla, it is always your responsibility.

Means a lot.


For long road trips mainly on the highway, I don't think the Mercedes tech is particularly helpful. Their Level 3 solution is only Level 3 below 40 mph and during the day. Most road trips (at least in the US) are on highways at >50 mph.

It seems much more useful for city commuting where < 40 mph is much more common.


> but also the manufacturer takes the legal responsibility when level 3 mode is activated

Source? It turns out that was just marketing material from what I have read and they don't actually indemnify you.


You presume too much on what I currently own and have owned. Tesla's software is far superior even with its weaknesses.


Not going to assume what you have and haven't owned before, but your statement here is objectively false:

>Tesla's software is far superior even with its weaknesses.

https://www.engadget.com/mercedes-becomes-the-first-automake...

If Tesla's software was objectively better, why couldn't they get L3 approval in California?


Mercedes L3 has significant limitations.

It works only on highways, only in daylight, only when there's a lot of traffic, only under 40 mph. Tesla's software does not have those limitations, so yes, it's objectively far superior to the glorified 'follow the car in front and do what it does' software that Mercedes is pushing.


Yeah, there are significant limitations to camera-based systems. Although for Mercedes, I'd argue it's significantly more than:

>'follow the car in front and do what it does'

which is describing adaptive cruise control/radar-based cruise control, which has been on the market in vehicles for over two decades.

>Tesla's software does not have those limitations

Recklessly enabling a half-baked, vision-only Autopilot doesn't mean it's "superior". There's a reason actual self-driving car companies (Cruise, Waymo, etc.) report disengagements per mile as a means to show how "self-driving" their vehicles actually are.

Until Tesla self-reports this data, it's nothing but a toy, and I wish folks the best who actually trust their life with it. [0]

>Officially, Tesla describes Autopilot as "an SAE Level 2 driving automation system designed to support and assist the driver in performing the driving task," as cited by NHTSA. Autopilot is not an autonomous driving technology, but the new numbers suggest people are treating it that way, sometimes resulting in tragedy.

[0] - https://www.caranddriver.com/news/a44185487/report-tesla-aut...


He or she does not care, so I don't even bother. I have already discovered here that people hate Elon and therefore hate everything under his purview, regardless of the facts about the successes of Tesla's engineering teams, which have nothing to do with his personal politics or viewpoints.

This is fine; I enjoy my car, and since I am the highest authority on my own experience, I do not need their validation of what I have seen first hand.


https://www.sae.org/blog/sae-j3016-update

With Tesla's Autopilot, you are driving the vehicle.

With Mercedes new L3 system (which I'm going to assume you haven't ever used, since it isn't available yet on a vehicle for purchase), the vehicle is driving.

This isn't about Elon, it's about the capabilities of the vehicle you're sitting in. You're also ignoring TFA -- if Tesla's Autopilot is the best thing on the market, why are they asking officials to redact information on whether or not someone got into accidents while using it?


Easy to prod and be assured of a fan bias


If you use trains for long trips it will change your QoL even more.


You didn't say which direction QoL would change, so I guess this is technically correct.


Only in America will people crap on public transportation, at their own expense.


I guess you're welcome to take the 44-hour train from Chicago to Los Angeles if you want. Most folks will just fly there in a few hours.



What kind of a comparison is that, lol. The argument is car vs. train, not plane vs. train. Trains are always better than a car for the same planned distance.


This still really depends on locale. Don't get me wrong here, I love trains. I'm currently living in the UK, and we're taking the Eurostar to Amsterdam next month instead of flying, even though it's more expensive and slower. And when I was in the US, I took Amtrak from Seattle to New York.

But the reality is, trains are less flexible and generally slower than driving in the US. They may reduce stress, and they're definitely safer, but the trade-offs mean they aren't objectively better. For instance, Amtrak from Seattle to Portland is about 3.5 hours, and this is one of the fastest-traveling routes in the US (except the Acela); meanwhile, driving is a shorter journey on a good day, though it could be five hours or more on a bad day. Still, Amtrak is known for its delays, and there's travel to the station. When I lived in Seattle, Amtrak would probably take me around 5 hours, on average, door-to-door, whereas driving would still be about 3.5 hours on a good day, because I could start from home and go directly to my destination.

And, once you're at your destination, you have more options with a car. From Portland, I could drive out to Multnomah Falls; there are no good public transit options for that.

I wish trains in the US were better... but they aren't.


There was a train to Key West long ago. Now there is just the one highway. https://en.m.wikipedia.org/wiki/Overseas_Railroad


Interesting. We actually saw the rail lines when we visited Key West.


Waiting for the comment that goes like:

"but if you compare FSD with drunk teenage drivers you still get a lower number of incidents"

The point of tech is to improve the status quo significantly, not marginally.

No one will use a calculator that is wrong 2% of the time.

I know this is real life and no driver (AI or not) could perform perfectly, but AI should be WAY better than the average human driver instead of merely equal to it.


IDK, I would argue it is better than your average driver in a lot of cases, not just "drunk teenagers." On my daily commute I constantly see people on their phones while driving, eating, probably too tired, etc. A couple of times it has even reacted faster than I probably could and saved me when other drivers suddenly veer off.

https://electrek.co/2019/10/23/tesla-autopilot-safety-9x-saf....


Thanks for fulfilling my prediction.


any time ;)


I wish I could find the other HN comments from the last few years but I have seen your exact comment from different people over the last 2 years.

Why has nothing changed? If so many people are having problems, how is Tesla allowed to continue?


Between controlling all manned US space flights and all the communications allowing Ukraine to fight against Russia, Musk has a lot of influence, and the gov't doesn't want to rock the boat.


This isn't really special to Musk. The feds have been largely hands-off with businesses for decades; "Jesus, take the wheel" is basically federal regulatory policy.


Well yes, because the sole reason for "fully engaged driver" is to shift fault to the "driver" rather than Tesla.

Tesla's falsely named "Full Self-Driving" cannot drive the car safely, so they say you always have to have a driver in control of the car. But that requirement is demonstrably incompatible with human physiology, and it again directly contradicts the name.

We do not expect full attention from pilots in aircraft with autopilot, when they have potentially hundreds of lives under their control, but Tesla thinks it's ok to require that of car drivers and give them less time to respond to any sudden incident than a pilot has.


Instead of trying FSD with vision only, Tesla should try it with more sensors and charge more for the extra hardware. They cannot beat the laws of physics by using vision alone.


Well, they have a problem there. In 2016 they made a public statement claiming that "all Teslas now have all of the hardware they will ever need to be full self driving." It was a blog post and a key part of a lawsuit they lost. If they ever increase the hardware requirement for FSD, then they'll have to retrofit every car made since that post.


How is that a problem? They could just increase the cost of FSD.


More hardware can improve reliability of a few things but doesn't address the fundamental problem, which is that Tesla FSD has no coherent way to deal with novel situations, which happen all the time when driving. IMHO we won't see "Go to sleep and drive somewhere" until there is something like a general reasoning neural net on top of the ones doing the object detection and physics and such.

"That looks like a firetruck" says the object identication net, "but after reasoning about it, it is actually just a mural on the side of a building"

And nobody knows how to build such a thing yet so it might be a long time.


Though I agree with your point, I don't think it's impossible to do full self-driving with vision and an intelligent agent alone.

That being said, as far as I know the cameras they're using aren't good enough.


It may not be impossible, but it may well be feasible only a good 5-15 years after it's successfully implemented with the assistance of LIDAR (or other technologies).

LIDAR won't read the posted speed limit, it will only see the shape of the post. But it won't be blinded by reflections, it will never confuse the side of a truck with the sky, it won't miss an object or hole on the road due to poor contrast, etc. It's becoming very affordable, too.


You can be an excellent driver even if you're occasionally confused, blinded, etc. In my opinion, the reason it doesn't work today with vision alone is that it turns out more intelligence is required than Musk thinks when you're only using vision.


When I think back to the situations where I was blinded as a driver, I remember being able to help myself by blocking out the sun with my hand. This simple physical countermeasure is something that vision hardware currently cannot perform.


I agree with you, and, furthermore: Why handicap yourself when there's an alternative? It used to be completely unaffordable for a mass produced sedan, so I could understand that, but 10 years later things have changed, and insisting on vision-only reeks of stubbornness and stupidity at this point.


Also, current cameras are probably not as good as human eyes; human eyes are superior even to the latest iPhone Pro. When taking a photo you still have to think about things like whether the backlighting (sun) is too aggressive. Even HDR doesn't always help, and all that processing still takes ~1 second or more, while a car driving at 100 km/h has to do such processing at 60 fps minimum.


The "cameras" will never be good enough because the eyeball isn't a camera and has way more info to work with than just a 2D array of colors.

For example, your brain has much better depth info than just parallax in most cases, as it gets info about what is required to keep a distant object in focus and aligned in both eyes. These two cues will never be available to fixed cameras, yet they provide direct measurements of distance for your brain.


You also have memory to infer how fast an object is moving, and where it's trying to go, in contrast to the purely feed-forward neural networks Tesla uses.


Are you talking about the highway, or roads with stoplights? Normally, I rarely have issues when using Autopilot on the highway. On all other roads, though, it definitely isn't worth it.


I also have the FSD beta, and my Model Y can't even get out of my neighborhood successfully. I really wish I could get a refund.


If I were in your shoes, I'd check w/ a lemon law attorney, and see what they think about using that to get a refund. After all, the car doesn't work as described.


I'm not a contract expert by any means, but this is what it says on the order page:

"The currently enabled features require active driver supervision and do not make the vehicle autonomous. The activation and use of these features are dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions. As these self-driving features evolve, your car will be continuously upgraded through over-the-air software updates."


The legal tension here is that the feature is called "Full Self Drive", which has the plain intuitive meaning that the car can drive itself. Adding a bunch of weasel language in the fine print may not absolve Tesla from the implicit promises it makes with the name.


I think you could find a sympathetic judge:

> The activation and use of these features are dependent on achieving reliability far in excess of human drivers

The fact that you are able to activate and use FSD (albeit not always well), implies, in Tesla's own words, that they've done this (otherwise it wouldn't be available for activation and use).

> as well as regulatory approval, which may take longer in some jurisdictions

These are the real weasel words, and Tesla likes to use them regularly, to imply pretty heavily to their fans that "this all works, but pesky regulators are keeping it out of your hands".


> The activation and use of these features are dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions.

In other words, you can't rely on this until it becomes perfect. That is meant to imply it is already almost perfect, but upon careful reading it could also mean it doesn't work at all.


This is why I'm not convinced by arguments that FSD is no worse than normal humans: the ethics of the underlying company. To me it's like someone who regularly drives drunk pointing to the fact they've never had an accident.


As a current tesla owner I couldn't have said it better.

There have been multiple instances over the years where my car made completely inappropriate and dangerous decisions (like slamming on the brakes on the interstate or in a parking lot for no reason), and when I ask Tesla for more information about the incident they'll flat out lie and say it never happened, or that the car is or is not capable of what I'm claiming. It's lies and gaslighting all the way down, even down to specific repairs (claiming they fix something when they actually do no such thing).

At this point due to Tesla corporate's untrustworthy behavior I believe Teslas are a risk to drivers and pedestrians alike and will take every opportunity to make this known as a warning to prospective buyers.

Tesla, never again.


Tesla, who also created an internal team with the sole purpose to cancel service appointments and to thwart range complaints... I don't believe anything they have to say.

https://www.latimes.com/business/story/2023-07-27/tesla-rang...


And they only needed that internal team to gaslight customers into believing that it's normal for lithium batteries to just consistently underperform and for range estimates to almost always be wrong. The estimates are wrong because Tesla uses an over-optimistic formula instead of the industry standard, which is to project future range from the remaining fuel/energy and your average economy over the past ten minutes or so, often scaled down a touch to guarantee that the estimate hits zero before you run out of gas/battery.
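
For what it's worth, that conventional approach is simple enough to sketch in a few lines (hypothetical function and numbers, just to illustrate the idea; not Tesla's or anyone else's actual code):

    def estimated_range_miles(remaining_kwh, recent_wh_per_mile, safety_factor=0.95):
        # Project range from what's left in the pack and the recent average
        # consumption, scaled down slightly so the estimate hits zero
        # before the battery does.
        return safety_factor * (remaining_kwh * 1000.0) / recent_wh_per_mile

    # e.g. 40 kWh remaining, averaging 320 Wh/mi over the last ten minutes:
    print(estimated_range_miles(40, 320))  # ~119 miles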

They chose this route because claiming they get an extra 40ish miles per charge was more important to them than giving their customers accurate range estimation for the life of the car.


Yup. On paper you'd almost have to be an idiot to pick something other than a Tesla; however, the reality of its performance doesn't match what they claim.

I'm in the market for another car, and I periodically compare something to a Tesla and it's "Better", but after owning a Tesla I know what "Better" actually means.


Tesla is very good at gaming the EPA's rating system. In testing by Edmunds[0], every single Tesla has underperformed its range estimates in real-world conditions. Meanwhile, 52 of 53 non-Teslas have outperformed their EPA estimates, often by incredible amounts (20% for the BMW iX xDrive50, 59% for the Porsche Taycan, 36% for the Mini Cooper SE).

0. https://www.edmunds.com/car-news/electric-car-range-and-cons...


It made me think about the video where a person tests whether the car detects a small child crossing the road very quickly.

The answer from Tesla: "It's all lies! It is a competitor!", without addressing the main concern.


> There have been multiple instances over the years where my car would make completely inappropriate and dangerous decisions

Dumb question, but is this with the FSD on or off?


Not a dumb question. My car slightly predates FSD. The behavioral issues were a result of:

- Enhanced Autopilot ($5,000 option that I can't get a refund for)
- Emergency auto-braking (has to be manually disabled for each trip)

I commented because the lack of ethics of this company still apply regardless of the specific tech.


> Emergency auto-braking (has to be manually disabled for each trip)

Wow, what a nightmare!


Or that they've had several accidents, but just didn't happen to be "officially" drunk when the accidents occurred, even though there's evidence they were drinking shortly before...


> I'm not convinced by arguments that FSD is no worse than normal humans

Has this claim been made by an independent third party? (About Tesla's FSD, specifically.)


I found this study, which sort of makes this claim with a lot of qualifications: https://engrxiv.org/preprint/view/1973. It's comparing Autopilot to "normal" Tesla driving, which still includes auto-braking, blind spot warnings, etc. It is still based on Tesla's own data and has to make a ton of assumptions, but it seems like they did the best they could.

> By correcting for roadway usage differences between the Autopilot and active safety only data, much of the crash reduction seen by vehicles using Autopilot appears to be explained by lower crash rates experienced on freeways. While the raw crash rate shows an average 43% reduction in crash rate for Autopilot compared to active safety only, this improvement is only 10% after controlling for different rates of freeway driving. Correcting for age demographics likewise produced an 11% increase in the estimated crash rate. Several other factors may explain difference in safety rates of new vehicle technologies based on who is using them, where they are being used, and when they are being used. Some safety features cannot be used in rain or snow, for example, which may bias the data towards clear weather and lower crash rates generally. In another example, drivers often disengage Autopilot to change lanes or prepare to exit a freeway (Morando et al., 2020), both areas of increased crash risk. While Tesla includes crashes where Autopilot was deactivated less than five seconds prior to impact, there remains a potential for Autopilot use to be biased towards safer, within-lane driving. With more data regarding not only the crashes but also the use of vehicle technologies can allow for more accurate and thorough assessments of vehicle safety benefits.
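
To see why controlling for road type matters so much, here's a toy calculation (all numbers are made up for illustration and are not from the study): if Autopilot miles skew heavily toward freeways, which have lower crash rates per mile to begin with, the raw comparison flatters Autopilot even if it's no safer than a human on any given road type.

    # Hypothetical crash rates per million miles by road type.
    freeway_rate, city_rate = 0.5, 2.0

    # Assumed mileage mix: Autopilot is used mostly on freeways.
    autopilot_mix = {"freeway": 0.9, "city": 0.1}
    manual_mix = {"freeway": 0.4, "city": 0.6}

    def blended_rate(mix):
        # Aggregate crashes per million miles for a given road-type mix.
        return mix["freeway"] * freeway_rate + mix["city"] * city_rate

    ap, manual = blended_rate(autopilot_mix), blended_rate(manual_mix)
    print(ap, manual)       # 0.65 vs 1.4 crashes per million miles
    print(1 - ap / manual)  # ~0.54: a "54% improvement" purely from the road mix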


Nice! I've always been skeptical of coarse metrics like "X million miles per accident" because they don't differentiate between different driving environments like freeways and cities. I'm glad there's research breaking the data down in a more fine-grained way.


No.

In principle, even half-baked driver assistance can make things safer, but it all depends upon the way the driver is integrated into that system.

Humans are terrible at vigilance tasks, and if you ask them to passively monitor a system that gets things right 98% of the time, you're going to have a bad time. Even worse if you strongly imply that the system is good enough (e.g. "the person in the driver's seat is only there for legal reasons")


> Humans are terrible at vigilance tasks, and if you ask them to passively monitor a system that gets things right 98% of the time, you're going to have a bad time.

I understand this as a hypothesis but it hasn't been borne out by any of the Autopilot usage data. I suspect the reason is that humans suffer greatly from fatigue and ADAS systems reduce fatigue.


Humans being bad at vigilance tasks isn't a hypothesis, it has been thoroughly studied for aviation safety reasons and is one of the hundreds of social and behavioral changes that aviation made to better interact with the human machine.

Hundreds of people have died explicitly because well-trained people with thousands of hours of experience got distracted from their vigilance task. This isn't a hypothetical. There are bodies in the Florida Everglades.


That isn't what I'm saying.

I'm saying it was a reasonable hypothesis that "you're going to have a bad time" with a system like Autopilot, because of the vigilance concern you're referencing. But there is clearly more at play than vigilance, because it's not shaking out that way -- Autopilot (and similar lanekeeping + adaptive cruise systems from other manufacturers) aren't killing people left and right.

So, I was sharing my own hypothesis for why that may be: that the benefits of reduced fatigue are outweighing the drawbacks of distraction and complacency.


As mentioned by another poster, there is actual research[0] showing that, once controlling for additional factors, Autopilot actually increases crashes by 11%. This is utilising Tesla-provided data, too, which could mean the data itself is already biased.

0. https://engrxiv.org/preprint/view/1973


>I suspect the reason is that humans suffer greatly from fatigue and ADAS systems reduce fatigue.

I understand this hypothesis, but has it borne out by any independent analysis of Autopilot usage data?

Tesla is selling expensive software and claiming it's safer. It should be safer, I want it to be safer. But is it?


With specific respect to ADAS systems reducing fatigue:

https://www.sciencedirect.com/science/article/abs/pii/S13698...

> We investigated the effects of ACC and HAD on drivers’ workload and situation awareness through a meta-analysis and narrative review of simulator and on-road studies. Based on a total of 32 studies, the unweighted mean self-reported workload was 43.5% for manual driving, 38.6% for ACC driving, and 22.7% for HAD (0% = minimum, 100 = maximum on the NASA Task Load Index or Rating Scale Mental Effort).

https://www.sciencedirect.com/science/article/pii/S136192092...

> Interviewees reported being less mentally and physically tired—though most focused on the former—both while in transit and upon arrival at their destination. Twelve (12) interviewees mentioned this. The reduction in tiredness seems to be because, from the interviewees perspective, partial automation takes over a substantial portion of the driving task.


> I understand this as a hypothesis but it hasn't been borne out by any of the Autopilot usage data.

Which independent Autopilot usage data are you referencing?


I don't have any independent data, just what Tesla reports: https://www.tesla.com/VehicleSafetyReport

I've read the many critiques of Tesla's analysis, and I tend to agree the benefits are overstated, but I haven't seen any reasonable empirical case that Autopilot hurts safety on net.


>but I haven't seen any reasonable empirical case that Autopilot hurts safety on net.

Because Tesla does not release their data.


You can, however, compare Tesla death rates per vehicle sold vs. similar cars:

https://www.flyingpenguin.com/?p=35819


If you want to look at the underlying reports they can be found here:

https://www.nhtsa.gov/laws-regulations/standing-general-orde...

In particular here:

https://static.nhtsa.gov/odi/ffdd/sgo-2021-01/SGO-2021-01_In...

You can see that Tesla always reports ADAS version as: [REDACTED, MAY CONTAIN CONFIDENTIAL BUSINESS INFORMATION].

Almost every other company reports the version almost always if it is known. Some companies have many unknowns, but most of those crash reports seem to be from customer reports which appear to not have associated telemetry.

Of note, most of those reports with unknown version include un-redacted summaries and injury severity information, so they do seem to be good faith efforts, if somewhat lacking in actionable information.

In contrast, Tesla redacts all narrative and does not confirm injury severity. In over 700 cases, around 95% of their reports, they choose not to investigate whether an injury or fatality occurred.

In the ~5% with a confirmed injury/fatality, exactly one has no 3rd party confirmation. The other ~95% have no 3rd party reports of injury/fatality and lo and behold, no injury/fatality is reported.

This is straight up impossible if you look at the crash reports. There are literally tens if not hundreds of high speed oncoming, t-bone, object, and pole collisions all with no recorded injury/fatality. They are absolutely reporting in bad faith.


> In contrast, Tesla redacts all narrative and does not confirm injury severity. In over 700 cases, around 95% of their reports, they choose not to investigate whether an injury or fatality occurred.

Tesla has now "decided" that if active restraints or airbags are not deployed, that incident "does not count going forward".

As an automobile manufacturer, Tesla cannot in good conscience claim to be unaware that today's active restraint and airbag systems are far more nuanced and weigh multiple criteria when deciding to deploy (compared to initial/early implementations, which were essentially "if collision=true and speed >= X mph then deploy"). You can have a very significant incident (two that I've witnessed recently as a paramedic involved vehicles hitting stationary objects at ~30 mph) without airbag deployment. But if that was a Tesla on FSD that hit something at 30 mph and didn't deploy airbags, well, that's not an accident according to Tesla.

That also doesn't account for "incident was so catastrophic that restraint systems could not deploy", also "not counted" by Tesla. Or just as egregious, "systems failed to deploy for any reason up to and including poor assembly line quality control", also not an accident and also "not counted".


As far as I can tell that has always been their reporting methodology. As can be seen in Column CV, SV (subject vehicle) Any Airbags Deployed?, around 90% of all reports have Tesla airbag deployment. I have not been able to find reliable information on airbag deployment rate in crashes, but it is nowhere near 90%, so the report distribution is statistically unlikely if that was not already their reporting methodology.
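
If anyone wants to sanity-check those numbers themselves, a rough pandas sketch over the SGO incident CSV linked above would look something like this (the local file path and column names here are my guesses based on the descriptions in this thread; the real headers in the file may differ):

    import pandas as pd

    # Load the NHTSA Standing General Order ADAS incident reports
    # (local filename is an assumption; download it from the NHTSA page above).
    df = pd.read_csv("sgo-2021-01_incident_reports_adas.csv")

    tesla = df[df["Reporting Entity"] == "Tesla"]

    # Share of Tesla reports where the subject vehicle's airbags deployed.
    deployed = tesla["SV Any Air Bags Deployed?"].eq("Yes").mean()
    print(f"Tesla reports with airbag deployment: {deployed:.0%}")

    # Distribution of report versions (initial, 10-day update, etc.).
    print(tesla["Report Version"].value_counts())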


> In contrast, Tesla redacts all narrative and does not confirm injury severity. In over 700 cases, around 95% of their reports, they choose not to investigate whether an injury or fatality occurred.

The primary reason all those Tesla cases lack those details is that they're reported basically immediately when they happen, because all Teslas are connected to the mothership and report back as soon as a crash occurs.

The NHTSA page even explicitly points it out under the data limitations section, under "Incident Report Data May Be Incomplete or Unverified".

> This means that a reporting entity with access to vehicle telemetry may quickly become aware of an air bag deployment incident subject to the General Order, but it may not become aware of all circumstances related to the crash, such as surface conditions, whether all passengers were belted, or whether an injury occurred.

This is also the reason why Teslas are so overrepresented in the data. Almost no other automaker has connected cars, or the connectivity is, like GM's OnStar, a subscription service that not everyone has.

All other reports have to come in manually through actual crash investigations, which can take time, or not happen at all if nobody thinks those systems might have been activated or relevant. For example, a car using driver-assist technology getting t-boned by a manually driven car at an intersection will likely not raise any alarms about automated systems during the crash investigation. But Tesla's are reported, because Tesla immediately knows those cars were in a crash.

So in conclusion, as NHTSA has said, none of the data has been normalized, and lacks important contextual information to properly analyze it, and should thus not be used by itself to draw any conclusions.


No. Please do not make up or regurgitate baseless good-sounding bullshit to protect a bad actor.

https://www.nhtsa.gov/sites/nhtsa.gov/files/2023-04/Second-A...

As can be seen on page 16 of the order, outlining request format 1, each reporting entity is required to submit an initial incident report and a follow-up updated report 10 days after receiving notice of the incident. This is registered in Column D, Report Type, as a 10-day update.

As can be plainly seen in Column D, Report Type, and Column B, Report Version, the vast majority of Tesla reports are on version 2, the second updated report issued 10 days after initial notice of the incident. Most of the remaining, newer reports are on version 1, the report issued 5 days after notice of the incident, because 10 days had not yet elapsed as of the corresponding released NHTSA dataset.

Also note that reporting entities are allowed to voluntarily issue subsequent reports marked as Report Type, Update. Tesla has intentionally chosen to not do any investigation to evaluate the operational or safety characteristics of their incomplete product in use. That is completely and utterly unacceptable for a safety-critical product.

In addition, even if you were not colossally wrong on the nature of the reports, safety-critical systems require positive proof of safety. You do not get to bet lives on unproven systems. If no proof is available and no conclusions can be drawn, then the system is not acceptable for use. This is the absolute basics of safety-critical system evaluation. The absence of information or inconclusive results is not a defense, as you seem to think it is; it is an admission of guilt.

In conclusion, you are completely wrong in both the particulars of report submission and the generalities of what conclusions can be drawn from the Tesla reporting methodology.


> Almost no other automaker has connected cars

Wait, is this true for EVs? I've written off purchasing an EV entirely because I don't want a connected car and I thought they all were. If there are ones that aren't connected, that changes the equation quite a lot.


While I don't know about the details of what all the other EVs have, most of the legacy auto EVs model year 2020 and earlier were basically built on ICE platforms, and mostly have the same features except having an EV drivetrain.

Many models coming out now are on new dedicated EV platforms, so you'd really have to check them to see what connectivity features they might have.

Nothing about EVs requires connectivity, though; it's just that Tesla has caused a shift in direction among many other automakers.


"Tesla requested redaction of fields of the crash report based on a claim that those fields contained confidential business information," an NHTSA spokesperson told Insider in a statement. "The Vehicle Safety Act explicitly restricts NHTSA's ability to release what the companies label as confidential information. Once any company claims confidentiality, NHTSA is legally obligated to treat it as confidential unless/until NHTSA goes through a legal process to deny the claim."

There's nothing confidential about which systems were active. The driver knows this, assuming they're not dead. You don't get to claim something is confidential if it's information that's already out there.


> There's nothing confidential about which systems were active. The driver knows this, assuming they're not dead. You don't get to claim something is confidential if it's information that's already out there.

Therein lies another problem.

Tesla will happily (and without your specific agreement) release any and all telemetry, or use it against you, if they feel it will protect their reputation.

After one of the earlier fatal FSD-related crashes, Tesla held a press conference to defend FSD. They talked about "requiring driver attention" and said that, in this case, "the vehicle had issued a warning to the driver about their inattentiveness".

What they didn't say in that press conference, and was only revealed much later, though they obviously knew it at that time, was that while the vehicle had triggered an inattention warning:

1) It happened only one time, and

2) It happened eighteen MINUTES before the collision.


If the data showed that Tesla's driver-assistance features were correlated with fewer crashes and better outcomes I'm pretty sure Tesla wouldn't be trying to get the govt to redact this information. Elon would be Xing about it left and right.


It is really unfortunate that the verb form of X is a sort of common name in Chinese.

That sentence parses like "Elon would be Jerry about it left and right"


I kind of hate myself for even referring to it as X. haha


I think the really important part of the article is this:

"The Vehicle Safety Act explicitly restricts NHTSA's ability to release what the companies label as confidential information. Once any company claims confidentiality, NHTSA is legally obligated to treat it as confidential unless/until NHTSA goes through a legal process to deny the claim."

This can't have been done merely at the behest of Tesla; there must have been other forces at work to get that put into the legislation. So while Tesla is the proximate cause in this case, they are enabled by legislation that is hostile to road safety.


This type of language is found in many public safety type laws. It’s to encourage companies to cooperate with the public safety investigation without having to fear that their confidential information will be released. It may seem hostile to public safety on its face but it really isn’t - it’s harmful to public knowledge, which is hardly a proxy for public safety.

It’s certainly been abused by some companies but is likely beneficial on the whole because it encourages full disclosure and cooperation with the government public safety agencies. The opposite approach would be to not have the protection available and to have companies withhold information until forced to disclose by the government, most likely via a lawsuit of some kind which would take years to resolve. Years that could be used improving public safety.


This type of language is found in many public safety type laws. See e.g., Cybersecurity Information Sharing Act.



Wait, according to the article FSD is now 15k PER YEAR? Holy smokes.


Paying for the privilege of beta testing something that puts your life and your passengers' lives at risk.


Article is wrong. It's either $15k one-time or $200/mo.


Tesla pricing is wild. A base Model 3 is $40k. Increase that price by 50% to $60k if you want FSD beta and a color other than black or white.


per year doesn't seem right.


It's not. It's $15k one-time or $200 a month.


Nonetheless, that's quite a bit. Why would I want to pay that when I can plow my car into a gore point for free?


This "article" is pretty much regurgitating a fairly brief part of a much longer New Yorker article: https://www.newyorker.com/magazine/2023/08/28/elon-musks-sha...

I'd suggest reading the New Yorker article instead. There was a discussion on it, but most of the comments were flagged to death, so I'm not gonna link it.

Other bold claims include Musk personally talking to Putin and intervening in Ukraine's counter-offensive via Starlink, NASA officials saying he's basically treated as an unelected official, Sam Altman claiming he "wants the world to be saved, but only if he can be the one to save it" etc.


The article states that Tesla redacted whether ADAS was in use, which is incorrect. They redacted which version was in use.


Funny how it goes from "Full Self-Driving!!!" when they're selling it to "driver assistance" when somebody dies.


They should get sued for the FSD claim.


If they marketed it as "driver assistance" and not "full self driving," I would agree with the redaction. Assistance software is just that, but when you sell it as set-it-and-forget-it, then it is relevant to the story.


The reason this isn't getting more attention: There are 99 fatalities per day in the US from car accidents. There have been 17 fatalities from Teslas in AutoPilot mode since 2019 per WaPo.


17 fatalities the WaPo was able to independently confirm.

I'm assuming there are more than 17, but we don't know about them, because "redacted"


They redacted the version of driver assistance (Autopilot vs. FSD), not the fact that driver assistance was in use. The article does not make this clear, but another poster linked to the actual reports CSV, and it's clear that's the case.



