Tell HN: Tesla rear-ended me on autopilot, no one will investigate
413 points by raylad 32 days ago | hide | past | favorite | 397 comments
Yesterday I was stopped at a red light on a highway and a Tesla Model S rear ended me at high speed, twice! There were two impacts somehow. It was a 4-car pileup and my car was destroyed.

The driver said she was using Autopilot, that an OTA update had had a problem earlier in the day, and that she had actually had the same kind of collision on Autopilot previously!

I talked with the NTSB who said that they won't investigate because no one was killed.

I talked with the NHTSA who said that they can't take a complaint: only the driver can file a complaint and they will only investigate if they "see a trend".

This seems like a serious issue that someone should be looking at. Does anyone have any idea about how to get this actually investigated to see if it was another Autopilot failure?




I have a Model 3. The autopilot was acting weird while driving on the freeway, sometimes disengaging, but mostly not tracking the lane smoothly. For an unrelated reason, I opted to view my camera footage, and was shocked to find that my camera was completely blurry. I was driving down the freeway at 75 mph, being guided by a camera that couldn’t pass a standard driver’s vision test.

Tesla came out to replace the camera, and it appeared that a film had been deposited on the windshield, obscuring the camera.

Living in Phoenix, my car is parked outside, facing south west. During Covid, I’d go many days at a time without driving, and in the summer, the interior temperatures can easily rise over 150f. Within the camera enclosure, there’s a reflection absorbing material, which likely creates a perfect miniature greenhouse.

I believe the glue in the camera housing melted/evaporated at these elevated temperatures and deposited on the windshield.

Concerned that others would experience this problem, I opened a case with the NHTSA. Crickets.

There could be many people driving around on autopilot, with obstructed vision, due to this same failure mode. It’s something Tesla could easily check, and something NHTSA should be investigating.

For something as safety critical as a forward facing camera, you’d expect both Tesla and NHTSA would be investigating. I have no indication that anything has happened as a result of my filing. Possibly because nobody else has reported the issue - maybe because nobody else is actively viewing their camera footage? There’s no other way for a Tesla owner to be aware of the issue. Frustrating.


Tesla and Carelessness, name a better duo.

From removing radar from their cars [1] to having their $200k super car be criminally underbraked. [2]

Tesla has a low regard for human life, and exemplifies the worst of silicon valley's progress at any cost approach. Elon is undeniably intelligent, but his ethics are dangerously sidelined by his need to create products that might not be ready yet.

[1] https://www.reuters.com/business/autos-transportation/tesla-...

[2] https://www.youtube.com/watch?v=Hn9QWjxFPKM


Radar removal improved accuracy rather than harming it, they don't actually sell a car at the price point you listed, and Tesla's award-winning safety record suggests that Tesla does care about human life.

The factual content in your comment is decisively lacking.

Careless of you, I think.


"Radar removal improved accuracy rather than harming it..."

Reference, please.

When more sensors don't perform better than fewer sensors, either your hardware or your software is flawed. Removing a sensor from the design because your system doesn't work with it speaks poorly of your product.

Even following that simplistic logic: if Autopilot isn't safe, how could they dare not remove it?


> When more sensors don't perform better than less sensors, either your hardware or your software is flawed.

Consider that 99 cameras plus one radar does not make a system perform better if the task is identifying whether the color blue appears in an image. If we add ten thousand radars, it doesn't produce an improvement; in fact, it gets worse, because the sensors draw more power and add a lot more complexity.

> Removing a sensor from design because your system doesn't work speaks poorly of your product.

This is a convoluted way of saying that making improvements to things is bad. That is nonsensical.

> Even following that simplistic logic, if autopilot isn't safe, How could they dare not remove it?

They did remove it.


Just to show mathematically that they're right, consider two sensors that both measure the same thing. One is Norm(mean=0, std=0.00001) and the other is Norm(mean=0, std=10). The true quantity in the real world is always zero. Suppose in this scenario the two sensors return 0.00001 and 11. Let's say we average the results of the two sensors. The average, about 5.5, is far farther from the true value than the first sensor's reading was. The less reliable sensor is adding noise to our measurement. It isn't high utility.
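A quick simulation makes the same point over many trials (a sketch using the distributions above, with naive unweighted averaging):

```python
import random

random.seed(0)
N = 10_000
good_err = fused_err = 0.0
for _ in range(N):
    good = random.gauss(0.0, 0.00001)  # precise sensor
    bad = random.gauss(0.0, 10.0)      # noisy sensor
    fused = (good + bad) / 2           # naive unweighted average
    good_err += abs(good)              # error vs. the true value of 0
    fused_err += abs(fused)

# The naive average inherits half the noisy sensor's error, so it is
# orders of magnitude worse than the precise sensor alone.
print(good_err / N, fused_err / N)
```

The fused estimate's mean absolute error lands around 4, versus roughly 0.00001 for the good sensor by itself.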


A crafty person here will draw on ideas like kalman filters and sensor fusion to notice that as the number of measurements increase you can gain precision even in the presence of an unreliable sensor.

This is very true, and it might be where the bad intuition that having multiple unreliable sensors is good comes from. Multiple noisy readings can be averaged to get a more reliable reading if the distribution of reading errors is normally distributed. You can take ten Norm(0, 10) readings and average them, and the result will follow a normal distribution with a standard deviation lower than that of the distribution we sampled from: by basic statistics, the average of n such readings has standard deviation 10/sqrt(n).
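That averaging claim is easy to check numerically (an illustrative sketch, not a sensor-fusion implementation):

```python
import random
import statistics

random.seed(1)
SIGMA = 10.0
N_SENSORS = 10

# Each trial averages 10 iid Norm(0, 10) readings; over many trials the
# averages themselves are distributed Norm(0, 10 / sqrt(10)) -- tighter.
averages = [
    statistics.fmean(random.gauss(0.0, SIGMA) for _ in range(N_SENSORS))
    for _ in range(20_000)
]
print(statistics.stdev(averages))  # close to 10 / sqrt(10) ≈ 3.16
```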

There are a few reasons why this doesn't apply in this case:

1. The measurements aren't going to be of the same thing. At time step t, the state of the world is potentially different from t+1. Since we have only one radar sensor, averaging the results of the two sensor readings doesn't give us the true position. Kalman filters address this with a dynamics function, so this problem isn't impossible to solve.

2. So let's say we solve it. Even if we did measure the same thing, we have another problem: we don't have time to take advantage of the central limit theorem. Yes, multiple noisy readings of the same thing average to the true reading, but since we have only a split second to decide whether to slam on the brakes to avoid an accident, we don't have the capacity to wait for the noisy sensor and noisy dynamics function to converge to the true measurement. But let's say we did, even though we don't.

3. The radar sensor's noise isn't actually normally distributed. It's more accurate in certain low-visibility conditions, because radar sees through occlusions, but less accurate in certain environments, like going under an underpass. Since the noise isn't normally distributed, you don't have the guarantees from statistics that you were hoping to rely on.


From the keynote of CVPR, the premier annual computer vision event comprising the main conference and several co-located workshops and short courses.

https://cvpr2021.thecvf.com/

https://www.youtube.com/watch?v=g6bOwQdCJrc


thanks for intro to that car review channel. really good stuff


I don't know much about this but wasn't Tesla getting safety award after safety award in testing and that translating into lower insurance rates?


My understanding is that they are safe in the event of a crash, not that they have the best crash avoidance.


That's a patently hilarious non-sequitur answer to the OP saying HE got rear-ended by a Tesla. HE is the one with whiplash and a totaled car. Who CARES how safe the stupid Tesla that did it was....


So, it is safer "in the event of a crash" for the people inside the car, but if crash rates are higher (statistics pending) it might be more dangerous for everybody both outside and inside of the car.


Exactly. Tesla is 100% trying to keep you safe on the inside while disregarding safety on the outside. Just look at the Cybertruck, pedestrian safety is clearly their last thought.


Doing well in crash tests means the car does well in those specific scenarios, but may not mean much about other safety issues.


The first several model years of the Model S were generally considered to be among the safest vehicles in the history of automobiles.

https://www.businessinsider.com/tesla-tesla-model-s-achieves...


I continue to believe that minor differences in car safety ratings are red herrings. (In a previous life I worked as a mechanical engineer in car manufacturing.)

It reminds me of benchmarks on MNIST datasets, to decide which model is better. It is like patting yourself on the back for funding better helmets, when your bike infrastructure is straight out of mad max. The real problem is somewhere else.

Crashes almost never occur like we see in these crash tests. The prime culprit is what you can call 'dangerous actions'. If a Tesla driver is far more likely to be distracted (FSD, touch controls) or more likely to be out of control at speed (insane acceleration with terrible braking), then a Tesla is an unsafe car, irrespective of a marginally lower likelihood of death in ideally controlled circumstances. Taking standard customer behavior into account is one of the basics of design. If you market your Level 2 system as 'Full Self-Driving', then a few idiots will let the car drive while sitting in the back seat. If your car can't brake effectively past 120 mph, then maybe it shouldn't accelerate to that number in the blink of an eye.

I am also ignoring the safety risk posed by these vehicles towards anyone else on the street. It has long been my complaint that the SUV & Pickup craze is actively making the roads unsafe. They have worse blind spots & can kill with the kind of efficiency that sedans can only dream of.

Tesla's safety ratings are misleading too. For one, Tesla's high safety has to do with being one of the first truly electric cars on the market. Electric cars are far less likely to roll, and they are heavy, so their crash zones can absorb more energy. Electric cars are thus objectively safer in straight-line collisions. [1] But, that's like saying that your military tank is incredibly safe while crushing dozens of cars in front of you. Safety should be about how safe the streets are with your car on the road, vs not being on the road. Alas, this is a problem with the car industry at large. So, Tesla isn't really to blame for this one.

[1] https://www.iihs.org/news/detail/with-more-electric-vehicles...


"But, that's like saying that your military tank is incredibly safe while crushing dozens of cars in front of you. Safety should be about how safe the streets are with your car on the road, vs not being on the road."

I recently witnessed an almost head-on collision between an H2 Hummer and a Chevy Malibu. The Hummer lost control (I think due to speeding) and swerved into oncoming traffic. The H2 flipped onto its side but was barely damaged; the Malibu was completely mangled.


An "autopilot" that requires a driver's attention is not an autopilot.

If people really understood that the car's driving is less reliable than their own, they might not turn it on at all. Imagine teaching a 16-year-old to drive while commuting, every day. NTY.


> An "autopilot" that requires a driver's attention is not an autopilot.

Weird; I thought that's exactly what an autopilot was.


And they probably still are, along with the rest of the models.

Assuming you're actually the one driving them that is, and not a broken ass camera.


Where I live Tesla has quite high insurance rates. Mostly because it’s very expensive to repair but also because Tesla drivers tend to speed and cause more accidents than other drivers.


Perhaps the drivers didn't mean to speed...


Tesla doesn't make a $200K car...


They may not be in production yet, but they are asking $200k for the Roadster (and $250k for the Founders Series), which might be the supercar I presume the parent is talking about. The Plaid, while not $200k, still clocks in at $150k.


>They may not be in production yet, but they are asking $200k for the Roadster (and $250k for the founders), which might be the super car I presume parent is talking about

Well, that's the problem. Claiming that a car that isn't even in production yet has an "underbraking" issue is misleading at best.

Regardless, I don't think that person was talking about the upcoming Roadster, because they added a link as a citation in "having their $200k super car be criminally underbraked. [2]", and [2] is a link to a video review of Model S Plaid by a big car youtube channel. I have no idea why they linked that specific video, because it was a glowing review of the car. The reviewers were giddy and threw tons of praise at it in pretty much every aspect.


I meant the plaid. Sorry.

You are right, Tesla makes good cars. The rub is pretty much about carelessness on part of Tesla in safety design.

I specifically linked the video because they are even handed in their reviews and not one of those who put down cars just because they don't like the CEO.


Thanks for explaining your point, I really appreciate it.

Mostly because the original reply seemed like it was just inserting a video that doesn't support the criticism drawn in the comment, baiting people who will read the comment without watching the video. But I get the point you were trying to make now.

While I disagree, I cannot in good conscience not upvote a point made well.


That's odd since Tesla seems proud of their AI expertise. Isn't something like "image is blurred" a fairly pedestrian (heh) exercise? Especially given that you would have related data, like what speed the car is going, if you're parked, etc.
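For what it's worth, a crude blur check really is a pedestrian exercise. The variance-of-Laplacian heuristic is a common one (a numpy sketch, illustrative only, and not a claim about Tesla's actual method):

```python
import numpy as np

def blur_score(gray: np.ndarray) -> float:
    """Variance of a discrete Laplacian; low values suggest a blurry frame."""
    lap = (
        -4.0 * gray[1:-1, 1:-1]
        + gray[:-2, 1:-1] + gray[2:, 1:-1]
        + gray[1:-1, :-2] + gray[1:-1, 2:]
    )
    return float(lap.var())

# Sharp synthetic frame: a checkerboard has strong edges everywhere.
sharp = (np.indices((64, 64)).sum(axis=0) % 2) * 255.0

# "Hazy" frame: the same image after a simple 5x5 separable box blur.
k = 5
hazy = sharp
for axis in (0, 1):
    hazy = np.apply_along_axis(
        lambda r: np.convolve(r, np.ones(k) / k, mode="same"), axis, hazy)

print(blur_score(sharp) > blur_score(hazy))  # True: haze tanks the score
```

A car could track this score per camera over time and alert when it drops well below its historical baseline.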


They do report if the camera can't get a good image. Glare from the sun is the most common reason. It is certainly possible that there is a window between when vision is degraded enough to be dangerous and when it alerts the driver, but a single occurrence isn't proof of a widescale problem. That is the type of thing that should be investigated in more depth, but it is hard to get anyone to care when it is only a theory that isn't even yet connected to a single accident.


This does sound like something that should be the subject of a recall investigation.


Genuine question: what temperature are cars expected to be stored at safely? I'm not talking about engine temperature, just literally the ambient temperature of the garage. I'm not trying to excuse or accuse anyone here, but are cars supposed to be manufactured such that they can sit in 150F garages for days at a time without any maintenance?


Because cars sit out in the sun in hot countries all day (unavoidably), the automotive supply chain has developed quite rigorous standards for pretty much everything in a car.

Typically every electronic component must be rated from -40 to +105 C. I'd imagine there's similar requirements for all the interior parts of a car too. Cars have a lot of glass, so get very hot in direct sunlight and quickly reach ambient on cold nights.

The reason cars take so long to "catch up" to consumer electronics (e.g. it's only in the last year or two that large LCD displays have started appearing in cars, or that LED headlights have become commonplace) is that the car design cycle is very long - typically about 7 years. Tesla decided to do things differently, which meant they could iterate much quicker. It also meant they used components not qualified for automotive use. Mostly they got away with it, but there have been some issues (for example, their large displays started failing in hot countries).


There are a lot of cars that get parked in the hot Arizona sun everyday, where temperatures outdoors can exceed 100 F and interior temperatures can exceed 150 F[1].

[1] https://www.desertsun.com/story/life/home-garden/james-corne...


I would expect SOME vehicles to do so just fine.

Really hot weather in the Continental USA can be pretty toasty, and if a garage was designed more like a crummy solar furnace, I would expect to see it hit those temperatures.

My car has survived at least 150F interior temperatures parked outside during a sunny Pacific Northwest heatwave. I'm estimating that temp because I had some plastic in the car, kept in shade, that was rated to lose stiffness at 150F, and it got a bit floppy after the car sat outside in sunny 105F+ weather for weeks on end.

It can be hard on lead-acid batteries and shorten their life; some lithium-ion battery chemistries top out at 130F ambient operating temperature, and the ABS plastic that dashboards can be made of is rated to about 176F. If it's 150F ambient, the dash could be even hotter.


When the Tesla technician came out to replace the camera anti reflection shield, the plastic just crumbled in his hands.

Btw, they never actually replaced the camera, just the anti reflection shield, and cleaned the windshield and the camera lens.


> my car is parked outside ... the interior temperatures can easily rise over 150f

I live in Arizona as well and this is why you'll see everyone in parking lots parking under trees in the summer, with empty spaces in between.

Teslas have overheat protection (turns on the AC when the interior temp exceeds 105F) for a reason (one of them being that it's cheaper to manufacture cars that don't need to withstand higher temperatures).


Your garage is not 150F, and certainly not for days at a time.


It’s not parked in a garage. Outside, facing southwest, in the Phoenix summer.

Tesla’s overheat protection only works for 12 hours at a time. If I were commuting, then maybe this wouldn’t have happened, but during Covid, the car was just sitting there for days without being driven. Tesla doesn’t offer a way to extend overheat protection past 12 hours, even when plugged in.


Indeed mine certainly is not, but apparently the parent commenter's routinely reaches 150F.


I assumed they were referring to the interior of the vehicle rather than the interior of a building.


I suppose teslas could come with warnings, "this vehicle is not intended for use in Arizona".


I wonder if the US government is afraid to kill their golden goose like what happened with Boeing.

EV is the future and China is moving fast.


EV != Tesla, and Tesla != Autopilot. Also, maybe hydrogen is the future?


> Also maybe hydrogen is the future

It definitely isn't if we are going to move away from fossil fuels.


You can get H+ by cracking water. You “just” need a source of cheap energy.

Storage is tough — those protons are really tiny and it’s hard to keep them from escaping


> You can get H+ by cracking water. You “just” need a source of cheap energy.

Well aware. But we don't have a source of cheap energy. And whatever source of energy we have, using it to produce hydrogen is very wasteful.

First, you need electrolysis. Around 80% efficient. However, in order to get any usable amount of energy per volume out of it, you need to either compress hydrogen (using more power) or liquefy it (way more power, boiloff problems).

Now you need to transport it, physically. This uses more power, usually with some big trucks driving around, just as we have today with gasoline and diesel.

This will get stored into some gas station equivalent. All the while losing mass, as those protons are hard to store, indeed.

Now you can drive your vehicle and refill. Some inefficiencies here too but we can ignore them unless you need to travel long distances to refuel.

This hydrogen will generally be used in a fuel cell. The output of the fuel cell is... electrical energy (+water)

You could skip all that, use the electricity from the power grid to directly charge batteries. No hydrogen production plants needed, no container challenges, no diffusing through containers and embrittlement to worry about. No trucks full of hydrogen going around. Electricity is found almost everywhere, even in places where there's little gas infrastructure.

Mind you, hydrogen has another disadvantage, other than thermodynamics: it ensures control is kept with the existing energy companies. We would still need to rely on their infrastructure to refill our cars (much like gas stations). They would like to keep it that way. Ultimately it doesn't really matter what's the fuel we are using, as long as we keep driving to gas stations.
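The efficiency chain above can be put into rough numbers. Every stage figure below is an illustrative ballpark assumption, not a measured value:

```python
def chain(*stages: float) -> float:
    """Multiply per-stage efficiencies into an end-to-end efficiency."""
    total = 1.0
    for s in stages:
        total *= s
    return total

# Assumed stage efficiencies (illustrative only):
hydrogen = chain(
    0.80,  # electrolysis
    0.90,  # compression (liquefaction would be worse)
    0.95,  # trucking, storage, boil-off losses
    0.60,  # fuel cell back to electricity
)
battery = chain(
    0.95,  # grid transmission and charger losses
    0.90,  # battery round-trip efficiency
)

print(f"hydrogen path: {hydrogen:.2f}, battery path: {battery:.2f}")
```

Under these assumptions the hydrogen path delivers roughly 41% of the input energy to the motor, versus about 85% for direct charging, which is the thermodynamic argument in a nutshell.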


If hydrogen fuel cells lasted longer than batteries (in lifespan, not per charge) or required fewer rare metals to create, I could see a giant win. EVs are pretty wasteful if you consider the entire car lifespan and not just what the end consumer fills up with, to the point that I feel the ecologically superior action is just to drive an old ICE.


> required fewer rare metals to create

Fuel cells require expensive and exotic catalysts, like platinum. So they don't even win there.

Hydrogen might have aviation uses but I'm just as skeptical we'll get over the safety issues.

> EVs are pretty wasteful if you consider the entire car lifespan

Only until the required recycling infrastructure is in place. We can recycle around 96% of the material used in batteries. There's much less equipment needed for an EV compared to an ICE. There's also a push to use less of scarce minerals like cobalt. Any advances in battery tech (like solid state batteries) can be easily incorporated, as electric motors don't care where the electricity comes from.

EVs don't have spark plugs(consumable), oil pumps and filters, alternators(the drive motors work as alternators for regen braking), air filters, crankshafts, valves, pistons, exhaust(and catalytic converters with rare metals), starter motor... the list goes on. Even the brake pads last longer. It's at a minimum a battery bank, an electric motor, and some electronics to manage power. An order of magnitude less moving parts (that have to be manufactured, wear out, and have to be replaced).

> the ecologically superior action is just to drive an old ICE.

An old but relatively recent, fuel-injected ICE with the latest emissions standards? Probably much better than getting a new one built, yes.

A really old one with a busted engine (or one before we had catalytic converters in the exhaust)? Then I do not know. Might be better to scrap and get a somewhat less old vehicle.

Honestly, the ecological win here would be to convert the existing fleet into EVs, rather than creating entirely new cars. There are some hobbyists doing that successfully. It's not even difficult - the difficult part is adapting the parts to fit the specific vehicle. The economics may be difficult to justify, however.

At some point, when we have a sizable EV fleet, getting recycled batteries into them would be the way to go. They will probably last longer than our current ICE vehicles.


You speak as if lead-acid batteries, the most recycle-friendly battery, had anything in common with EV batteries, besides being batteries.

If a lead-acid battery is bread in a bag, a lithium battery is a Taco Bell combo: much more energy, much more waste material.

Reduce comes before Reuse and Recycle


I agree, H+ is a dumb approach.


And H2 isn't much better.


Hydrogen as a fuel is not inherently tied to fossil fuels. Look it up. Edit: 95% of hydrogen is currently produced from hydrocarbons, BUT electrolysis exists:

https://en.m.wikipedia.org/wiki/Electrolysis_of_water

Hydrogen as a fuel has been tested in buses. Works as a proof of concept.

This was about hyundai marketing hydrogen as a fuel:

https://www.economist.com/science-and-technology/2020/07/04/...

In the end, it will depend on the energy conversion efficiency that can be achieved in the future.


Can't renewables produce hydrogen?


They can. But overwhelmingly, they do not. It's much cheaper to extract hydrogen from fossil fuels and that's not likely to change any time soon.


All major car companies manufacture EVs. Off the top of my head, I've seen electric BMWs, Toyotas, and Hondas out on the road recently.


Hydrogen production should be coupled to renewable energy production, the way people propose pairing bitcoin mining with any energy plant.


Meanwhile, TSLA is up 52% over the past 30 days, and 200% over the last year.


Does that magically make it safe? I don’t get the connection.


My charitable read of the GP comment is that they are pointing out how ludicrous the stock increase is, but that's my reading. Side note, how much attention does your username usually garner you?


None typically. I don’t think anyone has made the connection, publicly anyways, since I’ve started using it.


Chenquieh


I think the point is that Tesla investors and customers don’t care about safety, either for the passengers or as an externality.


I think he's referring to the naive idea that stock prices are somehow influenced by what a company is doing, rather than being pure speculation.


That's horrifying. Every phone I've owned in the last 5 years or so notifies me when my camera is smudgy... it's not exactly cutting edge tech. Does this mean Tesla isn't doing this?


Tesla does do this. But there are likely some cases that are blurred enough to be a problem but not quite enough to trigger the detection.

FWIW, I've seen a Youtube video of a non-Tesla with radar cruise have a similar issue. The radar was caked with snow and it was trying to run over the car in front of it. Does it not have detection for that scenario? Of course it does, but this one somehow managed to not trigger the radar blocked warning.

These systems are not expected to be perfect at the moment.


There are five forward facing cameras in a Model 3. So you're not being told the full picture here.


https://imgur.com/a/KDkjbuF https://imgur.com/a/vEbpzPo

See for yourself.

All the forward facing cameras reside behind the rear view mirror. While the technician had removed the rear view mirror, I took a short video of the windshield blurring. Notice that through the haze it’s almost impossible to read the house numbers, about 9” tall, approximately 15’ away.

And no, the car wasn’t shipped like this. I have plenty of footage from a year earlier, where the front facing cameras are crystal clear. Something happened in the last year.


As someone keenly interested in such topics, would you permit using these pictures, and if so, how would you want to be credited? (the background is https://TorchDrift.org/ , and the image reminded me of the right hand side of our poster here https://assets.pytorch.org/pted2021/posters/I3.png)


There are three cameras in the front center of the windscreen and two more forward facing cameras on the sides of the car. Again, for a total of five forward facing cameras.

I see your picture of the on-screen rendering of an image from just one of these five cameras.

And your second picture (video) which appears to show things that have been disassembled or partly uncovered… hard to draw any conclusions from that especially about the side forward facing cameras which are situated not where your picture shows, but just behind the front doors. Also can’t draw many conclusions about the windscreen cameras, not having been there myself to see what was done in the disassembly. And it’s a picture of the interior of the car, back side of the camera assembly? Not sure what it is supposed to be adding here.


I’m going to use the terminology from this image: https://i.shgcdn.com/739286e6-3f49-463f-ae61-cbc6c99d5270/-/...

And then I’m going to reference some part numbers from the Tesla parts manual you can find online.

The forward looking side cameras are embedded in the b-pillars, and are unable to image directly in front of the car. I checked my car, and cannot see the lens of these cameras when a few degrees from straight forward. The image above shows a field of view which approximately matches what I could assess.

The only cameras which can see directly in front are the main forward camera, the narrow forward camera, and the wide forward camera. All three cameras are behind the rear view mirror, in the 1143746-00-C module.

When viewing the in-car recording, it’s not clear as to which of the three is being displayed. It’s possible the other two weren’t impaired, but I find that unlikely.

These three cameras are enclosed in a small area behind the rear view mirror, bounded by the windshield in front, and the triple cam glare shield around them. 1086333-00-E <-- Triple Cam glare shield.

The entire inside of the windshield enclosed by the glare shield was coated in a haze. When I touched the surface, it was sticky. The Tesla technician (servicing Phoenix) indicated that he had never seen this failure. I sincerely believe he hadn’t.

The video I had posted previously was showing the windshield after the rear view mirror and camera module were removed by the Tesla technician. As I panned left to right, you can see that the house numbers are obscured through the haze. If I recall correctly, the haze was roughly uniform inside that glare shield. As all three forward facing cameras look through that same haze, I expect all three were significantly impacted. I did not have a visual acuity test sheet to validate my subjective assessment that it was hazy, and have to go by the inability to see ~9” tall house numbers from 15’ away. I haven’t done the minute of angle calculation, but I expect that level of acuity would be sufficient for an individual to be considered legally blind.
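The minute-of-angle calculation is a quick back-of-envelope (assuming the 9-inch numerals at 15 feet, and the usual Snellen optotype reference points):

```python
import math

# Angular height of a 9-inch house number viewed from 15 feet.
height_in = 9.0
distance_in = 15.0 * 12.0
angle_arcmin = math.degrees(math.atan(height_in / distance_in)) * 60.0

# Reference points: a 20/20 eye reads optotypes about 5 arcmin tall;
# 20/200 (a common legal-blindness threshold) corresponds to ~50 arcmin.
print(round(angle_arcmin))  # ~172 arcmin
```

Failing to resolve a ~172-arcminute target, which is several times larger than a 20/200 optotype, would indeed be consistent with acuity far below the legal-blindness line, supporting the author's hunch.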

What’s even more frustrating is that when the technician was disassembling the rear view mirror assembly, he discovered that there were 2 missing screws. When he went to remove that triple cam glare shield, it fractured into many pieces, almost turning into dust.

How did it get that hot in there? We keep a reflective windshield screen in the windshield. The rear view mirror and cameras would have been sandwiched in the small space between the screen and the windshield. Coupled with the highly optically absorbent glare shield, and exactly aligned solar exposure, and high external temperatures… there must be some strange combination that caused this failure.

My biggest gripe is that Tesla may not be forced to recall, and when this happens again, I’m going to have to pay for the repair out of pocket. I’ve already purchased a replacement glare shield off eBay in case I have to replace it in the future. The glare shield appears to have some optical absorbent flocking, likely attached with glue. It’s this glue that I expect couldn’t withstand the heat.

The technician cleaned the windshield, the camera lenses, and reassembled (the new glare shield had to be ordered and was replaced on a subsequent service call). The problem hasn’t recurred in the following 4 months, even though the parking orientation, windshield screen use, etc hasn’t changed. Luckily Phoenix does cool down.

Btw, I think the three forward camera design behind the windshield is quite innovative. The cameras are looking out through an area that is swept by the windshield wipers, helping to maintain visibility. The cameras are separated by sufficient distance that a single bug shouldn’t obscure all three. The only common environmental failure mode appears to be this weird issue on the inside of the windshield. Hard to fault the engineers designing this. I’d be extremely proud of this design, if I designed it. I happen to have hit a corner case.

My wish is that Tesla enable some sort of software to detect this failure, and offer free repair for this failure for the life of the vehicle - regardless of how often this recurs.

In typical Tesla fashion, this could be a simple over-the-air software update, but it will likely be followed by months of arguing with the service department when the fault is detected and the repair actually needs to happen.

Without this software detection, I'm afraid that this failure will go undetected by many other Tesla owners, who will incorrectly put their trust in Autopilot running on cameras with this failure mode.


Awesome comment, thank you!


> was shocked to find that my camera was completely blurry

They maybe don't need to figure out how it got blurry.

But their testing regime is sad if Autopilot continues to operate when the camera can't see!


It sounds like the fix here is a built-in-test for the camera capable of detecting this sort of obscuration / degraded acuity.
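A minimal sketch of how such a built-in test could work (illustrative only, not anything Tesla actually ships): a classic blur check compares the variance of a Laplacian-filtered frame against a calibrated threshold, since a hazed lens dramatically suppresses high-frequency detail.

```python
import numpy as np

def sharpness_score(gray):
    # Variance of the Laplacian: high for sharp frames, near zero for blurry ones
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def box_blur(gray, k=5):
    # Crude box blur standing in for a hazed windshield/lens
    h, w = gray.shape
    pad = np.pad(gray, k // 2, mode="edge")
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

rng = np.random.default_rng(0)
sharp = rng.random((120, 160))   # stand-in for a frame full of fine detail
hazed = box_blur(sharp)

# A hazed frame scores far lower, so a simple threshold flags the failure
print(sharpness_score(sharp), sharpness_score(hazed))
```

In practice you'd calibrate the threshold per camera and average over many frames, since a single dark or featureless scene also scores low.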


There are several cameras for redundancy. The car also doesn't need crystal-clear vision to drive. What may seem like a blurry image to the human eye can become much more usable with simple filtering. It also doesn't need to see details; rather, it needs to detect whether there are any objects in the way and where the lane markings are.



I've also filed an NHTSA report, now some 12 months old, which has been greeted with ... crickets. Something very basic, which any car maker should be able to do, is write a manual that teaches people how to use the (poorly written) software.

The manual explains that you can do a 3-step invocation of a voice command: 1) push the 'voice' button; 2) utter the command; 3) push the 'voice' button again to complete. It hasn't worked that way since 2019. See, e.g., software version 2021.32 [1], p. 161.

Good luck teaching people how to use 'Full Self Driving' with these kinds of error checks.

[1] https://www.tesla.com/sites/default/files/model_s_owners_man...


Tesla is making a virtue out of cutting costs, using cheaper parts, and running a public beta.

I guess one real test will be what the resale value of Teslas ends up being. Because the company has been growing so fast, most Teslas are very new. What will they be like at 10 years? 15? Will they stay in shape, or will they drag the company valuation down? Also, right now they have the cool-car premium, but that's bound to be shared once others catch up.

The Ford Model T was a similar bet, I guess: attention to detail must have been lower with mass production, and it clearly paid off. Hard to know this time.


There are five forward facing cameras on a Model 3. Was it all five of them that were blurry? How could you tell? Or just the one selected as the frontmost one on the dash cam video?


Just the one that's visible on the dashcam video, but in the enclosure where all the forward-facing cameras reside (I'm not aware of any other place), the window was covered in a haze. Do you know of any way to view the other camera feeds? Are there any other forward-facing cameras that aren't behind the rear view mirror?

https://imgur.com/a/KDkjbuF https://imgur.com/a/vEbpzPo


My $300 OPPO phone alerts me when there's dust in front of the lens... Maybe Tesla could benefit from some similar technology.


FWIW, Oppo sells more phones per month than Tesla's lifetime total of cars.


It's also a problem that the camera could have degraded vision that's inadequate for self driving yet report no problem at all and continue driving as if everything's fine.

I'm pretty sure your situation isn't the only way the vision can be obstructed; liquids, or even bird poop, can also degrade vision.


This is what I would expect as well. Some kind of visual self-test, and if/when it fails, the driver gets an obvious warning and autopilot does not engage - you get some kind of clearly visible warning so that you can contact Tesla to investigate / repair the obstruction.


This does happen, but maybe not in all cases. I've had it happen where lane change is no longer supported (and you get a notification) because the sun is blinding one of the pillar cameras.


I drive a Tesla. If my car rear ends another car, that's my fault. Doesn't matter if I had Autopilot on or not.

Imagine if you were rear ended by a Toyota and the driver said "it was on cruise control, shrug." Would you talk to NTSB or NHTSA about that? Probably not.

The only scenario I can imagine that'd warrant investigation is if the driver were completely unable to override Autopilot. Similar investigations were started by NHTSA when Toyota had unintended-acceleration issues several years ago.


> I drive a Tesla. If my car rear ends another car, that's my fault. Doesn't matter if I had Autopilot on or not.

You can have a similar argument for: if I'm drunk while driving that's my choice; if I don't cause any accidents, it should be none of anybody's business that I'm drunk.

You see, it doesn't work like that.

If Autopilot poses a significant risk to other road users, then an investigation is warranted.


> If Autopilot poses a significant risk to other road users, then an investigation is warranted.

Except there's no evidence that it does. I drive a Tesla. I've seen AP brake hard in circumstances where a distracted driver would have failed to notice. The system works. It does not work perfectly. But all it has to do to make that "if" clause false is work better than drivers do, and that's a very low bar.

Tesla themselves publish data on this, and cars get in fewer accidents with AP engaged. (So then the unskewers jump in and argue about how that's an apples-to-oranges comparison, and we all end up in circles of nuance and no one changes their opinion. But it doesn't matter what opinions are, because facts are all that matter here, and what facts exist say AP is safer than driving.)


At the risk of beating a dead horse, the issue is that AP engaged/disengaged isn't a random test, it's a user choice. Presumably people leave AP on when the outside conditions are inherently safer per mile (including, for example, on a highway). If Tesla randomly made AP unavailable some days to some drivers and ran a controlled test, that would be compelling.
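The selection effect is easy to demonstrate with a toy model (all numbers made up): assume AP has zero effect on safety, but drivers engage it mostly on highways, which are safer per mile. The aggregate per-mile rates then make AP look several times safer anyway.

```python
# Made-up crash rates per mile; by construction, AP changes nothing
CRASH_RATE = {"highway": 1e-6, "city": 1e-5}

def split(miles, road, ap_share):
    # Returns (miles, crashes) for AP-engaged and manual driving on one road type
    rate = CRASH_RATE[road]
    ap = miles * ap_share
    return (ap, ap * rate), (miles - ap, (miles - ap) * rate)

# Drivers use AP for 80% of highway miles but only 10% of city miles
hw_ap, hw_man = split(1e9, "highway", 0.80)
ct_ap, ct_man = split(1e9, "city", 0.10)

ap_rate = (hw_ap[1] + ct_ap[1]) / (hw_ap[0] + ct_ap[0])
manual_rate = (hw_man[1] + ct_man[1]) / (hw_man[0] + ct_man[0])

print(manual_rate / ap_rate)  # AP looks roughly 4x safer despite having no effect
```

This is the standard confounding argument; it doesn't prove AP is unsafe, only that the raw engaged/disengaged comparison can't settle the question either way.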


You are absolutely beating a dead horse, sorry. This argument gets made repeatedly every time that report gets mentioned and... frankly I'm sick of going through it every time. Nitpicking against a result you don't like without presenting contrary data is just useless pontification. It proves nothing, and has no value but supporting your own priors vs. inconvenient evidence. Find some data.

I mean... there are two million of these cars on the street now, and all but a tiny handful of early Roadsters and S's have full featured autopilot. If there was a significant safety signal in all that data somewhere it would be very clear against the noise background now, and it isn't.


My point wasn't that I had contradicting data. My point was that the data Tesla publishes isn't valuable. It's a different statement. The premise could be right or wrong.


> My point was that the data Tesla publishes isn't valuable.

That's not true at all, and you know it. Consider the extremes: if autopilot got in zero accidents over these however many million miles of travel, you'd have to admit it was safe to use, right? If it got in 10x as many accidents as the median vehicle, we'd all agree it was a deathtrap. There's value in those numbers. Well, the truth is somewhere in between, it gets in 4x fewer accidents (I think) than the median car. That still sounds good to me.

You're just nitpicking. You're taking a genuine truth (that the variables aren't independent because "miles travelled with AP" aren't randomly sampled) and trying to use it to throw out the whole data set. Probably because you don't like what it says.

But you can't do that. The antidote to incomplete or confounded analysis is better analysis. And no one has that. And I argue that if there was a signal there that says what you want ("Tesla AP is unsafe") that it would be visible given the number of cars on the road. And it's not.

Stop nitpicking and get data, basically. The time for nitpicking is when you don't have data.


Yes, a very strong signal (every time AP is turned on the Tesla explodes, or every time AP is turned on the Tesla drives perfectly from the parking spot at the origin to the parking spot at the destination, with no crash in 10 billion miles) can overcome the noise in Tesla's published data. However, any signal there is not that strong; it is so weak it is drowned in noise.

Meanwhile, I do have this one datum about how OTA updates caused a couple of accidents (see OP's story.) You cannot just dismiss it because it doesn't agree with a larger trend, if that larger trend is undetectable in the data.


> However, any signal there is not that strong and is so weak it is drowned in noise.

It's a 9x reduction relative to median vehicle and 4x vs. non-AP Teslas! I mean, go read it: https://www.tesla.com/VehicleSafetyReport

You're just making stuff up. That's not a weak signal at all, and you know it. The argument you should be making is that it's a confounded signal. But that requires a level of sophistication that you aren't supplying (nor is anyone else). Which is exactly why I mentioned way upthread that these digressions were so tiresome.

There's literally no data in the world that will satisfy you, just admit it. Yours is an argument from priors only, to the extent that one anecdote ("A Tesla hit me!") is enough to outweigh a report like that.


> Tesla themselves publish data on this, and cars get in fewer accidents with AP engaged

This is false, or at very best misleading, considering that Tesla's Autopilot is responsible for more than 10 fatalities. Or in other words, Tesla's AP has killed more people than the entire rest of the domestic auto industry's smart cruise control features. Combined. (And Tesla's "safety" data conspicuously fails to mention any of these fatalities.)

And that doesn't even include all the times that AP ran into stationary emergency vehicles, or the many slow-moving accidents in parking lots, or the many near-misses that were avoided because the other driver was able to avoid the suicidal Tesla in time.


Come on, the "entire rest of the domestic auto industry's smart cruise control features. Combined." constitutes what, 20k vehicles or something like that? It's just in the last few months that we've seen lanekeeping features start to land in volume models, and even then they tend to be extremely heavily geofenced. It's very hard to kill someone when no one has your product and you won't let them use it.

Who's being misleading here?


The entire rest of the domestic automobile industry sold millions of cars with smart cruise control features similar to Autopilot over the past few years, so the misleading one would be...you?

Unless you're now claiming that FSD is just a version of smart cruise control?


"Similar to autopilot" meaning what? Are you trying to say radar speed modulating is the same kind of system? You've completely lost me.

But if you mean vanilla cruise control, people die using that every day.


No, almost nobody dies every day using vanilla cruise control. It is in fact rare enough that when it happens, it gets reported in the news.

The fact that Tesla fans can't grasp the concept that Tesla is (a) not perfect and (b) deliberately endangering people's lives for profit is unfortunately not shocking any more, but it doesn't mean the rest of us have to put up with that schadenfreude.


>> If Autopilot poses a significant risk to other road users, then an investigation is warranted.

> Except there's no evidence that it does.

It sounds like you two are talking past one another.

I don't believe op is presenting evidence that it does. Instead, they are saying that there is at least one antidote (TFA) and that someone (presumably a government agency) should investigate it.


First, a very valid reply to there being "at least one anecdote (<-- sp)" is that there is a bunch of actual data on exactly the subject at hand that shows the opposite of the stated hypothesis. You don't throw out or ignore existing data just because one guy on HN got rear-ended!

Second: I think you're being too charitable. I think it's abundantly clear that grandparent believes that there is a "significant risk to other road users", and that this single data point confirms it. And clearly it doesn't.


An NTSB investigation is a separate thing from an accident investigation for fault or insurance purposes. The NTSB does not investigate a drunk driver, that does not mean the drunk driver would be free of fault or charges.


A more apt comparison would be if OP was hit by a drunk driver, and there were no laws against drunk driving. It would be appropriate to ask the government to investigate whether or not drunk driving is safe, because it could happen to many more people.


Except that both Tesla and the law make it clear that in a Level 2 system the driver is responsible at all times, regardless of whether Autopilot is engaged or not.


To stretch the metaphor further, I don't think it's fair to say Tesla makes it clear. Or rather, that's not the whole story.

People would be less upset with the drunk driver and more with the beer manufacturer if they had been imbibing "alcohol-free beer" which actually was not alcohol free.


The car tells you "please keep your hands on the wheel". It then proceeds to nag you if you don't put enough torque on the wheel, and then brings the car to a halt in the middle of the road.

Please don't propagate myths about "Tesla doesn't make it clear that you have to be in control of the vehicle."

My AP has many times disengaged because, despite paying attention to the driving, I wasn't putting enough torque on the wheel to convince AP that I actually had my hands on it.


Have you used it? Because it's as clear as any system that I've ever used, with a clear statement on each activation.

Most other systems are far less clear, IMO.


And a drunk driver is still responsible at all times. That doesn't change the fact that we outlawed drunk driving because it made things worse.


> If Autopilot poses a significant risk to other road users, then an investigation is warranted.

You could fault their methods, but NHTSA is doing just that: waiting for a trend to emerge that would warrant such an investigation.

I'm not suggesting that Tesla's design is not the cause, but if the driver were lying in order to shift blame, then NHTSA would end up wasting their resources investigating it.


> If Autopilot poses a significant risk to other road users

It does not, if you use it as Tesla tells you to: paying attention to the road at all times and with your hands on the wheel. If you don't do that, Autopilot is probably still better than distracted driving. Over the years I would sometimes catch myself drifting off in my thoughts while driving, to the point where I wasn't even sure how I had driven the car for the past few seconds. I bet this happens to others too, and often. Between that and even semi-autonomous Autopilot, Autopilot seems like the safer choice, and it'll only get better from here, though I don't think we're anywhere near real "FSD" where you'd be able to read a newspaper as the car drives itself.


No, you're right, none of what you're talking about works like that.

There was a traffic accident. There are a variety of "automated" driving features in modern cars: Auto-park, cruise control, auto-pilot. Any one of these features could, under "less than ideal" circumstances, cause an accident that doesn't warrant contacting national authorities. Well before any regulatory body is going to do anything, private insurance would. They're going to investigate in an effort to determine fault and facts. Was autopilot to blame, or did the user spill a cup of coffee in the passenger's seat that caused them to take their eyes off the road, etc.

The idea that a national regulatory body is going to start looking into a single car crash seems great until you start thinking about the expense that would create for the taxpayers. Bureaucracy just isn't built to be agile, for better or worse.

Similarly, if you drive drunk and don't cause an accident, nobody will know. This doesn't make it legal, and nobody is trying to argue a point even tangentially similar to that. This is a straw man, and a rather obvious one. There is no national group that would ever investigate a drunk driving crash (assuming that was the only relevant information in the case). That's a local law enforcement issue.

TL;dr- The feds don't care about a sample size of 1 fender bender with no fatalities.


The class of problem called driving while incapacitated was studied and special regulatory attention was applied to it.

An automated driving system that offers to keep functioning while it is incapacitated, without the operator even being informed that there is any problem, is a different problem from an operator who neglected to press the brake pedal. The brake pedal itself, and the rest of the braking system, is absolutely required to meet a whole bunch of fitness standards.


I think that outside of a courtroom, it's possible to recognize "layers" of responsibility, for lack of a better term. I can think of a couple of scenarios:

1. In an industrial plant, equipment needs to be fitted with guards and other protective measures, even if it's arguable that sticking your hand into an exposed rotating mechanism, or grabbing a live electrical circuit, is your fault. If someone's hurt, and there's an investigation, the factory will be cited for not having the proper guards. From what I've read, this is one of the things that we now take for granted, but was hard won through labor activism resulting in a dramatic reduction in workplace deaths.

2. The European approach, where they assign "fault" for a crash, but at the same time investigate how the infrastructure can be redesigned to make the crash less likely.

Yes I would hope that a regulatory body investigates the OP's crash. At the same time, I've read some comments in HN that Tesla drivers hover their foot over the accelerator in case of "ghost" braking. Naturally this is anecdotal. Still, the driver could have mistakenly stepped on the wrong pedal.


VERY generally, in the US, the jury assigns a percentage of fault to each party. E.g., Defendant 1 was 40% at fault, Defendant 2 51%, Defendant 3 5%, and the Plaintiff 4%.

Plaintiff can collect 40% of his damages from D1, etc. Some states will allow Plaintiff to collect 96% from D2, and give D2 the right to collect the other defendants' proportionate share from them.
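In code, with the hypothetical percentages above (numbers purely illustrative; the joint-and-several rules vary by state):

```python
damages = 100_000
fault_pct = {"D1": 40, "D2": 51, "D3": 5, "plaintiff": 4}  # must sum to 100

# Several liability: each defendant owes only their own proportionate share
owed = {p: damages * pct // 100 for p, pct in fault_pct.items() if p != "plaintiff"}

# Joint-and-several states: plaintiff may recover everything but their own
# share from one defendant (here D2), who then seeks contribution from the rest
max_from_d2 = damages * (100 - fault_pct["plaintiff"]) // 100

print(owed["D1"], max_from_d2)  # 40000 96000
```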


The big black box makes a mistake and you voluntarily accept the blame? I really don't get this train of thought. I know that this is legally correct, but at some point in time along the technological advancement of self driving technology we'll have to stop blaming ourselves and legally shift the blame to whoever created the self driving tech instead.

Edit: Updated wording to make it less confusing. From system to whoever created the self driving tech.


Because as the driver you're responsible for the tool you're using to drive yourself to your destination. That means you're responsible for whether or not you use Autopilot, do 200 km/h, take your hands off the steering wheel, turn your lights on at night, and everything else associated with the operation of that vehicle.

> ... at some point in time along the technological advancement of self driving technology we'll have to stop blaming ourselves.

No we don't. We built it, so we're responsible for it and the consequences associated with its operation, regardless of how well it's engineered or how smart we think we are.


Yes, and no. With other dangerous tools, society decided manufacturers have responsibilities, too.

If you buy a chainsaw, you can trust it has certain safety features. If you buy a children’s toy, you can trust it doesn’t use lead paint, etc.

Those aren’t features you, as a consumer, have to explicitly test for before using them.

Similarly, I think society will demand that cars behave as cars, and not as projectiles. If a manufacturer claims a car has auto-pilot, but that auto-pilot has a tendency to rear end other cars, the manufacturer should carry at least some blame.

I think that should be true even if they explicitly mention that problem in the instruction manual. Certainly in the EU, whatever you buy, you can assume it is “fit for its purpose”.


The problem here is that Tesla does not claim the car is autonomous; in fact, they clearly state otherwise. Nor is it legal for any driver to operate the car as if it were autonomous.

To continue your analogy: chainsaw manufacturers include clear instructions and directions to use safety PPE. If an operator of the chainsaw fails to follow the instructions, or to wear the PPE, and chops their leg off, the manufacturer is not liable. Hell, even if they did follow the instructions and chopped their leg off, it would be unlikely the manufacturer would be liable.


Let's take the analogy one step further. Imagine we have an Autopilot chainsaw; you just have to touch it every once in a while to tell it to keep sawing. Then suddenly it starts sawing in a completely unexpected way and causes an accident. Are you at fault because you have to, in theory, keep in control at all times? Even though you in practice relinquish control, and humans can't context-switch without a delay? The issue would not have occurred if the chainsaw hadn't started behaving in an unexpected way, but it also would not have occurred if you hadn't used the Autopilot function.


>> Are you at fault because you have to, in theory, keep in control at all times?

Yes...

We already have examples of this with machinery that is automated. The humans in charge of these automated machines are responsible for monitoring them, and have things like emergency shutdowns and various other safety protocols to shut them down when (not if) they do unexpected things.

Machines are machines; they fuck up. The manufacturers are not held liable because they know these types of abnormal conditions exist, which is why they have the safety protocols and human minders.

An example of this is an auto manufacturer, I believe in TN or TX, that did not properly train the operators on the safe way to operate and lock out a machine; this failure to train led to human injury. The manufacturer of the robot was not liable; the company that failed to train the human staff was.


Stick to my example please, your example is not the same type of situation.


There is no need to stick to your strawmen when we have real world actual examples. If you demand others only address your strawmen then you have recognized the weakness of your argument


I merely changed the car to a chainsaw; it's an analogous situation. There is no strawman: a strawman is a fallacy where the new argument is easier to attack, and this is the same argument.

Your example, on the other hand, is not relevant to the situation at hand; it misconstrues the problem. The machine required trained operators; driving a car requires a trained operator too, but that is where the similarities stop. You don't need any additional training to operate Autopilot. If someone without a license sits behind the steering wheel, they're at fault because they're not supposed to drive the car in the first place; that would be an analogous situation. Yours, as pointed out, is not.


He answered your question with a yes. If you buy and operate some chainsaw with an assistance technology and it automatically cuts down your neighbor's tree, then you are responsible. Your neighbor doesn't sue the chainsaw manufacturer; they sue you. You can sue the manufacturer all you want too, but the court won't let you tell your neighbor it's the chainsaw manufacturer's fault and you are innocent.


I know what he replied with: a yes and an irrelevant example, so that leaves just the yes. What I'm after is a stronger, relevant argument.

Let us not forget my original argument, where I said that this is the current legal situation; I just think this will change in the future if autonomous actions (driving or other behaviors) keep improving. I believe we have reached a point where it's not necessarily clear anymore who is in the wrong. Not in a legal sense (the law tends to lag behind the facts), but in a moral sense.


https://en.wikipedia.org/wiki/Product_liability:

“Product liability is the area of law in which manufacturers, distributors, suppliers, retailers, and others who make products available to the public are held responsible for the injuries those products cause.”

A specific example is https://en.wikipedia.org/wiki/Escola_v._Coca-Cola_Bottling_C.... : somebody uses a soda bottle as intended, gets hurt, and wins a case against the manufacturer of the bottle.

If you don’t follow instructions, that still can happen, but it will be harder to argue that a problem originated with the manufacturer.


> The problem here is Tesla does not make the claim that the car is autonomous

Tesla's own marketing disagrees with this. Here's what Tesla says they mean when they say "Autopilot" and "Full Self-Driving":

> The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.

[1] https://www.tesla.com/videos/autopilot-self-driving-hardware...


That is a marketing video from 2016, which many, including the NHTSA, criticized Tesla over. Tesla has since updated documentation, driver prompts, and other marketing to reflect that the system is not autonomous; in fact, if you look at the product pages, they clearly say "full self-driving capabilities in the future—". This is also reflected by prompts to the driver, warnings to the driver, and the requirement that the driver provide feedback to the car that they are in control...


> That is marketing video from 2016, which many including the NHTSA criticized Tesla over, and which Tesla has updated documentation

Judging from people sleeping while driving Teslas, I can speculate that some drivers interpret the contradiction between marketing and documentation as "the laws forced Tesla to add these warnings to the documentation; Elon said Autopilot is OK."


But the Autopilot makes you nudge the wheel every 30 seconds. If people are really sleeping in their Teslas, they have added a device to defeat this attention-tracking mechanism.


This makes my point: those sleeping drivers had to choose between

1. The marketing message: "Autopilot works, and stupid politics forces us to release it with tons of boilerplate warnings and having to pay attention"

2. The actual disclaimers the car will show you

These drivers chose to believe 1. It is obvious that the marketing is undermining the reality, so IMO Elon and his marketing should be criminally responsible for undermining their own official message.


Yeah, and honestly the chain of responsibility is driver first, then Tesla. What's important is that the victim gets their payout. It's the driver's responsibility to provide that payout / get dinged on insurance. If the driver then wants to complain to Tesla for damages, that's totally fair. But the victim's compensation should come first from the driver. The driver can't say "but muh autopilot" and release all blame.


The OP was very clearly not looking at a payout, but at preventing future incidents.


Incorrect. You are responsible for using the tool responsibly and according to any applicable directions and best practices.


> We built it, so we're responsible for it and the consequences associated with its operation, regardless of how well it's engineered or how smart we think we are.

That's my point. The company that built the self-driving technology should be held responsible. If you're not driving, you're not responsible; that's how it works when somebody else is driving your car, so why would it be any different if that somebody else is code? It seems like a good way to align the incentives to create a safe system. You could claim that at this point in time you still have to pay attention and take control when necessary; the issue I have with that argument is that we already have research showing that the context switch results in a delay too long to be useful in practice.


I didn't build it, Tesla did, and I personally think the company that signed off on touchscreen-controlled windshield wipers is the one with the most liability in Autopilot failures.



The question is: is the driver responsible? Should it be the owner? Should it be the company who made the failing product?

If your toaster blows up and kills someone, there's a solid case against the maker of the toaster.


I agree, but if you bought a tool called "autopilot" and it piloted you into another car, there is something wrong, no? Maybe not the NHTSA, but it seems like someone should be tallying that.


Has Tesla ever said anything like: the “auto” in this case means “automobile” and not “automatic”?

But yeah, they shouldn’t call it that.


Yes, and that point is most likely SAE level 5, as shown here (https://www.nhtsa.gov/technology-innovation/automated-vehicl...). We are some way away from that - just how far depends on whose hype you listen to.

I drive a Tesla Model 3, but I have sufficient grey matter to understand that AutoPilot, in spite of its name, is SAE level 2 (see above reference). If the car is involved in an impact either something hit me, or I hit something - no blame attaches to the computer, because I’m still in charge. Given the current state of the art, I’m perfectly happy with that and bloody livid at anybody who is dumb enough to buy/lease and drive one of these amazing machines without being constantly and totally aware of their limitations.


> Yes, and that point is most likely SAE level 5

Hard disagree.

This is the same as identity theft, where it becomes your responsibility instead of a failure on the part of the bank to protect your account, and the burden gets shifted entirely onto the consumer.

Relevant Michell and Webb: https://www.youtube.com/watch?v=CS9ptA3Ya9E

If Tesla sells a SAE Level 2 lane following / accident avoidance feature which should be able to stop and avoid rear end collisions, yet it causes rear end collisions, they must be liable for that failure. They can't just write words and shift all liability onto the consumer for the software failing to work properly. Nice try though.

And I don't care if the consumer technically agreed to that. If someone smacks into me with Autopilot enabled, I never consented to that technology and didn't give up my rights to sue Tesla.


If someone hits your car while using the car features is exactly why you can sue both the driver and Tesla. You’re very likely to in against the driver, the other isn’t really tested in court. Your mileage may vary.

Also, how is the bank supposed to protect your account if you give your login information away to a criminal? There is a non-negligible number of cases where consumers compromise their accounts themselves. The responsibility lies on both. The burden often isn't placed on the consumer either; merchants are often the ones that pay the highest price.


I saw an old dude apparently asleep behind the wheel of a Tesla on the freeway. I immediately feared for my family members' lives, and I had to speed to gain distance from this fellow.


When you turn on Autopilot, it tells you to keep your hands on the wheel, and if it senses you are distracted it will turn off if you don't wiggle the wheel a little to show you are paying attention. So at least as far as Autopilot is concerned, I would say the driver is at fault if they run into another car.


I think there is a good argument that now is a good time for a liability shift for certain types of accidents with TACC. TACC should never be rear ending people.


It’s a driver aid, not a driver replacement. I’m responsible for where my airplane autopilot takes the airplane (in large part because I’ll be the first one to arrive at the accident scene).

Why shouldn’t the driver be responsible for accidents in the car they’re driving at the current level of driver assist technology?


Yeah, I know, but I'm starting to think we've reached a point where this shouldn't be an excuse for straight-up rear-end collisions. They should be forced to make the black box available and accept liability if the driver didn't take over.

There's no reason that TACC shouldn't be able to handle the basic job of not running into the car in front of it in the vast majority of circumstances.


a bad actor can easily create an accident of this description, no?


How? Hard braking shouldn't do it. Cutting in front potentially could, but that'd be a different problem.


This is so tremendously NOT a case of personal responsibility. A large portion of our economy relies on the principle: "you said it would do x, it must do x".

You bought a car, it promised functionality, it did not deliver and endangered another human/their property.

This is the fault of the manufacturer.

Here are some examples of non-autonomous driving cases where the manufacturer made a false promise: Volvo lawsuit: https://www.motorbiscuit.com/volvo-owners-seeking-class-acti...

Toyota recall after lawsuit https://www.industryweek.com/the-economy/article/21959153/to...

Chevy lawsuit: https://topclassactions.com/lawsuit-settlements/consumer-pro...

It is my sincere hope that we can enjoy Elon's persona without just letting Tesla off the hook of being a car company. Or really, a company.


Fortunately, in the US, responsibility usually isn't an all or nothing proposition. Tesla can be responsible for a FSD bug, the driver responsible for not paying attention, the driver in the other lane for swerving, the tire shop for not putting the tires on right, etc.


It's so odd that in stories like this people don't even ask this simple question: did you try to brake?

It's usually conveniently omitted from the story.


Based on videos and observed driving behavior, it seems the marketing and documentation for Autopilot is ineffective at communicating to drivers that Autopilot is basically advanced cruise control. If this is correct, it represents a systemic issue that should be investigated by the NTSB or NHTSA.


https://tesla.com/autopilot

In the video there it says:

"THE PERSON IN THE SEAT IS ONLY THERE FOR LEGAL REASONS.

HE IS NOT DOING ANYTHING. THE CAR IS DRIVING ITSELF."

Total lie, unless they are psychopaths who would be willing to run it with no one in the seat if not for legal reasons. In other words, unless they were willing to murder if murder were legal. The video is from 2018 or maybe even earlier, when we know they were nowhere near ready for driverless operation.


Did you consider that that text is about the video and not about the feature in their cars as sold to the public?


Do you think they could have filmed that video with no driver in the seat without a high risk of killing someone? Only "legal reasons" kept them from doing it? They have no ethics?

On top of that, it greatly misled people about how soon it would be ready and how far along they were. Buyers whom they were charging around $9,000 at the time.


The moment you use Autopilot, it's evident that it's basically a fancy cruise control. You're assuming some people would never intervene with Autopilot in any scenario. It's odd to think people with that little common sense exist; it shows a massive disregard for those people's ability to perform basic functions.


I literally saw a driver operating erratically in stop-and-go traffic. As I attempted to pass, I noticed they were reading a book and driving "hands free." They were driving a Tesla, presumably using Autopilot. I'm sure some people operate like they're a test driver, but there are also folks who don't understand the basic responsibilities of being a tester.


Not braking when the car is approaching a red traffic light is failing at a basic function. Being distracted by text messages, the phone, etc. is not.


That may be the legal reality we live in - it's true that if a driver in a Toyota rear-ended someone else, there would be (and should be) no NTSB/NHTSA safety investigation. The driver was simply at fault, they should be appropriately penalized and will hopefully not do it again.

The difference is that safety improvements to Autopilot can scale. Patch the software or upgrade the hardware to fix the root cause of this collision and future collisions can be avoided!


> I drive a Tesla. If my car rear ends another car, that's my fault. Doesn't matter if I had Autopilot on or not.

It is supposed to be super-self-driving... that has to count for something.


No it's not. I didn't spend $10k for full self driving. That's not the same feature as Autopilot.


> The driver said she was using Autopilot, that an OTA update had had a problem earlier in the day, and that she had actually had the same kind of collision on Autopilot previously!

So she continued to use it? That's insane.

It's worrying that there are tons of videos on YouTube reviewing Tesla's Autopilot and FSD where the cars do something incredibly scary and dangerous, and the drivers just laugh it off, go "well, it's a beta!" and continue using it instead of turning off the feature that almost got them into a car accident. I don't want to share the road with drivers who think dangerous driving is a funny game.


“It’s a beta” and the unsaid “and I don’t care about other peoples’ lives anyways”.

Still pretty frustrated nobody has regulated “self driving” cars off the market yet. If you’re testing it with a driver and a copilot that’s fine, but putting it in the hands of customers and using “it’s a beta” to hide behind should not be legal.


> “It’s a beta” and the unsaid “and I don’t care about other peoples’ lives anyways”.

Yes, it's a bit distressing having myself and the people I care about be treated as NPCs to some Tesla owners as they beta test their new toys on public roadways.


I don't disagree, but I can't help but wonder if people feel the same about Ford's pre-collision assist with automatic emergency braking or, back in the day, anti-lock brakes.

Sooner or later, these technologies are either public-ready or they aren't. If Tesla autopilot isn't public-ready, I agree it shouldn't be on the road, but I suddenly realize I have no idea by what criteria the system was cleared for public-access roads.


I would absolutely not expect a feature attached to my car to be beta grade, no. And if early ABS systems had issues, I'd expect them to be recalled and fixed at the manufacturer's cost.


The question is really: does your expectation match reality, or were you just uninformed of existing issues with ABS systems? Or more generally: have things always been the way we currently see them and we were just less informed in the past, or has there actually been a change?


I’m not of the right age to know, honestly. That being said, I have never seen a carmaker ship a feature to me that is self-described as a beta. Obviously mistakes happen and plenty of features fall short, but seeing a company knowingly ship a feature-incomplete safety system for real-world testing by its users, with a mere EULA agreement, seems like a step beyond the ABS hypothetical you raise.


I don't understand why there's so much significance attached to the label. I've seen a non-beta system try to run off the road in the same scenario that the beta handles just fine. It's not an isolated example.

To me, that'd just mean one company, the one with the beta label, has higher standards.


I have long said that society has become so risk averse that if someone invented a technology like the car today, it would never be allowed.

I said this pre-COVID, and COVID has now cemented the idea: we will not accept any level of risk at all. None.

If it is not perfectly safe, everything must be banned, locked down, or otherwise prohibited by law.


The topic in question, Teslas (and other autonomous vehicles) on the road, would seem to suggest the opposite, no? Errors do seem to be relatively rare, but they exist, a large portion of people are still pushing for immediate adoption, and I am not aware of the (US) government stepping in to ban their use.


Then don't drive near other drivers at all: NON-autopilot drivers eating, texting (constantly!), looking up directions, and much more (falling asleep).

https://www.youtube.com/watch?v=cs0iwz3NEC0&t=2s

The fact that you promote this absolute recklessness and demand users turn off safety features that may, net-net, save lives is depressing.

Finally, folks get very confused between features like Autopilot and others (Autosteer beta with FSD visualization, etc.). I think some Teslas now even show a message warning that cruise control will not brake in some cases? Anyone have that?

I've been following the unintended-acceleration claims around Teslas as well. Most seem pretty bogus.

For what its worth here is data we currently have from Tesla:

"In the 2nd quarter, we recorded one crash for every 4.41 million miles driven in which drivers were using Autopilot technology (Autosteer and active safety features). For drivers who were not using Autopilot technology (no Autosteer and active safety features), we recorded one crash for every 1.2 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 484,000 miles."

So banning Autopilot would mean roughly a 9x increase in crashes (1 per 484k miles vs. 1 per 4.41M). I realize this isn't one-to-one, but even after normalizing for road conditions etc., we are seeing maybe a 2x improvement.
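
Taking the quoted figures at face value (which, as the comment itself concedes, isn't a one-to-one comparison), the arithmetic works out roughly like this:

```python
# Sanity check on the quoted crash rates (taking Tesla's figures at face
# value; the comparison is not apples-to-apples, as the thread notes).
ap_miles_per_crash = 4_410_000      # Autopilot engaged (Tesla Q2 figure)
no_ap_miles_per_crash = 1_200_000   # Tesla without Autopilot engaged
us_avg_miles_per_crash = 484_000    # NHTSA fleet-wide figure

print(ap_miles_per_crash / us_avg_miles_per_crash)   # ~9.1x vs. US average
print(ap_miles_per_crash / no_ap_miles_per_crash)    # ~3.7x vs. non-AP Teslas
```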

The Nissan Maxima had something like 68 deaths per million registered vehicle years. It will be interesting to see how this comes out for Teslas.


That data is not comparable: the Autopilot in question can only be activated in simple situations, like a straight highway with no lane changes, while the human data covers all scenarios.

Tesla is not being transparent enough with their data for us to know anything.


No one has shown any credible data that the fatality rate of driving a Tesla is higher than anything else. Critics go out of their way to ignore other cars pretty clearly getting into lots of accidents.

My expectation would be that Subaru WRX / Infiniti Q50 / Elantra GT-type drivers are crashing more than folks using Autopilot. Maybe ban them first?

Anyway, we should get some NHTSA data on this in a few years, though there is some speculation that, given their approach toward Tesla, they may delay it or stop updating vehicle-model statistics if the data is favorable to Tesla.


Here’s some credible data that your assumptions are probably wrong:

https://medium.com/@MidwesternHedgi/teslas-driver-fatality-r...


That page is not credible at all.

IIHS reports are publicly available [1] and the numbers there don't match what's in the article at all. Large/very large luxury vehicles have an overall fatality rate of 16-21, ranging from 0 to 60 depending on model. The overall average for all vehicles is 36. The Audi A6 for example, is reported as having 0 fatalities in the article, while in the report the actual number is 16.

The other source used, tesladeaths.com, lists a ton of accidents where AP was not deemed responsible. It actually states the real number if you pay attention - 10 confirmed deaths in the lifetime of Tesla as of 2021 - yet somehow the article claims 11 deaths up to 2016.

[1] https://www.iihs.org/api/datastoredocument/status-report/pdf...


The comparison there is as bad as Elon's, in the other direction. They dig into every missing bit of data and mistake for Tesla, in a different data set than the other brands. I wonder if you could redo it now with official data.


That's one subset.

What people can credibly claim, because Tesla's stats are entirely loaded and biased, is that any claim AP is safer than humans is just that: a claim. Because AP will (usually) disengage in situations where it won't do well, something humans can't do.

The closest you could come (maybe, who knows, because Tesla hoards the raw data like Gollum, despite their "commitment to corporate transparency") is:

"In optimal and safe driving conditions, AP may be safer than human drivers, and in less optimal conditions, uh, we don't know, because it turns off. And we won't count those miles against AP."

Tesla should actually, if they have the courage of their convictions, show how many miles are driven where AP/FSD would refuse to engage.


> Tesla's stats are entirely loaded and biased

Do you know that, or do you suspect that?

> how many miles are driven where AP/FSD would refuse to engage

I've used FSD for the vast majority of the nearly 40k miles on my 2017 Tesla model S. Highways, boulevards, side streets, even dirt roads on occasion.

It's a powerful but subtle tool. Used correctly, I have absolutely no doubt that has made my driving experience safer and less stressful.

It definitely requires an engaged, alert driver.

Where I suspect you and I agree is the impact of Musk's language and claims around the technology.

If he would fully shut the hell up about it, I think it's quite likely that there would be way less ill will toward the product.


It's known. The logical fallacy of comparing "miles driven on AP" with "total possible miles" is real. Tesla's statisticians cannot possibly be unaware of it if they have anything beyond a middle school education, yet they repeatedly continue to tout this stat without so much as a footnote.

I realize that may still, to some, be "suspect", not "known", so yes, if you're asking, "Is there public or leaked internal documentation saying 'we realize these stats are misleading and don't care'", then no, there's not.


The burden of evidence is on Tesla to show their car is significantly safer in an apples-to-apples comparison. They haven't done that so far.

And this is actually quite intriguing: even if Tesla was 10x safer, it would be extremely difficult for them to prove it. Because they'd have to do a multi-year feature freeze on Autopilot etc. while collecting data.

In total, Tesla has sold around 2 million cars. Comparable cars have a fatal accident rate of around 10 per 1 million vehicle-years, so the target for Tesla, with the current number of cars, is fewer than 2 fatalities per year on average. To show with statistical significance that you have achieved that, you would need around 5 years of data without making any changes. Maybe only 3 or 4 years, given that the number of Teslas keeps increasing, but still.

Really, it's only when you're doing 10 million cars per year, like Volkswagen or Toyota, that you can make statistically meaningful statements about safety while also updating your car's software frequently.
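
The multi-year figure above can be sketched with a normal approximation to the Poisson counting error. Purely illustrative: it assumes a hypothetical true fleet rate of 1 fatality per year and asks when the approximate 95% upper confidence bound on the observed rate falls below the 2-per-year target.

```python
import math

# Hypothetical true fleet rate (an assumption for this sketch), and the
# "10x safer than comparable cars" target of 2 fatalities/year.
true_rate = 1.0
target_rate = 2.0

def upper_bound_95(years, rate=true_rate):
    """Approximate 95% upper bound on the observed fatality rate after
    `years` of data, using the normal approximation to a Poisson count."""
    expected = rate * years                      # expected number of events
    return (expected + 1.96 * math.sqrt(expected)) / years

for t in (1, 2, 4, 6):
    print(t, round(upper_bound_95(t), 2))        # bound tightens with time
```

Under these assumptions the bound only drops below 2 per year after roughly 4 years of data, which lands in the same ballpark as the comment's estimate.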


> NON-autopilot drivers eating, texting (constantly!), seeming to look up directions and much more (falling asleep).

FSD is something people are _encouraged_ to use, not _discouraged_.

This is an entirely disingenuous take.

And Tesla's stats have been, _repeatedly_, shown to be entirely disingenuous.

Human drivers don't have a "disengage" mode, other than pulling off the road, when conditions are "suboptimal". AP disengages. FSD disengages. And suboptimal conditions are, surprise, surprise, the conditions in which more incidents occur. So the whole "AP is safer because when it's safe to use AP, it's safer" circuitous reasoning, and every stat Tesla publishes while withholding as much information from the public as it can (while touting its "corporate transparency is a core value" spiel), should safely be thrown in the trash.


> I think some teslas now even show a message warning that cruise control will not brake in some cases?

In my experience, that warning about not braking occurs when you are in TACC mode but are pressing on the accelerator to go temporarily faster than the set speed.


Unfortunately some people may have gotten into the habit of hovering their foot over the accelerator to be ready to override phantom braking, so maybe some even rest their foot on the pedal without realizing.

Fortunately, removal of radar will reduce the frequency of phantom braking, so hopefully this habit will fade.


Well, she says she did. Further confusing the matter, there is the FSD Beta, which very few people have access to for now; then there is adaptive cruise (works really well!); then there is adaptive cruise with Autosteer (not so good!). So who even knows which one she was talking about.

I fault Tesla for having a mess of badly named options, and for putting beta stuff out on the roads for sure.


Yeah, the FSD Beta issues last weekend (or the weekend before?) involved false forward-collision warnings. I imagine she did not have the FSD Beta, though.


> So she continued to use it? That's insane.

Or she just lied to deflect blame for her clearly at-fault collision. I genuinely don't understand why this story is getting so much play here. It's like people read "Tesla" and completely lose their mind.

Literally thousands of these accidents occur every day. This is as routine a collision as one can imagine, and there is basically zero evidence of a system failure here at all. It's One Unreliable Anecdote. And it's generated 300+ comments and counting? Come on HN, do better.


> "So she continued to use it? That's insane."

Well, it's proving to be a great excuse when she runs into people with her car. Sounds like a win for her.


This is how I feel too. I want FSD to succeed, and I don't want it to destroy itself during the beta. But in the videos I've seen it's just as you put it: people say "whoa, that was close," then re-engage Autopilot and have 3-4 more incidents in the same session.


And then laugh nervously, and demonstrate a complete misunderstanding of ML. "We need to keep having near misses to train the car of what a near miss might be!".

And God forbid if they post a dashcam video to YouTube. Then they'll get hit with comments like this with a sincere straight face:

> "FSD didn't try to kill you. More like you're trying to get FSD killed by publishing this video."


I think the main problem here is that the driver put too much trust in Autopilot. She needs to treat it as just another driver-assistance feature and be in control of her vehicle at all times.


Was it TACC or Autopilot or FSD or driver trying to blame someone else?


> I talked with the NTSB who said that they won't investigate because no one was killed.

So they must wait until lots of people get killed by autopilot / FSD in order for your incident (and many others) to be investigated?

The scary thing is that it is beta-quality software on safety-critical systems, and not only do the drivers know it, they use it as an excuse for not paying attention to the road. Was driver monitoring even switched on at the time?

As I have said before, this software without proper driver monitoring and autopilot / FSD turned on puts the driver and many others on the road at risk.

At this point, this is more like Teslas on LSD rather than FSD judging from the bugs from many YouTube videos and social media posts of all of this.

Oh dear.


> So they must wait until lots of people get killed by autopilot / FSD in order for your incident (and many others) to be investigated?

The entire NTSB (which covers trains, planes, and automobiles) has 400 employees. They only get pulled into fatal accidents because that is all the manpower they have.


That low level of funding for such a critical function seems a clear threat to public safety.


The NHTSA has primary responsibility for highway safety at around $1B/yr in spend and 650-ish people.


Still need transportation…


She's supposed to be able to drive herself, without autopilot.


Possible scenario A: Teslas have a fundamental flaw that causes a single driver to have several accidents. Multiplied by the number of Tesla drivers who use Autopilot, there must be hundreds of thousands of unreported and uninvestigated auto-pilot accidents. Most certainly a conspiratorial cover-up by Tesla, the NTSB, and the NHTSA.

Possible scenario B: The driver of the Tesla has misrepresented or misunderstood the facts of the situation.

Possible scenario C: The driver of the Tesla doesn't know that she should take her foot off the accelerator pedal when using Autopilot.

I suppose any of these (or several other) scenarios are at least possible. I'd probably apply Hanlon's razor here before anything else.


> Most certainly a conspiratorial cover-up by Tesla, the NTSB, and the NHTSA.

You're just adding the word "conspiracy" to imply blaming the car is insane.

You skipped "Possible scenario D: It's the car's fault, and the NTSB and NHTSA didn't respond due to a lack of interest, an incompetent person handling the complaint by the OP, the OP not being able to communicate what happened well enough, or a procedure at the agencies which requires the driver to make a complaint rather than an observer or a victim."


I would assign a large probability that Autopilot wasn't even engaged and it's the driver's fault.

Accidents with Teslas (many of those that make the press) are often caused by drivers not being accustomed to the car's acceleration.

There is Chill mode, which decreases max acceleration, but I wouldn't expect a driver who manages to get into two accidents with the same car to read the manual.


Yeah, the first thing that came to mind was that one valet who claimed that it was not his fault, but the autopilot that made him accelerate heavily and crash inside a parking garage.[0]

People called out bs on it instantly, as it would make no sense for autopilot to suddenly accelerate inside a garage until the crash, but the driver defended himself to death.

Only for the formal investigation to happen and confirm that the driver did it himself by fully pressing on the acceleration pedal instead of the braking pedal without any autopilot engagement, and was just trying to shift blame. And no, it wasn't just Tesla's legal team claiming that. The guy went to a third-party service that could decode the blackbox data from the car (again, that was his own technician, not requested by Tesla or anything), and the evidence clearly showed that the driver pressed the accelerator full-on. The raw log that is relevant to the situation is included in the link as well, in case someone wants to see it with their eyes.

Almost every single article I've seen featuring bizarre accidents like this ended up being bs, and I'm getting "accidentally pushed the accelerator instead of the brake pedal" vibes here as well. I'll be eagerly awaiting the results of the investigation before making my own judgement, though. Note: I am not claiming that Autopilot is infallible; there were indeed some legitimate incidents a while ago. But this specific one in the OP? Yeah, I call bs.

It also helps that NHTSA has already investigated such claims before[1], so they looked into hundreds of cases that were claiming Autopilot suddenly accelerating and causing crashes, and they discovered that not a single one happened due to autopilot, but due to the driver misapplying acceleration and braking pedal. Direct quote from their investigation summary (emphasis mine):

>After reviewing the available data, ODI has not identified evidence that would support opening a defect investigation into SUA in the subject vehicles. In every instance in which event data was available for review by ODI, the evidence shows that SUA crashes in the complaints cited by the petitioner have been caused by pedal misapplication. There is no evidence of any fault in the accelerator pedal assemblies, motor control systems, or brake systems that has contributed to any of the cited incidents. There is no evidence of a design factor contributing to increased likelihood of pedal misapplication.

0. https://insideevs.com/news/496305/valet-crashes-tesla-data-r...

1. https://static.nhtsa.gov/odi/inv/2020/INCLA-DP20001-6158.PDF


One concern I would like investigated is that it appears Tesla's Autopilot can't detect (or can no longer detect?) stationary objects.

Until fairly recently, I believe that the RADAR, which used to be in all their cars, would have detected the stationary object (me) and applied braking force to at least reduce the impact.

Now, though, Tesla has stopped installing RADARs in their cars and apparently also disabled existing RADAR in cars that have it (because they all are using the same camera-only software).

If this has also removed the ability to detect stationary objects and the corresponding emergency braking functionality, this is a really serious problem and needs to be addressed, perhaps with a recall to require installation of RADAR in all the new cars and re-enabling of it.


> Until fairly recently, I believe that the RADAR, which used to be in all their cars, would have detected the stationary object (me) and applied braking force to at least reduce the impact.

It is the other way around. They've stopped using RADAR because RADAR has too many false reflections at speed and it was causing learning issues in the ML dataset. The vision based position detection is better (could argue about weather and visibility here, but it is irrelevant to this convo). Reliance on RADAR was causing it to hit stopped objects, because it was relying on radar data that was already too jittery and had to be ignored.

Regardless of the RADAR issue, the driver is responsible for the vehicle at all times. If someone hits you using cruise control they remain at fault.


Nearly every major manufacturer has a separate emergency stop system that continually looks for an obstacle in the vehicle's path and applies braking force, overriding the driver and any cruise control or self-driving that is in use.

These often use RADAR, have worked for years, and are routinely tested by international agencies such as the IIHS.

See, for example: https://www.youtube.com/watch?v=TJgUiZgX5rE

Teslas at least used to do this too:

https://www.iihs.org/news/detail/performance-of-pedestrian-c...

https://www.youtube.com/watch?v=aJJfn2tO5fo

If Teslas no longer have this functionality, that is a serious problem that needs to be corrected. That could mean reprogramming existing cameras or adding a sensor, if the system really did rely on the now-removed RADAR.


That's not true, and you are confusing different features. AEB and Autopilot/adaptive cruise control do not behave the same. Here are some articles I found with a Google of "radar cruise control stationary object":

https://arstechnica.com/cars/2018/06/why-emergency-braking-s...

> But while there's obviously room for improvement, the reality is that the behavior of Tesla's driver assistance technology here isn't that different from that of competing systems from other carmakers. As surprising as it might seem, most of the driver-assistance systems on the roads today are simply not designed to prevent a crash in this kind of situation.

> Sam Abuelsamid, an industry analyst at Navigant and former automotive engineer, tells Ars that it's "pretty much universal" that "vehicles are programmed to ignore stationary objects at higher speeds."

Here's one with BMW owners asking about it: https://g20.bimmerpost.com/forums/showthread.php?t=1738458

Here's a review of Cadillac's SuperCruise: https://www.thedrive.com/tech/14830/i-wrote-this-in-a-superc...

> Related: All these types of semi-autonomous systems have a hard time identifying stationary cars—vehicles in stopped traffic on the highway, for example—as "targets" to avoid. Coming over a crest at 70 mph, I pick up the stopped traffic ahead of me far before the system does. I even give the system an extra second or two to identify that we're hurtling towards stopped traffic, but have to initiate braking. However, it's very likely that even at that speed, the car could have made the stop before an impact.

And the feature you're thinking of, AEB and its limitations are here: https://www.caranddriver.com/features/a24511826/safety-featu...

> If traveling above 20 mph, the Volvo will not decelerate, according to its maker.

> No car in our test could avoid a collision beyond 30 mph, and as we neared that upper limit, the Tesla and the Subaru provided no warning or braking.


The Toyota Safety Sense system will slow the car down by 25MPH regardless of its initial speed (except possibly for "very high speeds").

It makes no sense to give up entirely when an application of brakes would reduce damage.

In the case of this accident, the vehicle not only didn't apply brakes but continued to accelerate after it hit me (the double hit).

This may or may not have been operator error, but the driver says she was on autopilot, that there was an OTA update issue, and that it has happened before. This all seems to suggest an investigation is appropriate and necessary.


I don't see why this has anything to do with Toyota's system, which by the way, had a recall for TSS braking due to stationary objects.

You're mad, I get it, but you need to take this up with the driver (who claims she had a failed OTA update, which caused her to wreck, AND THEN DROVE AND WRECKED AGAIN) whose insurance will take it up with Tesla. It isn't your job to figure out why the car did it.


The point is that it not only didn't slow down but accelerated.


Okay, and?

The only point here is that you were hit by an inattentive driver who didn't maintain control of her vehicle twice in one day. She put you in danger and your life at risk, and you're blaming the car.


> If someone hits you using cruise control they remain at fault.

That'd be relevant if they didn't call it autopilot. If they called it lane assist or cruise control plus or something then I'd agree.

Tesla is at major fault for implying that the Teslas can drive themselves, even if the driver is also at fault.


Tesla could call it Egre-MuzzFlorp and it wouldn’t matter. The surveys showing people think “autopilot” means “can drive itself” were fatally flawed, in that they were done with non-Tesla owners.


And yet, people drive their Teslas as if it means "can drive itself".

Words matter. You'd need real evidence to convince me that Tesla calling the system "Autopilot" hasn't significantly contributed to crashes.


On the contrary, we need real evidence suggesting it does. Certainly the population level crash data doesn’t support it.


>The surveys showing people think “autopilot” means “can drive itself” were fatally flawed, in that they were done with non-Tesla owners.

Reality check: there are videos showing Tesla drivers sleeping in the car, or operating it from the back seat, so we have proof that a subset of Tesla users treat Autopilot as "it drives itself, and only laws forced Tesla to add those warnings." These videos are reality: no conspiracy, no subjectivity, no fake stats. Some idiots are believing the marketing, and some other idiots are still defending that marketing.


> some idiots are believing the marketing

That doesn't follow. Some people are definitely being idiots, but the marketing doesn't say that it's okay to do those things, so they're not "believing the marketing" by doing so.


If an at-fault Tesla hits me, my legal beef is with the driver of that Tesla. They may in turn rely on insurance or on suing Tesla to recover damages from their own misunderstanding of the car’s abilities or improper marketing, but my beef remains with the driver of the at-fault vehicle.


Radar is not good for detecting stationary objects. Of course you get a nice reflection back from a stationary car, but you get a similarly nice reflection from an overhead traffic light or a manhole cover or a dropped nail. Because of this, every automotive radar ever fielded gates out stationary objects. If you didn't, you would get a crazy number of false positives.

They can do this because the radar measures the relative speed of objects via Doppler shift, and you know the speed of your own vehicle. Anything that has the same speed as you but in the opposite direction is most likely stationary. (Or moving perpendicular to you: the velocity difference is vectorial, but Doppler can only observe the component along the observation vector, and civilian radars have terrible angular resolution.)

In short: nobody ever used radar to stop cars from hitting stationary objects. This is not a Tesla specific thing.
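The gating logic described above can be sketched in a few lines. This is a toy illustration, not any real radar stack: the function name, tolerance, and numbers are all made up, and real filters also fuse angle and tracking history.

```python
def is_stationary(ego_speed_mps, radial_velocity_mps, tol_mps=0.5):
    """A target whose Doppler-measured radial velocity (negative = closing)
    roughly cancels our own speed is most likely stationary ground clutter:
    an overhead sign, a manhole cover, or a stopped car -- the radar alone
    cannot tell them apart, so a naive filter gates them all out."""
    return abs(radial_velocity_mps + ego_speed_mps) < tol_mps

# Ego vehicle at 30 m/s (~67 mph): a stopped car closes at -30 m/s,
# exactly like an overhead gantry, so both get gated out together.
returns = {"overhead sign": -30.0, "stopped car": -30.1, "slowing car": -12.0}
for name, v in returns.items():
    print(name, "->", "gated out" if is_stationary(30.0, v) else "kept")
```

The point of the sketch: a stopped car and an overhead sign produce the same Doppler signature, which is why gating out stationary returns also discards stopped vehicles.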


I think it's probably more likely the driver made a user error: tens of thousands of people use Autopilot every day in stop-and-go traffic.


Stop and go traffic is not the same.

In stop and go traffic, the car in front of you is visible to the camera at all times.

In this case, my car may not have been acquired by the camera before it was already stationary, in which case it might have been ignored, causing the accident.


We won't know until the lady provides you with evidence that her AP was actually on, but my guess is that she fucked up and she's just using AP to offset blame.


Exactly: An investigation would find out what happened.


Please post an update when the investigation concludes


The main point of this thread is that I don't know how to actually get an investigation to happen!


The insurance company handles it.


To be fair to scenario A, I've seen one other video of a driver reporting AP hitting the car in front of them during a rapid slowdown in traffic. It's hard to say how widespread it is if the NHTSA isn't willing to investigate.


Or D) the Tesla stops instantly: once when it hits OP by itself, and again when the person behind it hits it.

(Spare me the low effort comment about how everyone should be prepared for the car in front of them to stop instantly because it hit something, statistically nobody drives like that)

Edit: I misinterpreted the OP about it being a four-car pileup.


Dude. It's not a "low effort comment" to point that out, it's literally the law. If you can't stop when the person in front of you stops, you're too close. Increase your following distance. Don't let others' bad habits justify your own.


Yes, yes it is a low effort comment. And it is exactly what I was trying to preempt. It adds exactly zero to the conversation to say "but the law" or "but drivers ed" or "but some proverbial rule of thumb".

For better or worse, neither your fantasy of how people ought to act nor the letter of the law reflects how the overwhelming majority of the human population operates motor vehicles, or expects others to. Is it ideal? Probably not. But it's a decent balance between being somewhat prepared for the expected traffic oddities, leaving margin for some subset of the unexpected, and efficient use of road space.

I'm sure this will be an unpopular comment, because there is no shortage of people here who think humans adhere to laws the way a network switch adheres to its configuration, but the reality is that there is not perfect alignment between how reasonable traffic participants behave and the letter of the law.


I think that some of us would argue that these are not "reasonable traffic participants". People who do not maintain sufficient stopping distance are one of the most frustrating parts of the American driving experience and are (IMO) extremely disruptive to safe travel, especially on highways.


>I think that some of us would argue that these are not "reasonable traffic participants"

You're basically arguing that almost everyone else is unreasonable. That's going to be a very uphill argument.

Also there's no reason to hide behind "I think that some of us would argue". You clearly hold this opinion. Ask yourself why you have reservations about owning it.

>People who do not maintain sufficient stopping distance

Who defines "sufficient"? Because the consensus, based on the observed behavior of typical traffic, seems to be that "sufficient" is a few seconds where possible, but always less than whatever the comments section on the internet thinks it should be.

>American driving experience

The American driving experience is not particularly remarkable (except maybe in its low cost) compared to other developed nations, all of which are pretty tame compared to developing nations.

I'm not asking you to like the way people drive. I'm just asking you to assess traffic incidents based on the reality of how people drive, and not the farcical assumption that most participants are following, or can be expected to be following, whatever rules are on paper.


You could have pre-empted it by not making such a claim in the first place, and I abhor your attempt to normalize this dangerous behavior. It is not a "delicate balance." Increase your follow distance. Driving closer to the car in front of you gains you nothing but sacrifices valuable reaction time in the event of an emergency.


Take your high horse and turn it into glue. I'm not attempting to normalize anything. Look outside, it's already normalized. It is the current status quo. I'm not endorsing it. I'm simply asking you not to pretend otherwise so you can act outraged at someone who failed to avoid an accident because they were driving like typical people drive.


> If you can't stop when the person in front of you stops, you're too close

If the other driver decides to randomly brake in the middle of the road (as some Teslas have been known to do), it's not necessarily the person behind's fault.


It absolutely is. If the person behind was unable to avoid collision, then the collision occurred because they were following too closely. It doesn't matter whether it was a Tesla phantom-braking, or a human slamming on the brakes to avoid hitting a dog. It is always the driver's responsibility to maintain enough distance that they can safely respond to any action taken by the vehicle ahead.


>It absolutely is. If the person behind was unable to avoid collision, then the collision occurred because they were following too closely.

This is infantile circular logic. It makes for great internet feel-good points and little else.

What the other car was doing absolutely matters. Plenty of people have brake checked their way into an accident and wound up paying for it because there were witnesses.

>It is always the driver's responsibility to maintain enough distance that they can safely respond to any action taken by the vehicle ahead

Citation please. I'm particularly interested in one that backs up the word that you thought was important enough to italicize.

My state places no specific requirement for following distance upon drivers. The state driver's manual states a suggested minimum of two seconds.

I spot-checked two other states, and their drivers' manuals advise similarly (one advised three seconds, one a variable distance depending on speed); neither said anything about being able to account for anything the car in front of you does.
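For what it's worth, the "two-second rule" cited by those manuals is a time gap, not a distance, so the implied following distance grows linearly with speed. A quick back-of-the-envelope conversion (the function name and numbers here are just for illustration):

```python
def following_distance_m(speed_kmh, gap_seconds=2.0):
    """Distance covered during the chosen time gap.
    km/h divided by 3.6 gives m/s; multiply by the gap in seconds."""
    return speed_kmh / 3.6 * gap_seconds

for v in (50, 100, 130):
    print(f"{v} km/h -> {following_distance_m(v):.0f} m at a 2 s gap")
# e.g. 100 km/h works out to roughly 56 m of gap
```

Note this is only the distance consumed by reaction time; actual braking distance on top of that grows with the square of speed.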


In my part of the world, it's always the fault of the person behind.


This is changing with dash-cams. If the other person has a dash-cam, you are not free to do unreasonable things and then weasel out of being on the hook on the basis of the front of their car hitting the rear of yours. Of course, if there were witnesses this has always been the case.


If the dashcam shows the driver following too closely and failing to brake in time, I'm not sure it will help them. But sure, you can't reverse into someone.


Really? In most states and EU countries it's not. It's assumed to be the fault of the person behind at first, but that's a heuristic, not a necessary assignment.


No other car hit the Tesla from behind. It hit me with two impacts all by itself.


You said it was a 4 car accident? She hit you from behind and you hit the car in front of you which hit the car in front of it? Did you end up hitting the car in front of you twice because she hit you twice?


I'll go against the general opinion in this thread and say that this is not something that I'd expect to be blamed on Tesla. Especially considering this:

> The driver said she was using Autopilot, that an OTA update had had a problem earlier in the day, and that she had actually had the same kind of collision on Autopilot previously!

Autopilot might be an absolute dumpster fire, but what you are describing is similar to an adaptive cruise control failure and the liability is still with the driver. She rear ended you while you were at a red light, make sure that her insurance pays and that's it. If she wishes to sue Tesla she can obviously do so.


I have adaptive cruise control, and it behaves the exact same today as it did on the day I test drove the car. It doesn't change behavior unpredictably with OTA updates!

How was the driver supposed to know that the previous issue was not some rare fluke?

Tesla is recklessly rolling out software, and has amazingly pushed off the liability to the drivers willing to try it. Sadly we are all part of the beta, like it or not, as long as there are Teslas driving around on public streets.

I'm mostly shocked that insurance rates have not skyrocketed for FSD beta testers.


I have a Tesla. I could be wrong, but I don't think Autopilot has seen any updates in quite some time. The exception is people who have the FSD Beta, a very small percentage of owners who were able to get an impossibly high driving score. Getting into an accident would have made it impossible for this lady to get the FSD Beta.


To clarify, I was responding to the OP with regard to the "investigation" part. Authorities won't launch an investigation on someone rear-ending someone else on the basis that "autopilot did it". As a wronged individual he should make sure that he is properly compensated as non-responsible for the crash. The driver herself can (and maybe should!) at least contact Tesla to inquire about the whole thing.

If Teslas with FSD are truly causing more accidents than other cars, then yes, at some point some government body will investigate, but they won't care about a single incident with only material damage.


> I have adaptive cruise control, and it behaves the exact same today as it did on the day I test drove the car. It doesn't change behavior unpredictably with OTA updates!

I do too on a non-Tesla. I've seen differing behaviors not just due to updates but due to the time of day!


Yes, mine tends to fail more during winter because the sun is much lower in the sky. It also doesn't like heavy fog (what a surprise), and once in a while it ignores uncommon vehicles such as trucks carrying weird cargo.


Responsibility and who pays are certainly important, but isn’t it equally concerning that there isn’t an investigation initiated directly to determine if there’s malfunctioning technology?


At scale, not really: how many cars get rear-ended daily in the US? Insurance companies will forward the info to another government body, which will launch an investigation if there is a significant deviation from the norm.

