An Update on Last Week’s Accident (tesla.com)
349 points by runesoerensen 12 months ago | 429 comments



I really hate the whole Tesla angle of "technically we are not lying but we know people are going to misremember and misrepresent what we are writing".

Take for instance: > The driver had received several visual and one audible hands-on warning earlier in the drive...

What this means is that during this incident there were no visual or audible cues that a crash was about to happen.

What they are saying is that while he was driving there was an earlier point, at which the car did not crash, where the car gave visual and audible cues. And you have to ask: why the hell is that relevant? It isn't. They are stating it as a fact because they know some people will incorrectly claim that the car warned the driver prior to the crash.

For all their eagerness to explain what happened, why aren't they talking about what actually went wrong? Why did the car drive straight into a barrier? If their claim is that it was caused by the driver not having his hands on the wheel for 5 seconds, then they need to fold the company and give up trying to create self-driving cars. That is not acceptable for a self-driving system.

The fact that the barrier was damaged intensified the accident, but that does not in any way excuse their system driving straight into it. And no, you can't get out of culpability by claiming statistical superiority. That's like a gang member trying to get out of jail after killing a rival gang member because his gang statistically kills fewer people. Tesla has put a product in the hands of consumers; when it kills those consumers they need to step up to the plate and be honest about their fuckups, not just blame the drivers, blame the infrastructure and point to statistics.


> And no, you can't get out of culpability by claiming statistical superiority.

Quite. Tesla is deliberately comparing their modern vehicle design & wealthy driver base (statistically one of the safest cohorts) with the entirety of the U.S. driving population. This is bad statistics: we expect Tesla vehicles to have far fewer accidents than the mean US vehicle per mile, because Teslas are expensive modern vehicles with (relatively) wealthy drivers who can afford to keep them maintained & will themselves be in better health than the mean US driver.

Don't tell us how safe Teslas are compared to the entire US driving population: tell us how safe they are compared to equivalent vehicles from other manufacturers. My working assumption is that Tesla doesn't do that because it would be far less flattering to the 'A Tesla is totally safe, honest' PR boosterism that runs through every Tesla press release.


> Don't tell us how safe Teslas are compared to the entire US driving population: tell us how safe they are compared to equivalent vehicles from other manufacturers.

I'm having a hard time finding an exact numerical comparison, but:

- Tesla claims 1 fatality per 320M miles

Compared to

- A 2015 study of 2011 model year vehicles showed, for instance, that the Volvo XC90 had never been involved in a driver fatality (1)

- There are about 10K 2011 XC90s in the US (2)

- Avg. US drivers cover 13,400 miles/year (3)

So:

- 4 years studied * 10K vehicles * 13,400 avg miles -> 536,000,000 fatality-free miles

This is obviously super hand-wavy, but I think it's fair to state as a hypothesis that Tesla could be twice as deadly as the safest high-end vehicles... you'd need to design some experiment to attempt to falsify it to feel confident this is correct.

(1) https://www.freep.com/story/news/nation/2015/01/29/study-cha...

(2) http://carsalesbase.com/us-car-sales-data/volvo/volvo-xc90/

(3) https://www.fool.com/investing/general/2015/01/25/the-averag...
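
For what it's worth, here is the back-of-envelope arithmetic above as a tiny Python sketch, using only the figures cited in this comment ((1)-(3) and Tesla's claimed 1 fatality per 320M miles); it is hand-wavy by construction, as noted:

    # Rough estimate of fatality-free XC90 miles from the cited figures.
    xc90_fleet = 10_000        # approx. 2011-model-year XC90s in the US (2)
    miles_per_year = 13_400    # average annual mileage per US driver (3)
    years_observed = 4         # model years covered by the 2015 study (1)

    fatality_free_miles = xc90_fleet * miles_per_year * years_observed
    print(f"{fatality_free_miles:,} fatality-free miles")                  # 536,000,000

    # If Tesla's claimed rate of 1 fatality per 320M miles applied to that
    # exposure, you'd expect roughly this many fatalities instead of zero:
    print(f"{fatality_free_miles / 320_000_000:.1f} expected fatalities")  # ~1.7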


Eh, you can't cherry-pick specific (model, year) combinations after the fact without overfitting. You expect some variation in deaths per (model, year), and by searching for the least deadly ones you'll dramatically influence your estimate simply because you used a maximizing search instead of a broad survey of a priori safe models.


This is totally true, and I agree; I tried to guard against this at the end with the "I think this is enough to state a hypothesis, which would then need to be tested.." paragraph.

However - the same is true for 2009, 2010, 2011, 2012 model XC90s, and they are just one of 10 models that had no fatalities in those four model years in the study. I just picked that year and model to get one data point for mileage to compare; the other model years of XC90 have similar sales, so the same applies.

So, while you're right, I don't think it's an entirely useless comparison; it indicates there could be something here, which could then be used to state a falsifiable hypothesis, something like: "Similar cohort drivers would be safer in <one of these ten models> than in a Tesla.".


At a minimum it indicates that Tesla's numbers do not seem exceptionally safe, even if you can't say they're less safe than comparable competitors either.

Given the low number of incidents, you probably want to be looking at something other than fatalities to get a large enough pool of incidents to reduce the impact of pure chance; at least in addition to actual fatalities. It's not going to be an easy comparison, that's for sure.


Probably should also note that the average Tesla driver is likely male, has money (i.e. more risk-taking), etc. Essentially, compare to sports cars as opposed to high-end vehicles.

They were originally advertised as sports cars really. 0 to 60 faster than any other car, etc.


I'm not sure how much of that is about the car versus the driver. Volvo drivers are some of the slowest, most excessively cautious, most annoying (to others) drivers on the road.


> Quite. Tesla is deliberately comparing their modern vehicle design & wealthy driver base (statistically one of the safest cohorts) with the entirety of the U.S. driving population. This is bad statistics: we expect Tesla vehicles to have far fewer accidents than the mean US vehicle per mile, because Teslas are expensive modern vehicles with (relatively) wealthy drivers who can afford to keep them maintained & will themselves be in better health than the mean US driver.

You're going to have to explain to me how the wealth of a driver has any bearing on their ability to pilot a self-driving car.


Firstly, wealthy drivers can afford to keep their cars in better shape and can afford better cars in the first place. Neither of these factors depend on self-driving technology.

Secondly, Tesla themselves make it clear that their self-driving can't be relied upon to drive the car unsupervised - the driver has to be paying attention at all times in case the Tesla self-driving code decides to do something catastrophically stupid. Hence the same factors that correlate with driver attentiveness in the ordinary population will also affect the Tesla population. Wealth correlates with health, which correlates with better reaction times & general fitness.

The Tesla autopilot tends to turn itself off precisely in situations where driving is difficult and complex - which in turn implies that these human factors will remain just as important as they are in the general driving population.

(Of course it may simply be that the reason wealthy people have a lower death rate on the roads per mile is that they can afford cars that are safer in crashes; the above is my supposition, not evidentially backed.)


>Wealth correlates with health correlates with better reaction times & general fitness

I agree with your general point, but you can't just join up 2 correlations like that; that isn't always going to work when there are other confounding factors (and there always are). Wealth also correlates with age, which anticorrelates with reaction times and general fitness.


In case anyone's wondering, here's a simple example of 3 variables so that the first is positively correlated with the second, the second positively correlated with the third, but the first and the third have a negative correlation: simply take 3 independent standard normal variables, X, Y, Z, and define our 3 variables to be u=X+Y, v=Y+Z, w=Z-X. Then correl(u,v)=0.5, correl(v,w)=0.5 and correl(u,w)=-0.5.
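
If anyone wants to sanity-check that construction numerically, a quick Monte Carlo (Python with numpy assumed) reproduces those correlations:

    import numpy as np

    rng = np.random.default_rng(0)
    X, Y, Z = rng.standard_normal((3, 1_000_000))  # independent standard normals

    u, v, w = X + Y, Y + Z, Z - X
    print(np.corrcoef(u, v)[0, 1])   # ~ 0.5
    print(np.corrcoef(v, w)[0, 1])   # ~ 0.5
    print(np.corrcoef(u, w)[0, 1])   # ~ -0.5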


According to insurance adjusters, age correlates with lower accident rates.


Lower accident rates, or a lower percentage of injury or fatal accidents? I find it hard to believe 80 year olds are getting into fewer accidents than 40 year olds unless it’s because they drive less frequently.


Which would be fine for an insurer of course. Heck, even if they got into more incidents, but the incidents were cheaper, that would be fine for an insurer too.

And not all markets are equally competitive. Just because insurance rates differ doesn't mean risks necessarily differ the same way. And let's not forget that while insurers are professional risk assessors, that doesn't mean they're infallible either, or immune to other distortions (e.g. pass-the-buck subprime mortgage kind of issues).


And having a hotmail email address correlates with higher accident rates.


Agreed, hence my parenthetical note at the bottom of the comment.


> Wealth correlates with health correlates with better reaction times & general fitness.

But at some point between wealthy and rich the person hires a (non-wealthy) driver and now they're back to square one.


Wealthy people can better control the quality of the driver.


What I'm looking for really is anything specific to auto-pilot technology to suggest that the safety benefits won't also be met, or exceeded, when extrapolated across the whole population.

All other factors taken into account, driving has become much, much safer in recent years, and is now safer than it ever was. When you start to use technology to eliminate driver errors in that self-same error-prone cohort, there's no reason to think that the safety dividend won't be even more pronounced than it is today.

If you want to link crash rates to wealth, you would also have to account for countries like Germany, which has a similar median wealth figure to the US but a vastly lower motorway death rate. [1]

It seems you can't both say 1) Tesla drivers are less likely to have accidents because they are wealthy and error-prone, and 2) Extending self-driving technology that eliminates basic errors to less-wealthy and/or more error-prone drivers won't have a net benefit on driver safety.

[1] https://www.sciencedirect.com/science/article/pii/S240584401...


I’m suggesting that Tesla drivers are both relatively wealthy & /less/ error prone!

Clearly /if/ Tesla self-driving code is safer than the average driver, then extending that to the general driving population would be a net good. But if the apparent safety is an artefact of the statistics, due to Tesla cars themselves being intrinsically safer, just as practically every other modern vehicle is a safer car than one built 20 years ago, then extending self-driving to every vehicle might not be a net safety benefit.

Likewise, if the driver cohort is different (and we know that it is different) from the general population & that cohort is required to drive the vehicle precisely at the times that the self-driving code can't cope, which are probably the times when driving places the most demand on the driver, then that difference in driver cohort matters: the statistical difference in death rates could easily be a result of differences in driver populations.

You can't tell from the data as given & the fact that Tesla keeps repeating the aggregate figures in their PR is, frankly, suspicious.

(NB. that paper you link simply underlines my point - In both countries, the lowest wealth quartile has a vastly greater death rate on the roads than the rest of the population. It doesn't matter whether Germany is worse or better than the US. If Tesla draws disproportionately from higher income deciles, then we should a priori expect them to have a far lower death rate than the mean. Quoting the mean as a benchmark to compare the Tesla death rate against is outright misleading.)


Wealthy people tend to be older, which directly correlates to less impulsive or stupid behavior on the road.

People who buy a super expensive car have more to lose than some dude in a 10 year old hoopty.

Maintenance standards vary. We can presume that big brother Tesla maintained the car as they would have castigated the driver for not doing so. They are also the only game in town.


Statistically if you're wealthy enough to own a Tesla, you're also way less likely to have to drive when tired/under the influence for example.

All sorts of constraints are correlated with wealth that put you in way more risky situations if you're poorer.


Do you have anything to back this claim? It seems to not make sense.


For example: https://academic.oup.com/alcalc/article/46/6/721/129644

It's not exactly controversial to claim that lower income correlates with crime though.


There's lots of different kinds of crime, each very unique in its circumstances. Just try comparing the stats for securities fraud vs. drug possession, for example.

DUI is a tough one, people of all socioeconomic backgrounds like to get drunk, and people of all socioeconomic backgrounds like to drive. Off the cuff, I might guess that the numbers would show DUI to be the most egalitarian crime, actually.


How about you look at facts before spewing PC-motivated 'guesses', like http://www.thelancet.com/journals/lanpub/article/PIIS2468-26... and http://injuryprevention.bmj.com/content/8/2/111 - both from the first page of my first google search, I'm sure it's trivial to find many more.


[flagged]


First scientific study after 10 seconds of googling https://academic.oup.com/alcalc/article/46/6/721/129644


But Shaanie, maybe I'm reading it wrong, but from that article it seems that "poor people" are more likely to commit a DUI, by a factor of somewhere between 1 and 2.8. That in no way directly accounts for Teslas being 3.7x safer than the average.
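
For reference, the 3.7x figure presumably just comes from dividing the two rates quoted in Tesla's post: one fatality per 320 million miles for Tesla vehicles equipped with Autopilot hardware versus roughly one per 86 million miles across all US vehicles (if I recall their number for the US average correctly):

    tesla_miles_per_fatality = 320e6   # Tesla's figure for Autopilot-hardware cars
    us_miles_per_fatality = 86e6       # the US-average figure Tesla quotes
    print(tesla_miles_per_fatality / us_miles_per_fatality)   # ~3.72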


Hell, wealth correlates with age. How many 16-20 year olds are driving Teslas?


For the same reason those drivers will pay less for car insurance for almost any car they buy: they are actuarially safer drivers.


> We expect Tesla vehicles to have far fewer accidents than the mean US vehicle per mile,

We do? Is there a cite for that? It seems to me that driver of new luxury vehicles are disproportionately aggressive and unsafe, actually. Has this been studied?

If you're going to make an argument about bad statistics, you need to practice good science yourself.


The simplest argument here is that Teslas are new vehicles; the total US fleet average age is much older than any Tesla on the road today, and driver/passenger fatalities tend to be higher in older cars because they have fewer safety features.


> higher in older vehicles

This seems intuitively true, but are there any stats on car age and accident frequency? Also, where is the line on old? Vehicles made in the last 5-10 years will share most safety features, with all but a small minority of newer cars having something cutting-edge. I'll admit I had been thinking of fleet age in terms of the state of maintenance, but safety features are a fascinating angle as well. And the article you link in another comment was a fascinating read too, thanks!


For one there is the presence/absence of safety features (that one is obvious).

Then there is the actual design of the vehicles: cars that were made in the last couple of years fare much better in collisions than older cars because they are a lot stronger, and are better able to channel the energy of an impact to areas away from the passengers. Many newer vehicles will for instance shift the engine under the passenger compartment in a frontal impact.

Older airbags had 'best before' dates associated with the igniters being somewhat unstable, which reduced the certainty of them going off as they got older.

Then, finally, the bodies themselves, once cars start to rust you can patch them up but they won't be the same afterwards. There is quite a bit of redundancy in the body but it definitely doesn't help if there is significant corrosion.


Yeah yeah, I get the argument. The point was that you can't ding Tesla for "bad statistics" by using an argument that asserts "tend to be" without evidence.

In fact I personally doubt (i.e. I have no evidence and am not actually making this argument, because it would be "bad statistics") this is true if you compare Teslas to the BMWs being sold into the same demographic. Those "safe new vehicles" you're positing include a whole lot of crossovers and minivans driven in much safer ways.



>> You're going to have to explain to me how the wealth of a driver has any bearing on their ability to pilot a self-driving car.

It doesn't. But Autopilot is only supposed to be used on the highway. Not in New York City. Not on dirt roads. On the highway. That's the easiest place to make a system like that work - adaptive cruise control and a simple lane follower go a long way there (I hope Tesla has something a lot more advanced than that). To compare stats on their limited-use autopilot to the whole country is truly disingenuous.


> You're going to have to explain to me how the wealth of a driver has any bearing on their ability to pilot a self-driving car.

One important nit: it isn't self driving. It's sometimes self driving.

About cohorts: how about this: young (thus inexperienced and more careless) drivers generally can't afford Teslas. So yes, wealth is an indicator for how accident prone someone might be.


Wealth of driver converts to potentially good lawyers per damage per crash, which converts to fear of god in Tesla software department management, which converts to actual safety practices being followed and corners not being cut?


That's an awfully long risk compensation feedback loop to close! It sounds like a plausible motivation for Tesla to care more about the quality of their self-driving code than they otherwise might, but it would be a bit difficult to run the counterfactual experiment to actually prove it one way or the other :)


Do you expect 370% above average for any cohort? That's amazingly amazing. That must mean that some other cohorts are correspondingly far below the average, what are they?


Less wealthy, less healthy people, driving smaller, older, less well maintained, less safely designed cars in more dangerous/remote locations. That's a bunch of factors contributing to safety or the lack thereof. Vehicle age alone is a huge factor in safety. Even the likelihood that the driver and passengers are wearing a seatbelt is likely vastly different between Teslas and all cars on the road. All those factors add up.

But that’s not even the point. The point is that the numbers Tesla gave are meaningless. Tesla’s stats would only be meaningful if you compared them to similarly sized/priced cars of the same age cohort with the same market segment. The fact that Tesla compares its luxury high-priced large sedans to all cars and not its own segment is itself very telling. They have those stats, and they would use them if they were compelling.


I've heard that before and it still sounds puzzling... Elon Musk has an ego the size of Mars, right? So he wants to sell autopilot-equipped cars to everyone, and his basis for comparison is everyone.

(Last posting from me today.)


Thank you. Do people really want Tesla to use another company's failures / worse numbers via statistics in their review of this accident?

At least they are responsive, seemingly open, and seemingly empathetic. They are obviously handling things in a business first manner as corporations will but it hardly feels as inhuman as people seem to be making it out to be.


My question is honest. I found GP's assertion and its implication astonishing, but if that's how it is...


You forget the key rule: "Move fast and break things", including people and expectations of human decency


There's always someone who expects more. If you make a doodah that reduces (name something bad) by 30% someone will expect 40%, if it improves (name something good) by 50% someone will expect 60%, and by the way it costs too much, whatever it costs.

Name your expectations up front, please.


The expectation is simple: value human life. Don't try to do beta tests and simultaneously market it as if it's production quality. Accidents happen, but own up to them; don't immediately and fervently insist that you were not at fault before the investigation even starts.

It may be easy to casually update software and break builds, but when human lives are at risk a healthy dose of conservatism is appropriate.

But you're right -- it's probably unfair to expect a SV company to prioritize anything over profits (we saw that with Facebook, right?)


I thought about this over the weekend. About what you want, concretely.

Tesla says their cars have driven past that spot 85,000 times so far. So you may be asking for some testing that's better at uncovering problems than 85,000 white-noise test instantiations are. Or you may be asking to test, and test, and improve until the software meets the developers' intentions: if their intention is to improve safety by 40% (or 400%, or 40000%) then you may be saying they shouldn't release a version that's improved over the baseline by 20% (or 200%, or 2000%) because that still doesn't realise the design.

Seems a bit absolutist to me. You could equally well make the opposite absolutist argument that having in-house software that's better than the current release is putting people's lives in danger by inaction/negligence/whatever. That fixing a bug or improving a feature confers a moral obligation to release immediately, lest lives be put in danger that would be avoided by the new code.


Have you seen some of the drivers out there? Next time you’re in a parking lot take note of all the cars in terrible condition with scrapes and dings all over.


This is by far the most important point in this discussion. According to this article (https://www.greencarreports.com/news/1107109_teslas-own-numb...), Tesla's statistics are deliberately wrong, and Tesla compared its Autopilot crash rate to the overall U.S. traffic fatality rate—which includes bicyclists, pedestrians, buses, and 18-wheelers.

You could argue that Tesla should be compared to luxury vehicles (which have more safety features), but even if you just compare their crash rate against U.S. drivers of cars and light trucks, the Tesla Autopilot driver fatality rate is almost four times higher than typical passenger vehicles...


They should (on closed tracks) test self driving cars with drunk and deliberately stupid drivers. Do they do this? Because if you mass market such a car you will get people using it to drive themselves home drunk.


They can be far more specific: just tell us how Model S drivers without (or not using) Autopilot do compared to those with and using it. This would control almost every variable except Autopilot. I find the fact that they don't release this number extremely damning.


Somebody has tried to do this [1]. It would be nice to see this analysis from Tesla themselves. As you say, we can probably guess why we haven't.

[1]: https://www.greencarreports.com/news/1107109_teslas-own-numb...


Tesla always blames the "driver" (and publicly so, no less) when they are killed by their "autopilot" (which is actually driving, that is, fulfilling the purpose indicated by its name, despite being not very good at it...) -- either directly, or by carefully making misleading statements that are both technically correct and incredibly dishonest in the context of both the events and the surrounding text.

Why people are still using their shit is beyond me. Why they are still authorized to sell it under this marketing (like the name given to it, or the mismatch between the restriction and the practice), I also can't understand: it should be better regulated by the authorities.


This is probably the worst part about driving a Tesla: Your car is going to testify against you. Because Tesla can and will make a statement like this, and they will always throw you under the bus. Which is a terrible way to treat a customer, as a general practice.


> Your car is going to testify against you.

Presumably Tesla/the car is telling the truth as it saw it. I don't think you're suggesting they're falsifying the logs.

> Which is a terrible way to treat a customer

I'd assume that Tesla sells you a vehicle, not PR or legal services that protect you in case of an accident. Your expectation that Tesla should take the blame is unreasonable by any standard.


I'm not saying Tesla is lying. But I am saying that I could spend six figures on a car from a company that is ready and waiting to stab me in the back. Or, I could spend five figures on a Toyota, knowing that Toyota is not going to sell me out if my car crashes.

I mean, look, it makes sense Tesla has to defend itself, especially legally, but there's a big difference between pulling out the logs when you need to for the lawyers and the regulators... and making a public blog post blaming the customer.


I think you're absolutely correct, and moreover that the difference comes from the fact that it's in Toyota's best interest to think about the long term, play the good guy and dip into their rainy-day money for any costs associated with that.

Whereas with Tesla, they're leveraged out to here with stock valuation based on promises and projections, and if their key tech is perceived as seriously flawed, they're not going to exist as a company in the long term.


As far as I can tell, Toyota (or other car manufacturers) would be just as inclined to argue driver error in crashes (especially once involved in litigation). See e.g. https://users.ece.cmu.edu/~koopman/pubs/koopman14_toyota_ua_... for a discussion of some Toyota crashes.


I think there's a significant difference between a company who will defend itself reasonably in court, and one which is actively collecting telemetry on your car, and will proactively write a blog about how badly you drove.


There is a nuance here though: unlike a "classic" car, Teslas and other modern cars clearly have access to a wealth of data about your driving and your crash. The scope for lying by omission, or even simply misinterpreting that wealth of data, is much larger.

I do think it's reasonable to ask of them to either keep completely silent; or to be completely open - perhaps by giving complete access to all data to a truly independent third party, and not picking which facts they think are fit for public consumption, and which are not.

I admire what Tesla has achieved - but this is a company under considerable pressure. They may well go bankrupt; and every incident matters. They are under terrible pressure to massage those facts to make them look good. I can't help but treat any statement like this no differently to any other ad - it's easily possible to be misleading even when there is some possible interpretation that isn't an intentional untruth.

Everybody knows people aren't good at sitting around doing nothing and maintaining focus. Distracted driving isn't a new risk. So the question is: are the safety benefits to a driver aid really greater than the risk they cause by encouraging distraction? Even if the driver aid catches 99% of all risks, that may yet be a net negative. Personally, I don't think it's reasonable for a system that encourages distracted driving to then claim it was the driver's fault they were distracted. No amount of small print or big fat warnings can ever excuse ignoring reality. If the autopilot is contributing to driver inattention, then the driver's inattention is no longer just the driver's fault. Not that I'm at all sure that's what's happening here, but there are some ominous indications, that is for sure. Without an autopilot, who takes their hands off the wheel for more than 6 seconds on a drive like that?


You know how many plane crashes are not the fault of human error? Very few.

I have a friend that investigates planet crashes. He's told me that between pilot error, human error, hubris, flying when conditions are too dangerous, and poorly trained or sloppy mechanics, only about 1% really can't be classified as being caused by humans. He said the stats are a bit skewed because, like any organization that reports on human performance, they've been influenced to be a bit "flexible" when assigning blame.


Some of this is a quirk of how the NTSB reports accidents. For instance if a small plane suffers engine loss in flight, the cause of the accident will be listed as "failure of pilot to land the aircraft safely without power". Which is technically true, but most ordinary folks would blame the engine failure for the crash, not the pilot. It reflects a culture where the pilot is absolutely responsible for the aircraft and its passengers. That's the exact opposite of where the self-driving car industry is going.


Human error is distinct from driver/pilot error. It's possible (though quite unlikely) that there is 0% pilot error in all plane crashes, and 99% human error.

Given that plane autopilots are roughly comparable with Tesla's Autopilot, one wonders how many plane crashes have occurred while the plane was on autopilot, and what fraction of those crashes were attributed to pilot error.


> planet crashes

I want this job. Although I'm pretty sure all planet crashes would be the result of gravity.


I think the regulations will come soon enough. Tesla, Uber, etc are trying to get as many miles booked as possible before this happens.


Sadly I have to agree with you on the narrative angle. While I am very impressed with what Tesla has accomplished with their Autopilot, I find the careful construction and omissions in this follow-up post to be poorly done.

They omit what speed and what lane the car was travelling in. They omit the time at which the car was no longer behaving in the way in which it had (when did it start heading for the crash barrier? At what point did the wheels closest to the barrier move over the line? etc)

They have previously mentioned that their cars have navigated this spot many times in the past (over 200 as I recall). Did they note the number with and without the crash barrier intact? How many times was the car in the exit lane and how many times in the lane to the right of the exit lane? How many times have they navigated past it with a damaged crash barrier?

The driver had driven this route before; it was his commute to work. Had he driven it with this car on that route in autopilot before? Did it work then? How many times? How many seconds were there between when the driver's car was doing something right (in lane etc.) and doing something wrong (straddling the lane)? The response indicates there were 5 seconds of visibility coming up to the crash barrier; at what point was the autopilot on a collision course? All of those five seconds, or the last 500 ms?

Reading the Tesla response I felt I wasn't getting the whole story, and was getting a whole lot of deflection. That isn't the tone that inspires confidence in me. And that makes me sad because they have done much better in the past. I cannot help but speculate that this time they feel they might have been contributory at least and that a full disclosure would be used against them in a civil suit.


Yeah, it's pretty crafty PR. The CPC must be green with envy, while Uber is still ruing their reputation. Such a difference in conversation; so many people laying the blame squarely on the driver (he is ultimately the PIC, tbh, but Tesla and particularly CalTrans messed up badly), whereas Uber is being raked over the coals when somebody literally walked in front of their car...

Just Say'n

EDIT: While I believe Tesla and the driver are the ultimate source of the accident, I believe CalTrans messed up worse with that barrier. At least Tesla owns up to the limitations and warns drivers to stay in control, but why the hell was that barrier not replaced?! What systemic mediocrity is playing out there that they have road work signs with dates 3 years old on them, and a critical safety barrier is just missing with traffic flowing as usual?!


> somebody literally walked in front of their car..

The analysis I've seen suggested that the self-driving system in the Uber vehicle had 4 or 5 seconds to detect the pedestrian crossing the road in front of them. Driving into a pedestrian on the road in that circumstance is /not/ acceptable for any driver, be they human or automated.

Don't be fooled by the video footage Uber released from the in-car camera - the quality is appalling. Compare it with the dashcam footage taken by people driving the same road at the same time & you'll see that visibility should have been perfect & a human driver would easily have avoided the pedestrian, who was doing something that all of us do every day - crossing the road.


It is acceptable though; most people wouldn't even get involuntary manslaughter, which still requires criminal negligence. People are really spouting off here and projecting their own superiority onto the situation. It's a situation similar to when a child gets left in a car or jumps a zoo fence: "I never take my eye off my kids even for one second, even when I'm sleeping. I would never allow that to happen to my kids. Worst parent/person who has ever walked the earth".

Not only did the pedestrian instigate that, the failsafe (which is supposed to be very simple, to back up complex and more fallible systems) for not getting hit and killed by a car was entirely in the pedestrian's area of responsibility: don't walk into traffic.


At least here, the driver of the vehicle hitting anything (another car, especially a person) is /always/ responsible, because you're supposed to be attentive for unexpected events. And that's how it should be. Why do you think otherwise? If you don't see clearly in front of you because of scarce illumination, you should go slower.

Unless assisted driving is close to near perfection (that is, it becomes a true auto-pilot without the need of assistance), we're going to see much worse numbers of accidents on the streets for assisted-driving cars.

Assisted-driving requires the same, if not more attention, as it essentially becomes like supervising an inexperienced driver that is prone to make stupid mistakes in otherwise normal circumstances. If you ever had a kid, you should be pretty familiar with how stressful this is.

But thanks to the PR spin, the driver is led to think that he can keep less attention, until he pays none at all. Assisted driving is even more boring than regular driving, lowering the attention span.

What could possibly go wrong?


I disagree with your implied belief that cars should have priority on the roads & that in every collision with a pedestrian it's implicitly the pedestrian who is at fault unless it can be proven otherwise.

We all owe each other a duty of care. Excuses about 'she just walked out in front of me' are just that: excuses. The driver in this case had plenty of time to slow down and avoid a collision & simply chose not to do so. That's inexcusable.

The fact that the driver was a pile of self-driving code doesn't change that one iota.


> It is acceptable though [to drive into a pedestrian]

No. Children wander into roads. Drivers have a responsibility to keep an eye out for the unexpected. They are faultless only if they were driving responsibly and had no opportunity to react — neither of which were the case with the Uber.


> While I believe Tesla and the driver are the ultimate source of the accident, I believe CalTrans messed up worse with that barrier.

When reading something like this in sci-fi, I always thought that the stories were over the top and tongue-in-cheek, to make some other point. Now I see they were real.

Companies are allowed to create robots that kill people. Then we sit around here and discuss how much of the killing is on the robot's manufacturer, the robot's owner, the street lights and the victim.


> Then we sit around here and discuss how much of the killing is on the robot's manufacturer, the robot's owner, the street lights and the victim.

You forgot one entity and I think it is a major one.

The people of the world who just stood by and let SDVs on the street without any sort of regulation or testing.

I posted this [1] after the Uber incident. Look at the awesome response from the HN community that I received.

[1] https://news.ycombinator.com/item?id=16696681


Replace "robot" with "car" and it suddenly doesn't sound sci-fi at all.


I think CalTrans owned up to its limitations: you should expect the road to be flat and not end abruptly without appropriate signage. But you are fully responsible for controlling the vehicle at all times and maintaining an appropriate speed - so that when the road does inevitably end you can react safely. At no point are the crash attenuators a guaranteed component that the user of the road can expect to be in place and should rely on to save his life.

That is not to say CalTrans did not break some contractual obligations to replace that infrastructure in a reasonable timeframe, and shouldn't be held responsible in a commercial sense. But they are not in any way liable for the accident; the missing hardware was not a reason to stop traffic, and all such devices are offered on a best-effort, good-to-have basis.


> Such a difference in conversation

In both cases the driver was not supposed to let the car drive itself unsupervised, but only in one was the driver paid to supervise it.


And look who fell for it (TechCrunch):

"However, it seems the driver ignored the vehicle’s warnings to take back control" [1]

Followed by the quote from Tesla that at some point in the history of the ride a warning of some sort was given.

[1] https://techcrunch.com/2018/03/30/tesla-says-fatal-crash-inv...


yeah that's pretty bad. I wonder if they'll correct it.


It's widely known that RADAR systems are not able to detect stationary objects, only moving ones. Actually, they are able to detect them, but just not where they come from, so a barrier in the middle of the road and a traffic sign on the side of the street would look the same to the RADAR system.

But to be honest, I am more worried about the markings on the road than about the autopilot's inability to foresee the accident: https://imgur.com/a/hAeQI

What's wrong with the US road administration? Why does this even look like a driving lane? Where are the obvious markings? It's a very misleading road layout; I am curious how many accidents happen there every year.

This is how I expect this kind of thing to look: https://i.imgur.com/dfZehmd.gif

Given how the road looks, it makes more sense why Tesla is reinforcing the fact that the driver wasn't paying attention to the road.

Edit: Since people are curious about the limitations of the RADAR, the manual of the car mentions this limitation:

"Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead."

You can also read here that Volvo's system faces the same problem: https://www.wired.com/story/tesla-autopilot-why-crash-radar/


> It's widely known that RADAR systems are not able to detect stationary objects, only moving ones. Actually, they are able to detect them, but just not where they come from

That doesn't make any sense. Radar systems do detect stationary objects just fine. In fact, from the point of view of the moving car, almost nothing is stationary.

I wouldn't be surprised if you're correct about the markings on the road, I almost crashed into a temporary barrier driving at night, because there were two sets of lane markings - the old ones, and the ones going around the barrier. Construction workers simply didn't bother to erase the old ones.


It is mentioned in the manual of the car:

"Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead."

You can also read here that Volvo's system faces the same problem: https://www.wired.com/story/tesla-autopilot-why-crash-radar/


This is not a RADAR problem, it’s a world modeling problem.

So you're sending out pulses and listening for echoes, which tells you how far away something is in a particular direction. You correlate subsequent pulses to say whether the object is moving toward you or away. If you have a car 50 m ahead of you every time you ping, everything is good. Now that car suddenly swerves around a car stopped in front of it, and your ping off that object says 60 m. A crash is less likely, your model thinks! The object in front of you is rapidly speeding away! By the time it realizes it isn't, boom, crash.
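
To make that concrete, here is a deliberately naive toy tracker (my own illustration of the failure mode described above, not anything any vendor actually ships): it assumes every new range reading belongs to the object it was already following, so a swerve that exposes a stopped car looks like the lead car pulling away, and braking starts far too late.

    # Deliberately naive lead-vehicle follower: assumes each new range reading
    # comes from the same object it was already tracking.
    dt = 1.0                 # seconds between updates
    ego_speed = 30.0         # m/s (~67 mph)
    # Lead car holds 50 m, swerves at t=3 s, exposing a stopped car at 60 m.
    ranges = [50.0, 50.0, 50.0, 60.0, 30.0]

    for t in range(1, len(ranges)):
        closing_speed = (ranges[t - 1] - ranges[t]) / dt
        if closing_speed <= 0:
            verdict = "not closing -> no action"
        else:
            verdict = f"closing at {closing_speed:.0f} m/s, impact in {ranges[t] / closing_speed:.1f} s"
        print(f"t={t}s  range={ranges[t]:.0f} m  {verdict}")

At t=3 the stopped car registers as the "lead vehicle" suddenly 10 m further away, and only at t=4, one second from impact, does it show up as a closing object.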


That's not how it works, in my limited understanding. Radars don't just detect the positions of objects, but also their relative speed. For this, they take advantage of the Doppler effect. When a pulse gets reflected, its frequency rises if the object is moving towards the radar, and falls if it's moving away.


You could be right, either in this case or other cases. I'm entirely inferring what autos are doing based on their failure case (not detecting stationary cars when the car in front of you swerves around them); that sounds a helluva lot like mistaking the new car for the old.

Again, not saying it's a limitation of RADAR; it sounds like a deficiency in the way they're using it.


Although there's an inherent trade-off between detecting position and speed. Longer pulses give more accurate frequency/speed detection, while short pulses give more accurate time/position detection. This is the same phenomenon that underlies Heisenberg uncertainty.
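
As a rough textbook illustration of that trade-off for a single unmodulated pulse (real automotive radars use FMCW chirps and pulse trains, so treat the absolute numbers as illustrative only): range resolution scales like c*tau/2 while frequency resolution scales like 1/tau, so one gets better exactly as the other gets worse.

    # Single-pulse, textbook numbers: range resolution ~ c*tau/2,
    # Doppler/frequency resolution ~ 1/tau.
    c = 3e8  # speed of light, m/s
    for tau in (1e-7, 1e-6, 1e-5):            # pulse lengths in seconds
        range_res = c * tau / 2               # metres (worse for long pulses)
        freq_res = 1 / tau                    # Hz (better for long pulses)
        print(f"tau={tau:.0e}s  range res ~{range_res:.0f} m  freq res ~{freq_res:.0e} Hz")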


IIRC the radar can detect the object, but it has too many false positives, like a small soda can that has a lucky alignment and shape that make it appear as a big object. So the system ignores the static returns from the radar, because randomly stopping each time a small object is confused with a big object is also dangerous. And the system uses other signals to identify and avoid static objects.


> It's widely known that RADAR systems are not able to detect stationary objects

What the fuck? That's not at all how radar works. It has no such limitation.


My understanding is that RADAR systems can detect stationary objects just fine. What's going on is that in-car radar systems can't differentiate between, say, a metal sign on a gantry or at the side of the road and a parked car on the road, because their spatial resolution is poor to non-existent. Hence in-car RADAR systems (often) ignore radar returns that are non-moving relative to the road surface. Otherwise the car would panic-brake every time you went past a road sign.

The effect of this is that current in-car RADAR systems are great at avoiding collisions with vehicles that suddenly brake to a halt in front of you, whilst at the same time they will happily let you drive at full speed straight into the back of a parked car.

This is why I believe current self driving vehicles (apart from Tesla) are all using LIDAR for object detection.

(The above is my interpretation of my reading on the current state of the art. If anyone knows more detail, please correct me.)
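
Here is a minimal sketch of that interpretation (my own toy illustration, not any vendor's actual logic): gate out returns whose ground-relative speed is near zero, and a parked car in your lane becomes indistinguishable from an overhead gantry sign.

    # Toy example: discard radar returns that are stationary relative to the
    # road surface, which is one way to avoid braking for every road sign.
    ego_speed = 30.0   # m/s, our own speed over the ground

    # (label, closing_speed) -- closing speed measured by the radar, in m/s
    returns = [
        ("overhead gantry sign",   30.0),   # stationary in the world
        ("parked car in our lane", 30.0),   # also stationary in the world
        ("slower car ahead",        8.0),   # moving, closing slowly
    ]

    for label, closing_speed in returns:
        ground_speed = ego_speed - closing_speed   # object's speed over the ground
        if abs(ground_speed) < 2.0:                # "stationary" gate
            print(f"ignore: {label} (treated as roadside clutter)")
        else:
            print(f"track:  {label} (moving object, brake if needed)")

The parked car gets thrown away by exactly the same rule that keeps the car from panic-braking at the gantry sign, which is the failure mode described above.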


The problem for all self-driving car systems is modeling the road. Let’s say you are climbing a gentle hill, and there’s an overpass ahead. The nose of the car is aiming up, so it will see the overpass dead ahead and not moving. This looks the same to the car as driving down a flat road with a semi perpendicular to the travel lanes and dead stopped. This is true for all imaging systems - RADAR, LIDAR and camera.

Same if you are about to enter a right-hand sweeper, pedestrians standing on the sidewalk in front of you would look as if they are directly in your path.

Musk mentioned this in one of his blog posts. Tesla is attempting to build a database of obstacles like this and geotag them, so that the car can filter them out.


> It's widely known that RADAR systems are not able to detect stationary objects, only moving ones.

Uh, what?

I think you're thinking of Doppler radar systems. You can do static scene imaging with radar, no problems (I've written the code to do it, even!).

There's no way in hell a car is going to be using a Doppler radar. In all likelihood, they're using a plain old FMCW system, and it can absolutely detect stationary stuff.


Model S likely uses https://www.bosch-mobility-solutions.com/en/products-and-ser...

(It's for sure a Bosch radar, but maybe a different one)



While I find it deplorable that several companies are essentially using people, although in this case apparently willing ones, as participants in testing autopilot or autonomous driving software, I also find the lack of prominent road markings, signage, or any significant shock-absorbing zone simply astonishing!

It makes me wonder, is that layout actually on purpose? I truly find it hard to believe that a road or traffic authority in this day and age could design that on purpose! It's quite well known that human attention is a tenuous thing, and that the attention given to an object is primarily proportional to its size. But here we have a barrier end that is signed as being about as dangerous as a bit of debris.


What irks me is that they think it's OK for them to advertise it as auto-pilot, with an asterisk pointing to the fine print that says "aw shucks, not actually, haha".


Well all that an actual autopilot (on a plane) does is maintain your velocity and heading, so if anything the name is underselling it.


Autopilots do a bit more than maintain course; they can basically execute the entire flight plan in between takeoff & landing, which includes plenty of heading adjustments. They also increasingly do a fair bit of heavy lifting on landings.


A plane is much easier to navigate because there aren't things like lane-dividers floating about. That's handy, but alas it does not apply to driving a car.

Just because an in-air navigation aid that allows for essentially pilot-free navigation deserving of the name "autopilot" happens to be relatively trivial to build, does not mean any navigation aid of similar complexity for other modes of transport should be called autopilot simply because they're at least as complex. If you're not automatically piloting, the name autopilot is pretty disingenuous, is it not?

Also, planes have a bunch of other navigation aids, including collision avoidance systems and ground-based air traffic control, and pretty obvious stuff like predictable flight paths and huge safety margins between nearby aircraft. An autopilot for an airplane works in that context. I kind of doubt it would work if there were thousands of planes nearby, many of which are on paths separated by distances the vehicles in question cover in tiny fractions of a second, and some of which follow almost invisible rather unpredictable paths (that drunk guy without lights over there...). Context matters.


I think what I'm taking issue with isn't anything about comparing them to self driving cars, it's that you characterize them as "trivial to build". They are not. They coordinate a significant number of life-critical systems, react to minute perturbations in the air stream with microsecond precision, and do so with such a high level of precision that human pilots are often forbidden from overriding them outside of specific situations. I don't know what gave you the impression that these systems were basic or trivial, but they have been at the cutting edge of systems automation for decades.


Compared to a self-driving car system, they are trivial. Your assertion that "they have been at the cutting edge of systems automation for decades" kind of indicates that - this stuff is simple enough to make that it was feasible to do even that long ago.

Trivial is good mind you because it actually works, and works reliably. The same is not yet clear of self-driving cars.


Well, my KAP 140 sure doesn't do any of that stuff.


Absolutely.

It's a marketing term, and they are intentionally committing murder by the act of not calling it "LaneAssist" or some other similarly boring name.


I think at least 50% of the blame should be placed on the maintainer of the road.

The crash attenuator was 'used up' from a previous accident and might have prevented death if it had been in working condition.

I see this all the time where I live. Crashed attenuators or disfigured guard rails go months and months without repair.


If we are moving towards a future with self-driving cars, there needs to be standardization of road markings so that cars can recognize what's going on.

This crash is looking to be the autopilot's fault, due to the strange lane markings.

Have you ever driven down a road where the lines have been ground away and replaced by new ones? Sometimes you can still see both sets; it can be pretty confusing for a human.


Won't help. There will always be surprises. Debris on the road, or snow drifts, or something.

Any system that works by not interpreting the world outside is bound to encounter nasty surprises.


Radars do just fine detecting stationary objects, what they can't do is tell whether the stationary object is blocking the road, overhead, or to the side of a curve in the road. As most radar reports of stationary objects fall in the latter two categories, the adaptive cruise control system filters them out of consideration.


Yeah, there is so much wrong going on with that road. The divider warning sign is also lower than a standard car.


They're marked 'optional' on your diagram. Optional things can be excluded.


"Optional" doesn't mean "exclude it if you feel like it". It means "think about it and decide whether it's worth adding". They can be excluded if doing so doesn't present a danger (e.g. it's a small road).

From the one photograph we've seen of the slip road in question, I'd say the lack of chevrons poses quite a serious danger. It looks just like a normal lane. I've never noticed a slip road branch off like that in the UK, i.e. a big crash barrier up the middle but no chevrons.

(I'm not saying whoever designed this section of road is guilty of some sort of crime, or is somehow responsible for the crashes, I'm just saying it should have been designed better).


>> The driver had received several visual and one audible hands-on warning earlier in the drive

> What they are saying is that while he was driving there was an earlier point, at which the car did not crash, where the car gave visual and audible cues. And you have to ask: why the hell is that relevant?

I see it being relevant that the driver had his follow distance set to the lowest and that several times in the drive prior to the accident he had to be prompted to keep his hands on the wheel. This speaks to a driver that wasn't paying attention like they were supposed to be doing. That would increase the likelihood of him missing an upcoming hazard and failing to respond in time.

Regardless of that, if he had reported an issue in that area to Tesla, why the hell was he not driving it himself at the time? I know that I personally value my life enough that I wouldn't trust AP if it had been veering into the wall repeatedly in one area.


Also, ever notice how in none of the Tesla Autopilot accidents does Tesla find itself guilty?

I don't even read these posts from Tesla anymore because I know each one of them is just justification for how Tesla did nothing wrong, and it's the fault of the stupid human driver for using its Autopilot "wrong" and dying in the process. Meanwhile, Tesla is laughing all the way to the bank selling the usage of Autopilot as one of the primary features of its cars.

It almost reminds me of how Coke markets its sugar-filled soda "Drink as much as you can at every meal and at any event...but you know, in moderation (because that's what the government makes us say with its dietary guidelines)".

Tesla is kind of like that. It tells people to use Autopilot because it's such a great experience and whatnot, but then stuffs its terms and conditions with stuff like "but don't you dare to actually lay back and enjoy the Autopilot driving. You need to watch the car like a hawk, and if you happen to ignore any of our warnings, well it's your stupid-ass fault for ignoring them, even though we wrote everything there in the 11-point font." It's very irresponsible for any company to do stuff like this.

I don't know what the exact process is now, but I wish the authorities would be able to get access to un-tampered logs of the accident after the fact from the self-driving car companies.

I don't care if it's a car company that has "Elon Musk's reputation" (who I like) or Uber's reputation. These accidents, especially the deadly ones, should be investigated thoroughly. No company will ever have the incentive to show proof that leads to itself becoming guilty. It's the job of the authorities to independently (and hopefully impartially) verify those facts.

I'm a big fan of Tesla and its electric vehicles, big fan of Tesla Energy, big fan of SpaceX. But I've always criticized them for the Autopilot because it was obvious to me from day one that they're being very irresponsible with it and putting the blame on humans, when they were marketing it as a self-driving feature (they did for years, even if they don't do it as much now, but they still use the same name), while its system is far from being anywhere close to a complete self-driving system, and thus putting anyone who uses it in danger of dying.


Tesla's huge success in people's mindspace around self-driving vehicles is baffling to me. Correct me if I'm wrong, but these are the self-driving features in Teslas:

- Cruise control maintains speed, will automatically decelerate if a car in front slows down, speed back up if they speed or go away, etc.

- Collision avoidance: the car will automatically emergency brake if it believes a collision is imminent.

- Lanekeeping: the car will stay in the lane without driver attention, although if it doesn't detect your hands on the wheel with enough frequency, the system will disengage.

- The car can park itself.

A quick look at Wikipedia tells me it can also change lanes and be "summoned."

Most of these features (considering cruise control, collision avoidance, and lane-keeping as the core feature set) have been in wide deployment among most car manufacturers since at least 2017, down to even cheap economy cars that people are buying right now without getting on waiting lists or anything of the kind. You can buy a car right now with this core feature set for around ~18k.

Not only that, people seem to dramatically overestimate what kind of auto-pilot Teslas have. Speaking with my doctor about it, he assured me that "you just tell the car where to go, and it goes there." How did Teslas get so hyped up in the common man's imagination?


Yeah, I'm going to call BS on this one. More specifically, on the lanekeeping claim. The current non-Tesla implementations are worse or much more limited at the price point you mentioned (and frankly, at most price points).


All I can tell you is that the Honda lane-keeping does everything I would expect or want, and it's the only one I have a lot of experience with. The only thing that annoys me is that it requires you to touch the wheel every now and then.

Can you elucidate what Tesla lane-keeping does that others don't?


> It's very irresponsible for any company to do stuff like this.

It's very irresponsible for regulation to allow them to do that, and statistically for us to vote for politicians who are not against that kind of behavior. Big companies rarely give a shit about killing people in the process of making money, as long as they are not punished for doing that. (And we, btw, mostly don't give a shit when the people being killed live in other countries, even when the process is far more direct than the first-world problem that interests us here.) People who used to organize those kinds of criminal behavior used to be held responsible for the consequences, but that is rarer now, and the laws in various countries have actually been changed so that people managing companies are far less accountable than before for the illegal and otherwise bad things they order. As for companies being accountable, it is usually punitive money that is orders of magnitude less than the extra profit they made doing their shit, so why would they stop? The risk of the CEO being sent to jail would stop them. Ridiculously small fines against the company, in the astonishingly rare cases where they are even actually fined, will not.


It reminds me of South Park's drink responsibly ad [1]. You could almost replace 'drink' with 'drive' and it would sound like the current autopilot spin.

[1] - https://www.youtube.com/watch?v=j3osSJSGInQ


Totally agree, which is why I refuse to automatically be hyped about everything that Musk does.


>> That is Not acceptable of a self driving system.

Yes. But as we all know autopilot is not the same as self-driving.

>> And no, you can't get out of culpability by claiming statistical superiority.

In fact no one has found Tesla culpable for the accident. You can, however, defend the safety of your system by referring to statistics. (In fact that is the ONLY way to measure safety level.) And your analogy does not hold. It is more like a parachute company saying their parachutes are safer than other brands because they fail less often than the other brands. Which brings me to my final point: even if it turns out Tesla was culpable and negligent in this accident, it would not be a complete judgment if we leave out the inherent risks in car manufacturing and the incidents attributed to other companies.


I also dislike how Musk is synonymous with Tesla, except in cases like this.


The defensiveness, narcissism and lack of true leadership stamped all over this press release scream Musk to me.


I'm pretty sympathetic with PR departments doing what they have to do, in terms of putting out the official boilerplate remorse, the promises to cooperate and keep people up to date, as well as publishing the incident data, and then keeping mum until the investigation is finished.

But Tesla is shoveling in the bullshit pretty early on. Besides the fuzzy and deliberately vague stats (why is the mileage attributed to Tesla's "equipped with Autopilot hardware"?), this graf is just despicable:

> Tesla Autopilot does not prevent all accidents – such a standard would be impossible – but it makes them much less likely to occur. It unequivocally makes the world safer for the vehicle occupants, pedestrians and cyclists.

"unequivocally" has an actual meaning: leaving no doubt. And yet it's used following the paragraph that pretends Tesla's and all U.S. vehicles are directly comparable. I know it sounds silly to focus on that word but someone in Tesla PR thought it needed to be used with the pile of crap numbers. Such blatant and pointless deception -- "significantly" would work just as well for that inane sentence -- feels like a strong a signal to doubt the rest of the info given, as if the evidence weren't already so obviously dubious.


> What they are saying is that while he was driving there was an earlier point at which the car did not crash, where the car gave a visual and audible clues. And you have to ask? Why the hell is that relevant? It isn't.

The vehicle notifies the driver that it's having difficulty operating in the driver-selected Autopilot mode given the conditions. The driver elects to continue using the failing operating mode despite being warned.

I fail to understand how "It isn't" relevant that the driver chose not to take manual control when warned that conditions were not ideal.


Perhaps they don't know (yet?) what exactly went wrong? The car was largely destroyed; I am surprised they retrieved any logs from it at all. They might be able to reconstruct more information about the accident, but unless they have a complete log including video (unlikely), it will take time. As there is an ongoing investigation into the accident, it is understandable that they are not making any statements beyond what they can prove without a doubt at the current point in time.


Sure, but the ONLY reason many of us can figure for mentioning the previous warnings in the way they did is the way it would get interpreted as a sound bite. Could be wrong, but this isn't conspiracy-theory territory. These companies have teams of highly paid communications experts (spin doctors) engaging in social engineering.


Like you’ve predicted, first headline on Techmeme: “Tesla says Autopilot was engaged during fatal Model X crash on Mar. 23, and the driver didn’t respond to warnings to take control of the car before the crash”. Although the linked article has a different title; might’ve been changed.


"And no, you can't get out of culpability by claiming statistical superiority"

Every MBA student needs this tattooed on the back of their hand in order to graduate from now on.


Don't you think being honest about their fuckups in situations involving death would just invite a massive lawsuit? You can't do that in America.


To me there is a stunning lack of compassion and decency in the response to these incidents by both Tesla and Uber.

Uber made sure to point out that the victim of their incident was homeless. Tesla is pointing out how the driver received unrelated cues earlier in the journey. None of this information is relevant. They’re trying to bamboozle the reader in an effort to improve their images at the expense of victims who can’t defend themselves.

I don’t understand why it is so impossible for these companies to act humbly and with a sense of dignity around all this. I don’t expect them to accept responsibility if indeed their technology was not to blame, but frankly that isn’t for them to decide. Until the authorities have done their jobs, why not show remorse, regardless of culpability, as any decent human would?


I agree with the lack of compassion bit -- I think the messaging could have been far more empathetic. However...

> Uber made sure to point out that the victim of their incident was homeless. Tesla is pointing out how the driver received unrelated cues earlier in the journey. None of this information is relevant.

I don't understand how you can equate those first two lines. Uber's observation is clearly irrelevant, but the fact that the Tesla driver received multiple "get your hands back on the wheel" notifications, as close as six seconds before the accident seems very relevant to me.


> the fact that the Tesla driver received multiple "get your hands back on the wheel" notifications, as close as six seconds before the accident

But that isn't what their statement says. It says the victim had his hands off the wheel for six seconds before the crash, and that he received hands-on warnings "earlier in the drive." It does not say that during those crucial six seconds he was being warned. Nor does it explain the fact the car plowed into a barrier.


It seems that it was written in a way that confuses the issue intentionally.


”Nor does it explain the fact the car plowed into a barrier.”

This is critical. No matter how many notices you give, slamming into something at speed is the wrong answer. It’s almost unbelievable that they’re trying to use that as an excuse.


I'm finding it hard to believe the wording on this crucial sentence was so misleading by accident. Upon first reading it, the wording seemed a little off and seemed to me that the two statements should have been two separate sentences instead.


I'm willing (and almost hoping) to be proven wrong on this, but I personally find it hard to believe that the ambiguity in the statement is accidental. A statement of this nature will have been extensively wordsmithed.


Even then, I fail to see how it’s relevant information.

I’m supposed to trust an “autopilot” that warns me six seconds before it slams me into a wall?


In their next statement about the next person they kill: 'The Autopilot clearly both stated in vocal synthesis and displayed on the internal screen: "WARNING - I'll attempt to kill you in six seconds". Yet the driver did not manage to regain control against the rogue AI. He is clearly at fault for having died, and his family should be fined to recover the cost of the investigation.' :P


>the victim had his hands off the wheel for six seconds

Actually it doesn't even say that, it says his hands "were not detected." Which means nothing to me.


@jVinc's post answers this. They didn't state specifically that he was warned six seconds before the crash.


Yeah, but it seems that they may be, statistically speaking, right. The fears that surround self driving cars may be unfounded, and if they are unfounded, then it would be a shame to throw the baby out with the bathwater.

They're right when they say we don't speak of the accidents that didn't happen, and I bet there's a ton of them.

As someone who's been in multiple car crashes, as a passenger, I really no longer want to be at the mercy of human drivers.

Brief reminder of the same discussion, in a different context.

https://www.youtube.com/watch?v=bzD4tIvPHwE

Yet no one, and I really mean no one, complains about computer-assisted landings and takeoffs. I'm not sure why. Lots of passengers even sleep through them.

I guess we've come to realise that the man machine symbiosis works well for flight. But it might take time for this to get engrained into our culture when it comes to driving.


Exactly. It's very important to ensure negative (and unjustified!) publicity does not affect technology perception, adoption, and growth. There are so many Luddites who take every chance to hammer in another nail... It's understandable that any tech pioneer can easily sense being attacked and has to quiveringly rebuff any hysteria attempts.


But is it really unjustified? The statistics we have so far suggest that, with the current state of self-driving cars, they are significantly (about an order of magnitude) more likely to be involved in a fatal crash.

Maybe what we need is to put the brakes on the notion that self-driving cars are nearly ready for prime time and will be all over the road in a few years. A level-headed review of the state of the technology suggests that is extremely unlikely, except perhaps to people hanging around /r/futurology and, to some extent, here on Hacker News, who are perhaps too close to technology to really appreciate its limits in the practical world.

It would probably be for the best if the enthusiasts and the companies rushing headlong towards this new driving paradigm put forth some effort to tone down the hype.


The main thing that would make me convinced of the safety of self driving cars would be if the executives of self driving car companies took criminal liability for accidents caused by their cars or software.

When the cost of them killing somebody by having a deficient QA process is minimal (and just affects shareholders), then the cars will be rolled out before they are ready.

What will happen to liability at present is still unclear but if the self driving car companies are successful in dodging most of it (and it appears they are trying) then self driving cars are going to kill a lot of people unnecessarily - quite possibly more than humans would.


I agree with your intention, though isn't it inevitable self driving cars will kill people even if they are 1000x safer than humans?

If a CEO has many millions of cars running his software, there will always be deaths.

In that case, nobody would want to take on that responsibility so nobody would build self driving cars.

Therefore humans will still drive cars and there will be 1000x more deaths, most preventable with self driving cars.

This is pretty much exactly the Trolley problem in ethics and philosophy. Read up on it :)


They don't need to be 1000x safer than humans. If they are 2x safer than humans, that is plenty for me to support using them. The issue is determining how safe they in fact are, to appropriately high confidence, before putting people at risk.

That is something that "tech" companies will in all likelihood never be able to do, since they seem uniquely designed to betray and lose the public's trust at the earliest opportunity.

"Tech" companies have become far too culturally accustomed to dishonest hype, "growth hacking", marketing BS, PR spin, etc. for their statements to be trusted, especially when there are life-and-death implications.


It's not always true. Waymo is a clear exception: their cars were at current Tesla's level 5 years ago, but they chose not to rush into production. Many Google engineers left the team because they wanted to see their work in public, but Larry clearly wanted safe L4 driving before putting it on the roads.

(of course Google did lots of "ungoogly" things in the past few years, but at least they got Waymo right)


It's not the trolley problem because as yet we don't know how many people would be killed by rolling out self driving cars and, more to the point, the speed of rollout and safety are not necessarily inversely proportional.

By characterising self driving cars as a "trolley problem" you have presented what is known as a "false dilemma". Maybe you could read up on that ;)


Good point! Are there any publicly accessible crash/death statistics for cars with a similar price/safety/demographic profile as Teslas?

I feel that is the only way we could isolate for self driving and not other factors.


If individual engineers and executives were made to be legally responsible for potential fatal accidents resulting from technology, progress would grind to a halt.


I don't think that's the case. However, if the people who build it refuse to take responsibility for their defects killing people, then I think that's the clearest possible signal that the technology isn't viable.

Obviously this would also mean that compensation would have to go up for self driving car engineers and execs to compensate for the risk. That's fair, and, if that cost makes a self driving car venture uneconomic - again, not viable. And that's okay.

People take responsibility for other people's lives every day when they get in a car. I don't think the idea that Uber engineers should do the same should be considered particularly controversial.


I mean, in cases where they are criminally negligent, they are already responsible for accidents.

The tricky part is figuring out what constitutes negligence.


> https://www.youtube.com/watch?v=bzD4tIvPHwE

What exactly am I looking at? The video itself says it's a computer-piloted takeoff, the YouTube title says it's a computer-piloted landing, and the top-voted comment says it's a human-piloted fly-by.

edit: it appears to be this flight: https://en.m.wikipedia.org/wiki/Air_France_Flight_296


Could you source the assertion that Uber (as opposed to the local police) has indicated the victim in the crash was homeless?

(That would be an inexcusable and cynical deflection.)


Uber never came out mentioning the homelessness of the person. The police did. This is typical HN banter on Uber, thanks for calling it out.


That's because Uber never released an official statement on the accident. At least, I can't find one on their website.


The claim that Uber said the victim was homeless is utter bullshit. It was the police chief; Uber never mentioned it once, and their response was completely appropriate.


It's fashionable to hate Uber now so people don't object to lies told against them.


This was clearly vetted (or written) by a lawyer who is interested in changing the narrative for his future wrongful death defense rather than exercising empathy.


It reads like it was written by a company that has a lot of scientists working there, is all. They are simply explaining what happened according to the system log files before the incident and then reminding the public that it is still safer.


> They are simply explaining what happened according the system log files

I must have missed the part where they explained why their autopilot system drove into a concrete divider at high speed. Please can you quote the explanation you're referring to.


>I don’t understand why it is so impossible for these companies to act humbly and with a sense of dignity around all this.

Because humility requires you to admit wrongdoing, which from a legal standpoint is not advisable in a public statement.


What you are saying here is: "I really wish the PR person who wrote this chose words that would cause readers to feel as though the company empathized more with the victim."

Ultimately companies don't have feelings, their employees do. PR statements and the words chosen (unless written by an individual like the CEO) don't somehow make a company into a person and don't reflect the feelings of their employees: they are a crafted tool to create a certain outcome by instilling thoughts in the mind of the reader and/or providing legal cover.


> I don’t understand why it is so impossible for these companies to act humbly and with a sense of dignity around all this.

I would venture a guess this is an outcome of the "sue for everything" legal environment in the US. Any statement that could be construed as anything other than "we did nothing wrong" could be seized on by shareholders suing the company for securities fraud and demanding class action status.


Really not much additional info here aside from the number of warnings the driver had. Also, the standard "Tesla is 10x safer" metrics that get pulled out each time a crash gets sensationalized.

What I think they fail to address, especially in this case, is that the autopilot did something a human driver who was paying attention would never do. Autopilot does a great job of saving people from things even a wary driver would miss, much less a negligent one, but the fatal accidents in the statistics are not from fully attentive drivers missing the fact that there's a concrete barrier with yellow safety markings directly in their path and hitting it head-on for no good reason (as opposed to, say, evasive action because of another driver, or a stupid last-minute "oh crap that's my exit" move).

I want autopilot to succeed, and I want Tesla (and Musk) to succeed, and for the sake of their public image they have to realize that this isn't an average accident statistic, a lapse in attention or evasive maneuvering. It's a car that seemingly plowed right into a concrete barrier while still under complete control. That's not a mistake a healthy human will make.


> the autopilot did something a human driver who was paying attention would never do.

Then how did the crushed barrier get crushed before the Tesla hit it? Clearly, the stretch of road is unsafe enough to trick humans drivers (and, clearly, Tesla should improve).


I'm obviously not privy to the details of the prior crash, but I'm pretty sure any healthy human would not deliberately drive into the barrier.

The most likely scenarios I can think of would be not paying attention and drifting into the barrier, attempting to avoid a car merging into the lane (and not paying enough attention), or being struck by another car and being forced into the barrier.


I think the answer is that the path into the barrier looks just like an actual lane, and that this was enough confusion that a human driver apparently had made the same exact mistake a week earlier. https://imgur.com/a/iMY1x

https://techcrunch.com/wp-content/uploads/2018/03/screen-sho...


If you are building an autopilot system, I’d expect 1) identify black/yellow hazard signs and 2) don’t hit them to be basic features by version 0.5. Seems like a big miss to run straight into that.


Maybe SDCs already do this, but it seems like the agent should keep track of things that look like potential lanes, and if a potential lane turns out to be a non-lane (has a barrier in it), flag the area that looked like the start of a lane so future SDCs know to avoid it.


I wonder if the problem is the uneven pavement leading up to it. The white solid line starts close to it, then drifts left. In certain lighting conditions, the division in the pavement might seem more prominent than the white line, which I could imagine tricking a human or computer to follow it instead. That wouldn't cause a head-on crash itself, but would cause the car to get very close. I could then imagine a computer trying to correct its mistake by moving RIGHT (because that area looks like a lane) instead of left and going head-on into it.

Though of course that doesn't explain why it didn't recognize the barrier as something it should avoid at all costs. Unless perhaps something to the left confused it and made it think there was an obstacle there, too, thus causing it to think it was going to crash regardless. If so, maybe it did try to brake at the last second?


This divider looks rather unforgiving. Is it common for dividers in US to be just plain concrete blocks without anything to prevent rapid deceleration?


If you look at the imgur link you'll see an impact barrier that had previously been crushed and not reset or repaired.


I don't see it.


It's in the techcrunch image, not the imgur. The barrier with the yellow face is designed to collapse.

https://techcrunch.com/wp-content/uploads/2018/03/screen-sho...


That still looks pretty risky to me - I compared a similar junction here in the UK near to where I live (on the M90 in Fife) and it has about 40 impact attenuation barrels in a triangle.


Those are also present in many locations in the US, although they seem to be phasing them out so I think they are considered old tech. I don't know the performance specs, but new cars are a lot smaller and safer than old cars - the same is probably true here too.


that image also makes me wonder if tesla's system is designed to identify knocked-over cones. It's one of those things that would be easy to code for, but also easy to overlook.


It's quite hard if you take into account that machine learning needs 10,000x the training data that a human needs, and Tesla is sold in 30+ countries (and used in many more).

I think 3d mapping the surrounding world correctly using multiple cameras (what humans are doing) is more generalizable.


CalTrans doesn't diagonally stripe spaces like that? It would seem that they should.


There has been a lot of repaving work on 101, so these may well not be the final markings.


I don't think that area has been repaved in a while.


The lanes aren't very well indicated, but given how visible the divider is, I don't think either an autopilot or a driver should end up driving into it.

Reminds me of a scene in "La grande vadrouille" where a motorcyclist is killed on a mountain road because he was following the dotted line and the painter had gone to the side of the road to take a break.

Also, that's an immense pothole!


> That’s an immense pothole!

... well you certainly aren’t from southeast Michigan, that’s for sure.


Does seem like a mistake that a LIDAR-based system would have been less likely to make, compared to AI doing video classification.


You would think they would at least add pylons leading up to the divider: https://ops.fhwa.dot.gov/publications/fhwahop13007/images/fi...

The divider is from the fast lane no less.


Except those solid white lines mark an uncrossable area separating the HOV-specific (during commute hours) flyover from the 101 lane, which is well documented for nearly a mile preceding that point.


Reasonable, lawful, absolutely. People break the law, people get drunk, people get sleepy.

There's a spot just like this in Houston on 610 East. HUUUUUUGE flyover as the HOV lane spends about a half mile merging into the left lane of 610 (nice fat fast freeway, they need the runway). A guy had thought the flyover would be a safe place to park his car and wait for a tow. Safe enough that he was sitting on the hood of his car. Sure as shit the Challenger in front of me got confused, cut into the flyover thinking it was a lane, hit the disabled car, and sent the guy flipping up into the air a good 10, 15 feet. Fucked up accident.

Anyway point is after the accident I was talking to the driver of the Challenger, trying to calm him down (kept saying "holy shit I fucking killed that guy!") and other than shock he was sober as a duck. He got blood-tested and everything, clean. In court he said he just got confused and thought it was a lane.

LONG STORY SHORT human error mang, human error. Still not sure why a car was able to do it.

They have those same white lines leading into intersections and people cross that shit all the time. People speed.


The entire thing is two solid white lines on either side, which are pretty strong indicators that you shouldn't pass into that area in the first place, diagonal stripes or not.


It also indicates that you shouldn't pass out of that area.

The vehicle should have at the very least stopped short of the road obstruction. But we don't yet know what led up to it being in that lane; I was hoping the article in the OP would have taken us through it. Instead I'm getting the feeling this was a catastrophic software failure. Without more transparency on the issue, despite what Tesla may prefer I don't think I have any choice but to feel uncomfortable.


If you step a bit further back, it does not look so much like an actual lane: https://teslamotorsclub.com/tmc/attachments/streetv1-png.288...

A piece of software that mistakes that wedge for a lane in broad daylight has no business being deployed on the road, in my opinion.


It's not that straightforward. It's two adjacent lanes which get separated: the left one becomes an exit, the lane next to it continues, and the space between them keeps growing. The two lanes in question have a solid line between them before they separate, to indicate you can't cross between them.


I read that that driver was arrested for DUI.



  any healthy human would not deliberately drive into the barrier
But plenty have driven negligently into barriers.

The worst such crash I've ever seen was barely 2 miles from there, on northbound 85 near Fremont Ave. There is a soundwall that comes to a connection point where a wall segment is directly perpendicular to the freeway. For some reason, the guardrail had a gap there almost exactly the width of a vehicle.

A few years ago, a vehicle veered off the right shoulder and perfectly threaded the gap, into the wall at full speed. It was compressed to maybe 5 feet long.



At that particular point, what I see the most is people deciding that they either need to exit or not and abruptly crossing from one HOV lane to the other. Lots of crazy shit. So I'm not sure your scenarios have much relevance -- how many times have you driven through this intersection? I've been through it maybe a thousand times since it was built.


I’ve seen people back up in the breakdown lane on the expressway because they missed their exit. But that was Beijing, and all drivers are hyper sensitive on the lookout for that kind of stuff.


On my first drive into Bucharest a guy drove on the wrong side of the divided highway to take a left into the lot of a building supplies company right in front of my car. Scared the crap out of me.

https://www.google.com/maps/@44.436552,25.9597006,18z


Can confirm, have just driven for 2+ hours on the streets of Bucharest and there’s no way for an AI-like car to drive in here for more than 2 blocks without getting involved into an accident. Unless said car becomes sentient, but even in that case it would still have to “guess” some of the crazy stuff the other drivers on the road may be up to or to expect that a barely marked road-construction thingie will suddenly show up just in front of the vehicle while driving at 60 kph. And Bucharest has got nothing on a city like Istanbul.


It's the only place where I got honked at by a car on the sidewalk while walking on the sidewalk, and when I didn't get out of the way fast enough he proceeded to attempt to nudge me with his car.


Previous driver was drunk.


It's a somewhat easy accident to have in heavy traffic if the lanes aren't marked clearly and you don't see the barrier because of cars ahead of you. Not a factor in the Tesla crash (they had a clear line of sight) or in the original crash (which was a DUI).


Distraction, misjudged timing, sudden health problem, etc


«the autopilot did something a human driver who was paying attention would never do»

I'm sure that out of the 1.25 million annual automobile-related deaths, plenty of drivers were paying attention and still did stupid things similar to this accident.


Using global statistics greatly skews your argument. While all these deaths are tragic, many of them could be prevented through law and regulation of human drivers.


> That's not a mistake a healthy human will make.

Humans do exceedingly stupid things all the time because they stop paying attention, even momentarily (or subconsciously).

We put big lights on the back of cars that light up when they brake. And yet even a driver looking directly at a huge object with two lights, one that rapidly grows larger right in front of them, doesn't always avoid a collision. Or even a chain of collisions. I don't get on the road much, and yet even I've seen a ton of accidents that are baffling and can only have been the result of a driver not paying attention for a bit.

I'm reminded of how in quite a few places removing signs and lights actually improved safety because it forced drivers to stay aware instead of 'driving on autopilot', so to speak.

I think that's the real issue here. The more we outsource our attention to a machine, the more important it is that said machine does MUCH better than we humans do. Especially if a mistake can be deadly.

But I wouldn't be surprised if, indeed, technically this accident could've happened just as easily with a non-autopilot car where the driver had a little 'micro-sleep', got distracted by something in his field of view, mistakenly thought he was in a lane, and didn't notice the (let's be fair) ridiculously bad markers that were the only way to tell that part of the 'gray' stuff in front of him was in fact a wall of concrete.

I mean, just look at the image: https://imgur.com/a/iMY1x . Half of what makes the barrier stand out is the shadow!

All that said I might sound more argumentative than I am. I do agree with most of your comment.


"the autopilot did something a human driver who was paying attention would never do"

"this isn't an average accident statistic, a lapse in attention or evasive maneuvering. "

"That's not a mistake a healthy human will make."

Almost every driver thinks they're significantly better than average. Few are.

If a human driver's lapse of attention causes a similar crash, how much less of a tragedy is it just because we can less ambiguously blame the victim?

My opinion is that the statistics compare just fine.


> Almost every driver thinks they’re significantly better than average. Few are.

Actually, about half of drivers are better than average.



Hence “about” and not exactly half, unless you can prove some weird distribution for drivers’ abilities. Go on, I’ll wait.


It's easy to provide a counterpoint if you ignore words (significantly). Few people are significantly above average.


I’m not sure I agree. Humans run into highly visible barriers all the time.


> human driver who was paying attention

true. here’s the location (post #62)

https://teslamotorsclub.com/tmc/threads/model-x-crash-on-us-...

but the average person in the US anyway is not a good driver. i very often see people get surprised by the lane ending at that exact spot. they should put a rumble strip leading up to it.

So, my question is, is it fair to compare AP to a human driver paying attention or is it more fair to compare it to an average driver.

I mean that just for the sake of argument. In this specific case, the NTSB or someone should ban AP. It is so obvious what's about to happen there that an AP should do what a driver should do, which is take the ramp whether it's a wrong turn or not. So many a-holes try to squeeze in at the last second (oops) when they should just take the damn exit or miss the damn exit, as the case may be. What AP did here is what a poor and panicked driver would do, and that's just not acceptable.


Did they release information about the path the car took? It could just as well have simply gone down the median as if it were a lane of traffic.


I want electric cars to succeed as I am long since bored of breathing in vehicle fumes. Automated driving is cool sci-fi tech, but it holds nowhere near the same sense of necessity, as far as I am concerned.

So, I thought it wasn't all that clever in the first place to try and marry the risks of getting electric cars into the market with the risks of telling the extremely wealthy that they didn't need to hold the damn steering wheel.


>That's not a mistake a healthy human will make.

This statement is meaningless circular logic. You can handwave away any incident with a human driver by saying he wasn't "healthy".

A human pilot has intentionally driven an airplane full of passengers into the ground with full control because he wanted to kill himself. The airline believed he was "healthy".


But this wasn't a self-driving vehicle. It isn't capable of dealing with all situations, and it was warning the driver for 6 seconds to take control, which didn't happen.


No, it didn't warn at all. It warned prior to this, which was just put in the text to trick people into thinking it warned.


I thought the auto pilot warned the driver to take control of the vehicle or is that incorrect? There is probably a lot more we don't know and it would be premature to place the blame on the auto pilot and Tesla. This isn't a lvl 5 system. It still requires the drivers attention and that is something we need to consider ( he obviously didn't have his hands on the wheel).


Technically tesla's statement says he was warned at some point to take control during his drive. That could have been 5 seconds into his commute, or it could have been 5 seconds before he hit the barrier. He may have taken control at that time, and then reengaged it later. He may or may not have been warned to take control before hitting the barrier.

Tesla's statement is not clear on this at all.


>or it could have been 5 seconds before he hit the barrier.

If that was the case, they would have said so. Their vagueness here is telling.


The statement mentions "hands on warning" which just means you need to keep your hands on the wheel. That's very different than giving control back to the driver.


Thank you.. you are correct. That’s an important detail I wrote incorrectly.


it's a mistake that healthy humans make all the time, which is why those crash barriers exist in the first place, and why that particular crash barrier was damaged by a previous crash.


You’re missing the point. The crucial fact according to OP is that the car did something that a fully aware driver would not ever do. It’s at least worth acknowledging that.


It sounds like peering into the black box of autopilot and anthropomorphising some parts of it but not others. Why not say "autopilot sometimes drives into barriers and also humans sometimes drive into barriers."? In both cases the driver/autopilot failed for whatever reason.


It's only anthropomorphizing in the sense that we're discussing two systems capable of propelling automobiles around. Presumably, they're both in the business of avoiding concrete barriers.

If one system is steering the vehicle into things that the other system would, in most conditions, reliably avoid, it bears some discussion.


That's disingenuous.

The only reason a human drives into that barrier is if they swerve to avoid something or are otherwise not properly paying attention (texting, etc.).

There's no good reason for the autopilot to have hit that barrier.

The mental gymnastics going on in these comments attempting to absolve Tesla of any responsibility are truly next-level.


A swerve is not the only possibility, as the area between lanes looks like a lane. My father has done the same thing while driving at night. He insisted he was in a lane while I yelled repeatedly that he wasn't. Luckily he changed lanes / entered a real lane just before we crashed.


Yes, bad visibility could explain it, but Tesla themselves say it was 9.30am and the autopilot had "five seconds and 150 meters of unobstructed view of the concrete divider".


Looking at a photograph of the barrier, a poorly designed median also explains it.


Nobody is trying to absolve Tesla. They're just refuting a dumb and easily rebutted claim that no human driver would make the same mistake. There is evidence that crashes happened there before, and it would be easy to imagine someone crashing there because they were focusing on trying to find a spot to merge while driving in what appeared to be a standard lane.

Again, not absolving Tesla, just being realistic about the capabilities of human drivers.


"Fully aware" drivers (true Scotsmen) as proposed here statistically do not exist, or at least do not compose the vast majority of drivers. So it is kind of a meaningless comparison to draw.


It's not a True Scotsman in this context. The point is that humans tend to crash into things they're not fully aware of, short of self-harm.

Either:

A) The autopilot could not see a concrete barrier in its path.

B) More likely, and as the story reports, it WAS aware of the danger but didn't do anything.

Either case is at least worth discussing, no?


In which bucket would you put the possibility that the computer observed evidence of the barrier, but the evidence that there was no barrier -- the prior that barriers don't exist in lanes of traffic -- was too strong?


Bucket A. Which could be articulated more generally as: failure to capture the world as it really is.


"tend NOT to", did you mean?


"Fully aware drivers" are all the many drivers who manage to navigate that area without driving into the barrier, minus the ones who didn't mean to get off there or meant to get off and didn't. The latter groups are aware enough to drive safely through the area but still spacing out a little.


I've been a fully aware driver unsure of what to do when seeing a surprise in my lane. I only hit the brake because a passenger started screaming. I was so confused to see a pedestrian on the freeway.


When in doubt slam brake.


Just a little nit-pick - «When in doubt, quick glance in rear-view mirror, then slam brake.»


It turns out brains can freeze in a panic.


Drivers crash at this particular point all of the time.


In keeping with the "Autopilot" terminology, this was "Controlled Flight Into Terrain".

Tesla demonstrates their usual abuse of statistics. 1.25 fatalities per 100 million miles is the average across all types of terrain, conditions, and cars.

The death rate in rural roads is much higher than in urban areas. The death rate on highways is much lower than average. The death rate with new cars is much lower than average.

The autopilot will not engage in the most dangerous conditions. This alone will improve the fatality rate, even if the driver does all the work.

Tesla cars are modern luxury cars. They appear to have done a great job building a well constructed, safe, car. This does not mean their autopilot is not dangerous.
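
To make the mix-adjustment point concrete, here is a minimal sketch. The road-type rates are the kind of per-100M-mile figures quoted elsewhere in this thread, and the mileage splits are pure assumptions on my part; the point is only that a fleet driven mostly on highways should be compared against a highway-heavy baseline, not the all-roads average:

  # Fatalities per 100M miles by road type (illustrative figures)
  rates = {"rural": 2.62, "urban": 0.70, "highway": 0.54}

  # Hypothetical mileage splits -- assumptions, not measured data
  us_mix    = {"rural": 0.30, "urban": 0.45, "highway": 0.25}
  tesla_mix = {"rural": 0.05, "urban": 0.35, "highway": 0.60}

  def blended(mix):
      # Exposure-weighted average fatality rate for a given mileage split
      return sum(rates[k] * share for k, share in mix.items())

  print(f"all-roads style baseline : {blended(us_mix):.2f} per 100M miles")
  print(f"highway-heavy exposure   : {blended(tesla_mix):.2f} per 100M miles")

With those made-up splits, the blended rates come out around 1.2 vs 0.7 per 100M miles before Autopilot does anything at all.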


Note: Tesla is comparing the fatality rate of autopilot-equipped cars with the average accident rate. They are NOT only counting miles where autopilot is used, they are counting all miles driven by their cars. It could be that Tesla drivers are inherently safer than the general population, or that the cars themselves are much safer. What they are not doing (at least with the data) is making a statistical claim about autopilot safety.


That's a very good point. But it raises a new question: why didn't Tesla compare the safety record of Teslas with Autopilot against Teslas without Autopilot?


They have done so. As mentioned in their previous blog post, Tesla's crash rate was reduced by 40% after introduction of Autopilot based on data reviewed by NHTSA.


I don’t know. It could be because the comparison isn’t favorable, or it could be because it’s very hard (as your parent post states) to generate a valid comparison that controls for driver age and skill, driving conditions, and vehicle safety.

What I do know is that Tesla have previously published a comparison of the safety of their cars before and after the autopilot feature was made available, and there is a statistically significant improvement of nearly 60%. This study should factor out most of those caveats, since it’s the same cars, same drivers and same roads before and after the feature release. http://bgr.com/2017/01/19/tesla-autopilot-crash-safety-stati...


Exactly. Tesla drivers will be older, more affluent, and I bet, generally safer drivers. I'm sure an insurance company could easily debunk those statistics.


> This does not mean their autopilot is not dangerous.

It also doesn't mean that it is dangerous. That bag of doritos that you ate, it might also be dangerous. Let's use facts and not vague worrisome complaints. So if you want to talk about the higher danger of rural roads, please give a number, and then give a number for tesla, or estimate one. Don't just say "urgh".


Outside of the Tesla marketing department, conditional probability has been known for 300 years. Comparing the average fatality rate to the fatality rate of a modern car with Autopilot is an egregious abuse of statistics. How can anyone defend this practice ?!

Just as an example, click on California (it is mostly consistent all over) on the map [1]. California has a lower average fatality rate of 1.01 per 100 million miles.

Rural roads have 2.62 fatalities per 100 million miles. Urban roads have 0.70 fatalities per 100 million miles.

The fatality rate is much lower on highways; according to Wikipedia [2], freeways have 3.38 fatalities per billion km (0.54 per 100 million miles, if I managed the conversion).

[1] https://cdan.nhtsa.gov/stsi.htm

[2] https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...
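
(Sanity-checking that unit conversion, since it's easy to slip a factor of 1.6:

  km_per_mile = 1.60934
  per_billion_km = 3.38   # Wikipedia freeway figure cited above
  per_100m_miles = per_billion_km * 100e6 * km_per_mile / 1e9
  print(f"{per_100m_miles:.2f} fatalities per 100 million miles")   # ~0.54

So 0.54 per 100 million miles checks out.)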


I mean, if Tesla is comparing their luxury car's death rates to motorcycles, which have 10x-50x the fatality rate of passenger cars, then, yes, it's dangerous.

Remember, there are several very normal cars models that have had ZERO deaths: https://www.nbcnews.com/business/autos/record-9-models-have-...

These autopilot cars are death traps.


I wonder if it’s possible to determine whether those cars have zero death rates in part because someone who would buy that model is less likely to get in a fatal accident in the first place—whether because of who they are, or how they were advertised, or whatever. I imagine a minivan owner as a very cautious driver…then again, I have been in the passenger seat when a friend was doing drifts in his mom’s minivan, so your mileage may vary.


I've had 2 of those cars - the Sorento and the Odyssey, and they both feel really tactile/agile to drive, as well as being chill, relaxing rides.


Yes, one might quip that males under the age of 25 wouldn't be seen dead in any of those vehicles.


Who doesn’t wanna sit in an Audi A4 Quattro?


I'd say the report is misleading.

Fatalities are rare events. They chose cars with 100,000 registered vehicles or more. The categories these vehicles belong to experienced around 30 deaths per million cars, so cars that have 100,000 registrations would expect 3 deaths. You could easily get a few models with 0 deaths just as a matter of chance.

I don't think these results necessarily show these cars are inherently safer than other cars in the same category.
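
As a rough illustration of the "matter of chance" point, assuming deaths follow a Poisson distribution (the usual approximation for rare events):

  from math import exp

  expected_deaths = 3.0            # ~30 deaths per million cars * 100k registrations
  p_zero = exp(-expected_deaths)   # Poisson probability of observing zero deaths
  print(f"P(zero deaths by luck) ~= {p_zero:.1%}")   # about 5%

With dozens of models over the 100k-registration threshold, a handful of "zero-death" models is roughly what you'd expect even if none of them had any real safety edge.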


I completely agree; Tesla’s marketing modus operandi is to fudge statistics. In my view, they are consistently on the wrong side of the marketing puffery/misrepresentation borderline.

I do take issue, however, with your claim that Teslas are “well constructed” - they are not. They are poorly designed, poorly constructed vehicles that feel and look cheap. Test one side by side with, say, a BMW 3-Series or a Volkswagen Golf and the difference in production quality will be palpably obvious.


The author was talking about road safety from the car's rigorous construction. There seems to be little doubt about that non-subjective issue.


I saw the argument that a traditional motor’s noise level hides rattling sounds and similar subtle car defects, which a fully-electric vehicle’s silence does not cover.

(I haven’t seen scientific comparisons, though.)


Traditionally at least, the only thing one would hear inside a Mercedes Benz going at 60mph would be the tick of the clock.


> "Controlled Flight Into Terrain"

Thanks, I never knew this term before. Seems like a dark humour gem - equally funny and terrifying.


Agreed. I don't know why they think they get away with it. Is there any independent agency providing numbers about Tesla claims?


It is only dangerous if it is worse than the average driver.


No, it is still killing people, but average people are culpable. Big difference.


You realize that killing fewer people than average is not dangerous; it is actually safer. Without understanding this basic logic, I am not sure what else to say.


What percent of all driving is done on rural roads?


for varying values of dangerous.


You're saying autopilot refuses to run on rural roads?

