Take for instance: > The driver had received several visual and one audible hands-on warning earlier in the drive...
What this means is that during this incident there were no visual or audible warnings that a crash was about to happen.
What they are saying is that while he was driving there was an earlier point, at which the car did not crash, where the car gave visual and audible warnings. And you have to ask: why the hell is that relevant? It isn't. They are stating it as a fact because they know some people will incorrectly claim that the car warned the driver prior to the crash.
For all their eagerness to explain what happened, why aren't they talking about what actually went wrong? Why did the car drive straight into a barrier? If their claim is that it was caused by the driver not having his hands on the wheel for 5 seconds, then they need to fold the company and give up trying to create self-driving cars. That is not acceptable behavior from a self-driving system.
The fact that the barrier was damaged intensified the accident, but that does not in any way excuse their system driving straight into it. And no, you can't get out of culpability by claiming statistical superiority. That's like a gang member trying to get out of jail after killing a rival gang member because his gang statistically kills fewer people. Tesla has put a product in the hands of consumers; when it kills those consumers, they need to step up to the plate and be honest about their fuckups, not just blame the drivers, blame the infrastructure, and point to statistics.
Quite. Tesla is deliberately comparing their modern vehicle design & wealthy driver base (statistically one of the safest cohorts) with the entirety of the U.S. driving population. This is bad statistics: we expect Tesla vehicles to have far fewer accidents per mile than the mean US vehicle, because Teslas are expensive modern vehicles with (relatively) wealthy drivers who can afford to keep them maintained & will themselves be in better health than the mean US driver.
Don't tell us how safe Teslas are compared to the entire US driving population: tell us how safe they are compared to equivalent vehicles from other manufacturers. My working assumption is that Tesla doesn't do that because it would be far less flattering to the 'A Tesla is totally safe, honest' PR boosterism that runs through every Tesla press release.
I'm having a hard time finding an exact number comparison, but:
- Tesla claims 1 fatality per 320M miles
- A 2015 study of 2011 model year vehicles showed, for instance, that the Volvo XC90 had never been involved in a driver fatality (1)
- There are about 10K 2011 XC90s in the US (2)
- Avg. US drivers cover 13,400 miles/year (3)
- 4 years studied * 10K vehicles * 13,400 avg miles -> 536,000,000 fatality-free miles
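As a sanity check, the arithmetic above (all figures approximate, as cited in the list):

```python
# Back-of-the-envelope comparison using the figures cited above.
tesla_miles_per_fatality = 320_000_000  # Tesla's claimed 1 fatality / 320M miles
xc90_fleet = 10_000                     # ~2011-model-year XC90s in the US
avg_miles_per_year = 13_400             # average annual mileage, US driver
years_studied = 4

xc90_fatality_free_miles = years_studied * xc90_fleet * avg_miles_per_year
print(xc90_fatality_free_miles)         # 536000000

# Zero fatalities in ~536M XC90 miles vs Tesla's claimed 1 per 320M miles:
print(xc90_fatality_free_miles / tesla_miles_per_fatality)  # 1.675
```

So the fatality-free XC90 mileage is roughly 1.7x Tesla's claimed miles-per-fatality figure, which is where the "could be twice as deadly" hypothesis below comes from.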
This is obviously super hand-wavy, but I think it's fair to state as a hypothesis that Tesla could be twice as deadly as the safest high-end vehicles. You'd need to design some experiment to attempt to falsify it before feeling confident this is correct.
However - the same is true for 2009, 2010, 2011, 2012 model XC90s, and they are just one of 10 models that had no fatalities in those four model years in the study. I just picked that year and model to get one data point for mileage to compare; the other model years of XC90 have similar sales, so the same applies.
So, while you're right, I don't think it's an entirely useless comparison; it indicates there could be something here, which could then be used to state a falsifiable hypothesis, something like: "Similar cohort drivers would be safer in <one of these ten models> than in a Tesla."
Given the low number of incidents, you probably want to be looking at something other than fatalities to get a large enough pool of incidents to reduce the impact of pure chance; at least in addition to actual fatalities. It's not going to be an easy comparison, that's for sure.
They were originally advertised as sports cars, really. 0 to 60 faster than any other car, etc.
You're going to have to explain to me how the wealth of a driver has any bearing on their ability to pilot a self-driving car.
Secondly, Tesla themselves make it clear that their self-driving can't be relied upon to drive the car unsupervised: the driver has to be paying attention at all times in case the Tesla self-driving code decides to do something catastrophically stupid. Hence the same factors that correlate with driver attentiveness in the ordinary population will also affect the Tesla population. Wealth correlates with health, which correlates with better reaction times & general fitness.
The Tesla autopilot tends to turn itself off precisely in situations where driving is difficult and complex - which in turn implies that these human factors will remain just as important as they are in the general driving population.
(Of course it may simply be that the reason wealthy people have a lower death rate on the roads per mile is that they can afford cars that are safer in crashes; the above is my supposition, not evidentially backed.)
I agree with your general point, but you can't just chain two correlations like that; it isn't always going to work when there are other confounding factors (and there always are). Wealth also correlates with age, which anticorrelates with reaction times and general fitness.
And not all markets are equally competitive. Just because insurance rates differ doesn't mean the risks necessarily differ the same way. And let's not forget that while insurers are professional risk assessors, that doesn't mean they're infallible, or immune to other distortions (e.g. pass-the-buck subprime-mortgage kinds of issues).
But at some point between wealthy and rich the person hires a (non-wealthy) driver and now they're back to square one.
All other factors taken into account, driving has become much, much safer in recent years, and is now safer than it ever was. When you start to use technology to eliminate driver errors in that self-same error-prone cohort, there's no reason to think that the safety dividend won't be even more pronounced than it is today.
If you want to link crash rates to wealth, you would also have to account for countries like Germany, which has a similar median wealth figure to the US but a vastly lower motorway death rate.
It seems you can't both say 1) Tesla drivers are less likely to have accidents because they are wealthy and less error-prone, and 2) extending self-driving technology that eliminates basic errors to less-wealthy and/or more error-prone drivers won't have a net benefit on driver safety.
Clearly /if/ Tesla self-driving code is safer than the average driver, then extending that to the general driving population would be a net good. But if the apparent safety is an artefact of the statistics, due to Tesla cars themselves being intrinsically safer, just as practically every other modern vehicle is safer than one built 20 years ago, then extending self-driving to every vehicle might not be a net safety benefit.
Likewise, if the driver cohort is different from the general population (and we know that it is), & that cohort is required to drive the vehicle precisely at the times the self-driving code can't cope, which are probably the times when driving places the most demand on the driver, then that difference in driver cohort matters: the statistical difference in death rates could easily be a result of differences in driver populations.
You can't tell from the data as given, & frankly the fact that Tesla keeps repeating the aggregate figures in their PR is suspicious.
(NB. that paper you link simply underlines my point - In both countries, the lowest wealth quartile has a vastly greater death rate on the roads than the rest of the population. It doesn't matter whether Germany is worse or better than the US. If Tesla draws disproportionately from higher income deciles, then we should a priori expect them to have a far lower death rate than the mean. Quoting the mean as a benchmark to compare the Tesla death rate against is outright misleading.)
People who buy a super expensive car have more to lose than some dude in a 10 year old hoopty.
Maintenance standards vary. We can presume that big brother Tesla maintained the car as they would have castigated the driver for not doing so. They are also the only game in town.
All sorts of constraints are correlated with wealth that put you in way more risky situations if you're poorer.
It's not exactly controversial to claim that lower income correlates to crime though.
DUI is a tough one, people of all socioeconomic backgrounds like to get drunk, and people of all socioeconomic backgrounds like to drive. Off the cuff, I might guess that the numbers would show DUI to be the most egalitarian crime, actually.
We do? Is there a cite for that? It seems to me that drivers of new luxury vehicles are disproportionately aggressive and unsafe, actually. Has this been studied?
If you're going to make an argument about bad statistics, you need to practice good science yourself.
This seems intuitively true, but are there any stats on car age and accident frequency? Also, where is the line on "old"? Vehicles made in the last 5-10 years will share most safety features, with all but a small minority of newer cars having something cutting-edge. I'll admit I had been thinking of fleet age in terms of the state of maintenance, but safety features are a fascinating angle as well. And the article you link in another comment was a fascinating read too, thanks!
Then there is the actual design of the vehicles, cars that were made in the last couple of years fare much better in collisions with cars that are older because they are a lot stronger, and are better able to channel the energy of an impact to areas away from the passengers. Many newer vehicles will for instance shift the engine under the passenger compartment in a frontal impact.
Older airbags had 'best before' dates associated with the igniters being somewhat unstable and that reduced the certainty of them going off as they got older.
Then, finally, the bodies themselves, once cars start to rust you can patch them up but they won't be the same afterwards. There is quite a bit of redundancy in the body but it definitely doesn't help if there is significant corrosion.
In fact I personally doubt (i.e. I have no evidence and am not actually making this argument, because it would be "bad statistics") this is true if you compare Teslas to the BMWs being sold into the same demographic. Those "safe new vehicles" you're positing include a whole lot of crossovers and minivans driven in much safer ways.
It doesn't. But autopilot is only supposed to be used on the highway. Not in New York City. Not on dirt roads. On the highway. That's the easiest place to make a system like that work - adaptive cruise control and a simple lane follower go a long way there (I hope Tesla has something a lot more advanced than that). To compare stats on their limited use autopilot to the whole country is truly disingenuous.
One important nit: it isn't self driving. It's sometimes self driving.
About cohorts: how about this: young (thus inexperienced and more careless) drivers generally can't afford Teslas. So yes, wealth is an indicator for how accident prone someone might be.
But that’s not even the point. The point is that the numbers Tesla gave are meaningless. Tesla’s stats would only be meaningful if you compared them to similarly sized/priced cars of the same age cohort in the same market segment. The fact that Tesla compares its luxury high-priced large sedans to all cars, and not to its own segment, is itself very telling. They have those stats, and they would use them if they were compelling.
(Last posting from me today.)
At least they are responsive, seemingly open, and seemingly empathetic. They are obviously handling things in a business first manner as corporations will but it hardly feels as inhuman as people seem to be making it out to be.
Name your expectations up front, please.
It may be easy to casually update software and break builds, but when human lives are at risk a healthy dose of conservatism is appropriate.
But you're right -- it's probably unfair to expect a SV company to prioritize anything over profits (we saw that with Facebook, right?)
Tesla says their cars have driven past that spot 85,000 times so far. So you may be asking for some testing that's better at uncovering problems than 85,000 white-noise test instantiations are. Or you may be asking to test, and test, and improve until the software meets the developers' intentions: if their intention is to improve safety by 40% (or 400%, or 40000%) then you may be saying they shouldn't release a version that's improved over the baseline by 20% (or 200%, or 2000%), because that still doesn't realise the design.
Seems a bit absolutist to me. You could equally well make the opposite absolutist argument: that having in-house software that's better than the current release is putting people's lives in danger by inaction/negligence/whatever. That fixing a bug or improving a feature confers a moral obligation to release immediately, lest lives be put in danger that the new code would have avoided.
You could argue that Tesla should be compared to luxury vehicles (which have more safety features), but even if you just compare their crash rate against U.S. drivers of cars and light trucks, the Tesla Autopilot driver fatality rate is almost four times higher than typical passenger vehicles...
Why people are still using their shit is beyond me. Why they are still allowed to sell it under this marketing (the name they gave it, the mismatch between the stated restrictions and actual practice), I also can't understand: the authorities should regulate it better.
Presumably Tesla/the car is telling the truth as it saw it. I don't think you're suggesting they're falsifying the logs.
> Which is a terrible way to treat a customer
I'd assume that Tesla sells you a vehicle, not PR or legal services that protect you in case of an accident. Your expectation that Tesla should take the blame is unreasonable by any standard.
I mean, look, it makes sense Tesla has to defend itself, especially legally, but there's a big difference between pulling out the logs when you need to for the lawyers and the regulators... and making a public blog post blaming the customer.
Whereas with Tesla, they're leveraged out to here with stock valuation based on promises and projections, and if their key tech is perceived as seriously flawed, they're not going to exist as a company in the long term.
I do think it's reasonable to ask of them to either keep completely silent; or to be completely open - perhaps by giving complete access to all data to a truly independent third party, and not picking which facts they think are fit for public consumption, and which are not.
I admire what Tesla has achieved - but this is a company under considerable pressure. They may well go bankrupt; and every incident matters. They are under terrible pressure to massage those facts to make them look good. I can't help but treat any statement like this no differently to any other ad - it's easily possible to be misleading even when there is some possible interpretation that isn't an intentional untruth.
Everybody knows people aren't good at sitting around doing nothing and maintaining focus. Distracted driving isn't a new risk. So the question is: are the safety benefits to a driver aid really greater than the risk they cause by encouraging distraction? Even if the driver aid catches 99% of all risks, that may yet be a net negative. Personally, I don't think it's reasonable for a system that encourages distracted driving to then claim it was the driver's fault they were distracted. No amount of small print or big fat warnings can ever excuse ignoring reality. If the autopilot is contributing to driver inattention, then the driver's inattention is no longer just the driver's fault. Not that I'm at all sure that's what's happening here, but there are some ominous indications, that is for sure. Without an autopilot, who takes their hands off the wheel for more than 6 seconds on a drive like that?
I have a friend who investigates planet crashes. He’s told me that between pilot error, human error, hubris, flying when conditions are too dangerous, and poorly trained or sloppy mechanics, only about 1% really can’t be classified as being caused by humans. He said the stats are a bit skewed because, like any organization that reports on human performance, they’ve been influenced to be a bit “flexible” when assigning blame.
Given that plane autopilots are roughly comparable with Tesla's autopilot, one wonders how many plane crashes have occurred while the plane was on autopilot, and what fraction of those crashes were attributed to pilot error.
I want this job. Although I'm pretty sure all planet crashes would be the result of gravity.
They omit what speed and what lane the car was travelling in. They omit the time at which the car was no longer behaving in the way in which it had (when did it start heading for the crash barrier? At what point did the wheels closest to the barrier move over the line? etc)
They have previously mentioned that their cars have navigated this spot many times in the past (over 200 as I recall). Did they note the number with and without the crash barrier intact? How many times was the car in the exit lane, and how many times in the lane to the right of the exit lane? How many times have they navigated past it with a damaged crash barrier?
The driver had driven this route before; it was his commute to work. Had he driven it with this car on that route in autopilot before? Did it work then? How many times? How many seconds were there between when the driver's car was doing something right (in lane, etc.) and doing something wrong (straddling the lane)? The response indicates there were 5 seconds of visibility coming up to the crash barrier; at what point was the autopilot on a collision course? All five of those seconds, or just the last 500 ms?
Reading the Tesla response I felt I wasn't getting the whole story, and was getting a whole lot of deflection. That isn't the tone that inspires confidence in me. And that makes me sad because they have done much better in the past. I cannot help but speculate that this time they feel they might have been contributory at least and that a full disclosure would be used against them in a civil suit.
EDIT: While I believe Tesla and the driver are the ultimate source of the accident, I believe CalTrans messed up worse with that barrier. At least Tesla owns up to the limitations and warns drivers to stay in control, but why the hell was that barrier not replaced?! What systemic mediocrity is playing out there, that they have road-work signs with dates 3 years old on them, and a critical safety barrier is just missing with traffic flowing as usual?!
The analysis I've seen suggested that the self-driving system in the Uber vehicle had 4 or 5 seconds to detect the pedestrian crossing the road in front of them. Driving into a pedestrian on the road in that circumstance is /not/ acceptable for any driver, be they human or automated.
Don't be fooled by the video footage Uber released from the in-car camera - the quality is appalling. Compare it with the dashcam footage taken by people driving the same road at the same time & you'll see that visibility should have been perfect & a human driver would easily have avoided the pedestrian, who was doing something that all of us do every day - crossing the road.
Not only did the pedestrian instigate that; the failsafe (which is supposed to be very simple, so as to back up complex and more fallible systems) against getting hit and killed by a car was entirely in the pedestrian's area of responsibility: don't walk into traffic.
Unless assisted driving is close to near perfection (that is, it becomes a true auto-pilot without the need of assistance), we're going to see much worse numbers of accidents on the streets for assisted-driving cars.
Assisted driving requires the same attention, if not more, as it essentially becomes like supervising an inexperienced driver who is prone to making stupid mistakes in otherwise normal circumstances. If you've ever had a kid, you should be pretty familiar with how stressful this is.
But thanks to the PR spin, the driver is led to think that he can pay less attention, until he pays none at all. Assisted driving is even more boring than regular driving, lowering the attention span further.
What could possibly go wrong?
We all owe each other a duty of care. Excuses about 'she just walked out in front of me' are just that: excuses. The driver in this case had plenty of time to slow down and avoid a collision & simply chose not to do so. That's inexcusable.
The fact that the driver was a pile of self-driving code doesn't change that one iota.
No. Children wander into roads. Drivers have a responsibility to keep an eye out for the unexpected. They are faultless only if they were driving responsibly and had no opportunity to react — neither of which were the case with the Uber.
When reading something like this in sci-fi, I always thought the stories were over the top and tongue-in-cheek, to make some other point. Now I see they were real.
Companies are allowed to create robots that kill people. Then we sit around here and discuss how much of the killing is on the robot's manufacturer, the robot's owner, the street lights, and the victim.
You forgot one entity and I think it is a major one.
The people of the world who just stood by and let SDVs onto the street without any sort of regulation or testing.
I posted this after the Uber incident. Look at the awesome response I received from the HN community.
That is not to say CalTrans did not break some contractual obligations to replace that infrastructure in a reasonable timeframe, and shouldn't be held responsible in a commercial sense. But they are not in any way liable for the accident, the missing hardware was not a reason to stop traffic and all such devices are offered on a best effort, good to have, basis.
In both cases the driver was supposed not to let the car drive itself unsupervised, but only in one was the driver paid to supervise it.
"However, it seems the driver ignored the vehicle’s warnings to take back control" 
Followed by the quote from Tesla that at some point in the history of the ride a warning of some sort was given.
But to be honest, I am more worried about the markings on the road than about the autopilot's inability to foresee the accident: https://imgur.com/a/hAeQI
What's wrong with the US road administration? Why does this even look like a driving lane? Where are the obvious markings? It's a very misleading road layout; I'm curious how many accidents happen there every year.
This is how I expect this kind of thing to look: https://i.imgur.com/dfZehmd.gif
Given how the road looks, it makes more sense why Tesla is reinforcing the fact that the driver wasn't paying attention to the road.
Edit: Since people are curious about the limitations of the RADAR, the manual of the car mentions this limitation:
"Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead."
You can also read here that Volvo's system faces the same problem: https://www.wired.com/story/tesla-autopilot-why-crash-radar/
That doesn't make any sense. Radar systems do detect stationary objects just fine. In fact, from the point of view of the moving car, almost nothing is stationary.
I wouldn't be surprised if you're correct about the markings on the road, I almost crashed into a temporary barrier driving at night, because there were two sets of lane markings - the old ones, and the ones going around the barrier. Construction workers simply didn't bother to erase the old ones.
So you’re sending out pulses and listening for echos, which tells you how far away something is in a particular direction. You correlate subsequent pulses to say whether the object is moving toward you or away. If you have a car 50 m ahead of you every time you ping, everything is good. Now that car suddenly swerves around a car stopped in front of it, and your ping off that object says 60 m. A crash is less likely, your model thinks! The object in front of you is rapidly speeding away! By the time it realizes it isn’t, boom, crash.
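Here's a toy version of that failure mode. This is purely illustrative with made-up numbers and a deliberately naive range-differencing tracker; I have no knowledge of Tesla's actual tracking code:

```python
DT = 0.1  # seconds between radar pings (assumed value)

# Range (m) to the nearest object ahead, one sample per ping.
# Pings 0-2: lead car holding steady at 50 m. At ping 3 the lead car
# swerves away, revealing a stopped car at 60 m, which we then close
# on at 20 m/s.
ranges = [50.0, 50.0, 50.0, 60.0, 58.0, 56.0, 54.0]

# Naive tracker: closing speed from consecutive range samples
# (positive = closing, negative = opening).
closing_speeds = [(prev - curr) / DT for prev, curr in zip(ranges, ranges[1:])]
print(closing_speeds)
# At the swerve ping the tracker computes -100 m/s: "the object ahead
# is racing away", so no threat. One ping later reality reasserts
# itself at +20 m/s closing, with far less time left to brake.
```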
Again, not saying it's a limitation of RADAR; it sounds like a deficiency in the way they're using it.
What the fuck? That's not at all how radar works. It has no such limitation.
The effect of this is that current in-car RADAR systems are great at avoiding collisions with vehicles that suddenly brake to a halt in front of you, whilst at the same time they will happily let you drive at full speed straight into the back of a parked car.
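A minimal sketch of that behavior, under my reading of how these ACC stacks discard stationary returns (the function name and threshold are invented for illustration, not from any real system):

```python
def is_stationary_clutter(closing_speed_mps, ego_speed_mps, tol_mps=1.0):
    """A radar return closing at roughly the ego vehicle's own speed is a
    stationary object (road sign, bridge, parked car); ACC stacks commonly
    discard these to avoid constant false alarms from roadside clutter."""
    return abs(closing_speed_mps - ego_speed_mps) < tol_mps

ego = 30.0  # m/s, roughly 67 mph

# A lead car braking hard ahead closes slower than our own speed: tracked.
print(is_stationary_clutter(closing_speed_mps=12.0, ego_speed_mps=ego))  # False

# A parked car dead ahead closes at exactly our own speed: filtered out,
# which is exactly the "drive at full speed into a parked car" failure.
print(is_stationary_clutter(closing_speed_mps=30.0, ego_speed_mps=ego))  # True
```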
This is why I believe current self driving vehicles (apart from Tesla) are all using LIDAR for object detection.
(The above is my interpretation of my reading on the current state of the art. If anyone knows more detail, please correct me.)
Same if you are about to enter a right-hand sweeper, pedestrians standing on the sidewalk in front of you would look as if they are directly in your path.
Musk mentioned this in one of his blog posts. Tesla is attempting to build a database of obstacles like this and geotag them, so that the car can filter them out.
I think you're thinking of Doppler radar systems. You can do static scene imaging with radar, no problems (I've written the code to do it, even!).
There's no way in hell a car is going to be using a Doppler radar. In all likelihood, they're using a plain old FMCW system, and it can absolutely detect stationary stuff.
(It's for sure a Bosch radar, but maybe a different one)
It makes me wonder, is that layout actually on purpose?
I truly find it hard to believe that a road or traffic authority in this day and age could design that on purpose!
It's quite well known that human attention is a tenuous thing, and that the attention given to an object is primarily proportional to its size. But here we have a barrier end that is signed as being about as dangerous as a bit of debris.
Just because an in-air navigation aid that allows for essentially pilot-free navigation deserving of the name "autopilot" happens to be relatively trivial to build, does not mean any navigation aid of similar complexity for other modes of transport should be called autopilot simply because they're at least as complex. If you're not automatically piloting, the name autopilot is pretty disingenuous, is it not?
Also, planes have a bunch of other navigation aids, including collision avoidance systems and ground-based air traffic control, and pretty obvious stuff like predictable flight paths and huge safety margins between nearby aircraft. An autopilot for an airplane works in that context. I kind of doubt it would work if there were thousands of planes nearby, many of which are on paths separated by distances the vehicles in question cover in tiny fractions of a second, and some of which follow almost invisible rather unpredictable paths (that drunk guy without lights over there...). Context matters.
Trivial is good mind you because it actually works, and works reliably. The same is not yet clear of self-driving cars.
It's a marketing term, and they are intentionally committing murder by the act of not calling it "LaneAssist" or some other similarly boring name.
The crash attenuator was 'used up' from a previous accident and might have prevented death if it was in working condition.
I see this all the time where I live. Crashed attenuators or disfigured guard rails go months and months without repair.
This crash is looking to be the autopilot's fault, due to the strange lane markings.
Have you ever driven down a road where the lines were ground away and replaced by new ones? Sometimes you can still see both sets; it can be pretty confusing even for a human.
Any system that works without truly interpreting the world outside is bound to encounter nasty surprises.
From the one photograph we've seen of the slip road in question, I'd say the lack of chevrons poses quite a serious danger. It looks just like a normal lane. I've never noticed a slip road branch off like that in the UK, i.e. a big crash barrier up the middle but no chevrons.
(I'm not saying whoever designed this section of road is guilty of some sort of crime, or is somehow responsible for the crashes, I'm just saying it should have been designed better).
> What they are saying is that while he was driving there was an earlier point, at which the car did not crash, where the car gave visual and audible warnings. And you have to ask: why the hell is that relevant?
I see it being relevant that the driver had his follow distance set to the lowest and that several times in the drive prior to the accident he had to be prompted to keep his hands on the wheel. This speaks to a driver who wasn't paying attention like he was supposed to be. That would increase the likelihood of him missing an upcoming hazard and failing to respond in time.
Regardless of that, if he had reported an issue in that area to Tesla, why the hell was he not driving it himself at the time? I know that I personally value my life enough that I wouldn't trust AP if it had been veering toward the wall repeatedly in one area.
I don't even read these posts from Tesla anymore because I know each one of them is just justification for how Tesla did nothing wrong, and it's the fault of the stupid human driver for using its Autopilot "wrong" and dying in the process. Meanwhile, Tesla is laughing all the way to the bank selling the usage of Autopilot as one of the primary features of its cars.
It almost reminds me of how Coke markets its sugar-filled soda "Drink as much as you can at every meal and at any event...but you know, in moderation (because that's what the government makes us say with its dietary guidelines)".
Tesla is kind of like that. It tells people to use Autopilot because it's such a great experience and whatnot, but then stuffs its terms and conditions with stuff like "but don't you dare to actually lay back and enjoy the Autopilot driving. You need to watch the car like a hawk, and if you happen to ignore any of our warnings, well it's your stupid-ass fault for ignoring them, even though we wrote everything there in the 11-point font." It's very irresponsible for any company to do stuff like this.
I don't know what the exact process is now, but I wish the authorities would be able to get access to un-tampered logs of the accident after the fact from the self-driving car companies.
I don't care if it's a car company that has "Elon Musk's reputation" (who I like) or Uber's reputation. These accidents, especially the deadly ones, should be investigated thoroughly. No company will ever have the incentive to show proof that leads to itself becoming guilty. It's the job of the authorities to independently (and hopefully impartially) verify those facts.
I'm a big fan of Tesla and its electric vehicles, big fan of Tesla Energy, big fan of SpaceX. But I've always criticized them for Autopilot, because it was obvious to me from day one that they're being very irresponsible with it and putting the blame on humans. They marketed it as a self-driving feature (they did for years, and even if they don't do it as much now, they still use the same name), while the system is far from being anywhere close to a complete self-driving system, thus putting anyone who uses it in danger of dying.
- Cruise control maintains speed, will automatically decelerate if a car in front slows down, speed back up if they speed or go away, etc.
- Collision avoidance: the car will automatically emergency brake if it believes a collision is imminent.
- Lanekeeping: the car will stay in the lane without driver attention, although if it doesn't detect your hands on the wheel with enough frequency, the system will disengage.
- The car can park itself.
A quick look at Wikipedia tells me it can also change lanes and be "summoned."
Most of these features (taking cruise control, collision avoidance, and lane-keeping as the core featureset) have been in wide deployment among most car manufacturers since at least 2017, down to even cheap economy cars that people are buying right now, without waiting lists or anything of the kind. You can buy a car with this core featureset right now for around $18k.
Not only that, people seem to dramatically overestimate what kind of auto-pilot Teslas have. Speaking with my doctor about it, he assured me that "you just tell the car where to go, and it goes there." How did Teslas get so hyped up in the common man's imagination?
Can you elucidate what Tesla's lanekeeping does that others' don't?
It's very irresponsible for regulation to allow them to do that, and stupid of us to vote for politicians who are not against that kind of behavior. Big companies rarely give a shit about killing people in the process of making money, as long as they are not punished for doing it. (And we, by the way, mostly don't give a shit when the people being killed live in other countries, even when the process is far more direct than the first-world problem that interests us here.) People who organized these kinds of criminal behaviors used to be held responsible for the consequences, but that is rarer now, and the laws in various countries have actually been changed so that the people managing companies are far less accountable than before for the illegal and otherwise harmful things they order. As for companies being accountable, it is usually through punitive fines that are orders of magnitude smaller than the extra profit they made doing their shit, so why would they stop? The risk of the CEO being sent to jail would stop them. Ridiculously small fines against the company, in the astonishingly rare cases where they are even actually fined, will not.
 - https://www.youtube.com/watch?v=j3osSJSGInQ
Yes. But as we all know autopilot is not the same as self-driving.
>> And no, you can't get out of culpability by claiming statistical superiority.
In fact no one has found Tesla culpable for the accident. You can however defend the safety of your system by referring to statistics. (In fact that is the ONLY way to measure safety level.) And your analogy does not hold. It is more like a parachute company saying their parachutes are safer than other brands because they fail less often than the other brands. Which brings me to my final point. Even if it turns out Tesla was culpable and negligent in this accident, it would not be a complete judgment if we leave out the inherent risks in car manufacturing and the incidents attributed to other companies.
But Tesla is shoveling in the bullshit pretty early on. Besides the fuzzy and deliberately vague stats (why is the mileage attributed to Tesla's "equipped with Autopilot hardware"?), this graf is just despicable:
> Tesla Autopilot does not prevent all accidents – such a standard would be impossible – but it makes them much less likely to occur. It unequivocally makes the world safer for the vehicle occupants, pedestrians and cyclists.
"unequivocally" has an actual meaning: leaving no doubt. And yet it's used following the paragraph that pretends Tesla's and all U.S. vehicles are directly comparable. I know it sounds silly to focus on that word, but someone in Tesla PR thought it needed to be used with the pile of crap numbers. Such blatant and pointless deception -- "significantly" would work just as well for that inane sentence -- feels like a strong signal to doubt the rest of the info given, as if the evidence weren't already so obviously dubious.
The vehicle notifies the driver that it's having difficulty operating in the driver-elected Autopilot mode given the conditions. The driver elects to continue using the failing operating mode despite being warned.
I fail to understand how "It isn't" relevant that the driver chose not to take manual control when warned that conditions were not ideal.
Every MBA student needs this tattooed on the back of their hand in order to graduate from now on.
Uber made sure to point out that the victim of their incident was homeless. Tesla is pointing out how the driver received unrelated cues earlier in the journey. None of this information is relevant. They’re trying to bamboozle the reader in an effort to improve their images at the expense of victims who can’t defend themselves.
I don’t understand why it is so impossible for these companies to act humbly and with a sense of dignity around all this. I don’t expect them to accept responsibility if indeed their technology was not to blame, but frankly that isn’t for them to decide. Until the authorities have done their jobs, why not show remorse, regardless of culpability, as any decent human would?
> Uber made sure to point out that the victim of their incident was homeless. Tesla is pointing out how the driver received unrelated cues earlier in the journey. None of this information is relevant.
I don't understand how you can equate those first two lines. Uber's observation is clearly irrelevant, but the fact that the Tesla driver received multiple "get your hands back on the wheel" notifications, as close as six seconds before the accident seems very relevant to me.
But that isn't what their statement says. It says the victim had his hands off the wheel for six seconds before the crash, and that he received hands-on warnings "earlier in the drive." It does not say that during those crucial six seconds he was being warned. Nor does it explain the fact the car plowed into a barrier.
This is critical. No matter how many notices you give, slamming into something at speed is the wrong answer. It’s almost unbelievable that they’re trying to use that as an excuse.
I’m supposed to trust an “autopilot” that warns me six seconds before it slams me into a wall?
Actually it doesn't even say that, it says his hands "were not detected." Which means nothing to me.
They're right when they say we don't speak of the accidents that didn't happen, and I bet there's a ton of them.
As someone who's been in multiple car crashes, as a passenger, I really no longer want to be at the mercy of human drivers.
Brief reminder of the same discussion, in a different context.
Yet no one, and I really mean no one, complains about computer-assisted landings and takeoffs. I'm not sure why. Lots of passengers even sleep through them.
I guess we've come to realise that the man-machine symbiosis works well for flight. But it might take time for this to get ingrained into our culture when it comes to driving.
Maybe what we need is to put the brakes on the notion that self-driving cars are nearly ready for prime time and in a few years will be all over the road. A level-headed review of the state of the technology suggests that is extremely unlikely. Except for people hanging around /r/futurology, and to some extent here on Hacker News, who are perhaps too close to technology to really appreciate its limits in the practical world.
It would probably be for the best if the enthusiasts and the companies rushing headlong towards this new driving paradigm put forth some effort to tone down the hype.
When the cost of them killing somebody by having a deficient QA process is minimal (and just affects shareholders), then the cars will be rolled out before they are ready.
What will happen to liability at present is still unclear but if the self driving car companies are successful in dodging most of it (and it appears they are trying) then self driving cars are going to kill a lot of people unnecessarily - quite possibly more than humans would.
If a CEO has many millions of cars running his software, there will always be deaths.
In that case, nobody would want to take on that responsibility so nobody would build self driving cars.
Therefore humans will still drive cars and there will be 1000x more deaths, most preventable with self driving cars.
This is pretty much exactly the Trolley problem in ethics and philosophy. Read up on it :)
That is something that "tech" companies will in all likelihood never be able to do, since they seem uniquely designed to betray and lose the public's trust at the earliest opportunity.
"Tech" companies have become far too culturally accustomed to dishonest hype, "growth hacking", marketing BS, PR spin, etc. for their statements to be trusted, especially when there are life-and-death implications.
(of course Google did lots of "ungoogly" things in the past few years, but at least they got Waymo right)
By characterising self driving cars as a "trolley problem" you have presented what is known as a "false dilemma". Maybe you could read up on that ;)
I feel that is the only way we could isolate for self driving and not other factors.
Obviously this would also mean that compensation would have to go up for self driving car engineers and execs to compensate for the risk. That's fair, and, if that cost makes a self driving car venture uneconomic - again, not viable. And that's okay.
People take responsibility for other people's lives every day when they get in a car. I don't think the idea that Uber engineers should do the same should be considered particularly controversial.
The tricky part is figuring out what constitutes negligence.
What exactly am I looking at? The video itself says it's a computer-piloted takeoff, the YouTube title says it's a computer-piloted landing, and the top-voted comment says it's a human-piloted fly-by.
edit: it appears to be this flight: https://en.m.wikipedia.org/wiki/Air_France_Flight_296
(That would be an inexcusable and cynical deflection.)
I must have missed the part where they explained why their autopilot system drove into a concrete divider at high speed. Please can you quote the explanation you're referring to.
Because humility requires you to admit wrongdoing, which from a legal standpoint is not advisable in a public statement.
Ultimately companies don't have feelings, their employees do. PR statements and the words chosen (unless written by an individual like the CEO) don't somehow make a company into a person and don't reflect the feelings of their employees: they are a crafted tool to create a certain outcome by instilling thoughts in the mind of the reader and/or providing legal cover.
I would venture a guess this is an outcome of the "sue for everything" legal environment in the US. Any statement that could be construed as anything other than "we did nothing wrong" could be seized on by shareholders suing the company for securities fraud and demanding class action status.
What I think they fail to address, especially in this case, is that the autopilot did something a human driver who was paying attention would never do. Autopilot does a great job of saving people from things even a wary driver would miss, much less a negligent one, but the fatal accidents in the statistics are not from people who are fully watching the road missing the fact that there's a concrete barrier with yellow safety markings directly in their path, and hitting it head-on for no good reason (e.g. evasive action because of another driver, or a stupid last-minute "oh crap that's my exit").
I want autopilot to succeed, and I want Tesla (and Musk) to succeed, and for the sake of their public image they have to realize that this isn't an average accident statistic, a lapse in attention or evasive maneuvering. It's a car that seemingly plowed right into a concrete barrier while still under complete control. That's not a mistake a healthy human will make.
Then how did the crushed barrier get crushed before the Tesla hit it? Clearly, the stretch of road is unsafe enough to trick humans drivers (and, clearly, Tesla should improve).
The most likely scenarios I can think of would be not paying attention and drifting into the barrier, attempting to avoid a car merging into the lane (and not paying enough attention), or being struck by another car and being forced into the barrier.
Though of course that doesn't explain why it didn't recognize the barrier as something it should avoid at all costs. Unless perhaps something to the left confused it and made it think there was an obstacle there, too, thus causing it to think it was going to crash regardless. If so, maybe it did try to brake at the last second?
I think 3d mapping the surrounding world correctly using multiple cameras (what humans are doing) is more generalizable.
Reminds me of a scene in "La grande vadrouille" where a motorcyclist is killed on a mountain road because he was following the dotted line and the painter had gone to the side of the road to take a break.
Also, that's an immense pothole!
... well you certainly aren’t from southeast Michigan, that’s for sure.
The divider is from the fast lane no less.
There's a spot just like this in Houston on 610 East. HUUUUUUGE flyover as the HOV lane spends about a half mile merging into the left lane of 610 (nice fat fast freeway, they need the runway). A guy had thought the flyover would be a safe place to park his car and wait for a tow. Safe enough that he was sitting on the hood of his car. Sure as shit the Challenger in front of me got confused, cut into the flyover thinking it was a lane, hit the disabled car, and sent the guy flipping up into the air a good 10, 15 feet. Fucked up accident.
Anyway, point is, after the accident I was talking to the driver of the Challenger, trying to calm him down (he kept saying "holy shit I fucking killed that guy!") and other than shock he was sober as a duck. He got blood-tested and everything, clean. In court he said he just got confused and thought it was a lane.
LONG STORY SHORT human error mang, human error. Still not sure why a car was able to do it.
They have those same white lines leading into intersections and people cross that shit all the time. People speed.
The vehicle should have at the very least stopped short of the road obstruction. But we don't yet know what led up to being in that lane, I was hoping the article in the OP would have taken us through it. Instead I'm getting the feeling this was a catastrophic software failure. Without more transparency on the issue, despite what Tesla may prefer I don't think I have any choice but to feel uncomfortable.
A piece of software that mistakes that wedge for a lane in broad daylight has no business being deployed on the road, in my opinion.
any healthy human would not deliberately drive into the barrier
The worst such crash I've ever seen was barely 2 miles from there, on northbound 85 near Fremont Ave. There is a soundwall that comes to a connection point where a wall segment is directly perpendicular to the freeway. For some reason, the guardrail had a gap there almost exactly the width of a vehicle.
A few years ago, a vehicle veered off the right shoulder and perfectly threaded the gap, into the wall at full speed. It was compressed to maybe 5 feet long.
I'm sure that out of the 1.25 million annual automobile-related deaths, plenty of drivers were paying attention and still did stupid things similar to this accident.
Humans do exceedingly stupid things all the time because they stop paying attention, even momentarily (or subconsciously).
We put big lights on the back of cars that light up when they brake. And yet a driver looking directly at a huge object with two lights, one that rapidly grows larger right in front of them, does not always avoid a collision. Or even a chain of collisions. I don't get on the road much, and yet even I've seen a ton of accidents that are baffling and can only have been the result of a driver not paying attention for a bit.
I'm reminded of how in quite a few places removing signs and lights actually improved safety because it forced drivers to stay aware instead of 'driving on autopilot', so to speak.
I think that's the real issue here. The more we outsource our attention to a machine, the more important it is that said machine does MUCH better than we humans do. Especially if a mistake can be deadly.
But I wouldn't be surprised if, indeed, technically this accident could've happened just as easily by a non-autopilot car where the driver had a little 'micro-sleep', got distracted by something in his field of view, mistakenly thought he was on a lane and didn't notice the (let's be fair) ridiculously bad markers that were the only way to tell that part of the 'gray' stuff in front of him was in fact a wall of concrete.
I mean, just look at the image: https://imgur.com/a/iMY1x . Half of what makes the barrier stand out is the shadow!
All that said I might sound more argumentative than I am. I do agree with most of your comment.
"this isn't an average accident statistic, a lapse in attention or evasive maneuvering. "
"That's not a mistake a healthy human will make."
Almost every driver thinks they're significantly better than average. Few are.
If a human driver's lapse of attention causes a similar crash, how much less of a tragedy is it just because we can less ambiguously blame the victim?
My opinion is that the statistics compare just fine.
Actually, about half of drivers are better than average.
true. here’s the location (post #62)
but the average person in the US anyway is not a good driver. i very often see people get surprised at the lane ending at that exact spot. they should put a rumble strip leading up to it.
So, my question is, is it fair to compare AP to a human driver paying attention or is it more fair to compare it to an average driver.
I mean that just for the sake of argument. In this specific case, NTSB or someone should ban AP. It is so obvious what’s about to happen there, an AP should do what a driver should do which is take the ramp whether it’s a wrong turn or not. So many a-holes try to squeeze in last second (oops) when they should just take the damn exit or miss the damn exit as the case may be. What AP did here is what a poor and panicked driver would do and that’s just not acceptable.
So, I thought it wasn't all that clever in the first place to try and marry the risks of getting electric cars into the market with the risks of telling the extremely wealthy that they didn't need to hold the damn steering wheel.
This statement is meaningless circular logic. You can handwave away any incident with a human driver by saying he wasn't "healthy".
A human pilot has intentionally driven an airplane full of passengers into the ground with full control because he wanted to kill himself. The airline believed he was "healthy".
Tesla's statement is not clear on this at all.
If that was the case, they would have said so. Their vagueness here is telling.
If one system is steering the vehicle into things that the other system would, in most conditions, reliably avoid, it bears some discussion.
The only reason a human drives into that barrier is if they swerve to avoid something or are otherwise not properly paying attention (texting, etc.).
There's no good reason for the autopilot to have hit that barrier.
The mental gymnastics going on in these comments attempting to absolve Tesla of any responsibility are truly next-level.
Again, not absolving Tesla, just being realistic about the capabilities of human drivers.
A) The autopilot could not see a concrete barrier in its path.
B) More likely, and as the story reports, it WAS aware of the danger but didn't do anything.
Either case is at least worth discussing, no?
Tesla demonstrates their usual abuse of statistics. 1.25 fatalities per 100 million miles is the average across all types of terrain, conditions, and cars.
The death rate in rural roads is much higher than in urban areas. The death rate on highways is much lower than average. The death rate with new cars is much lower than average.
The autopilot will not engage in the most dangerous conditions. This alone will improve the fatality rate, even if the driver does all the work.
Tesla cars are modern luxury cars. They appear to have done a great job building a well constructed, safe, car. This does not mean their autopilot is not dangerous.
What I do know is that Tesla have previously published a comparison of the safety of their cars before and after the autopilot feature was made available, and there is a statistically significant improvement of nearly 60%. This study should factor out most of those caveats, since it’s the same cars, same drivers and same roads before and after the feature release. http://bgr.com/2017/01/19/tesla-autopilot-crash-safety-stati...
It also doesn't mean that it is dangerous. That bag of doritos that you ate, it might also be dangerous. Let's use facts and not vague worrisome complaints. So if you want to talk about the higher danger of rural roads, please give a number, and then give a number for tesla, or estimate one. Don't just say "urgh".
Just as an example, click on California (it is mostly consistent all over) on the map. California has a lower average fatality rate of 1.01 per 100 million miles.
Rural roads have 2.62 fatalities per 100 million miles.
Urban roads have 0.70 fatalities per 100 million miles.
The fatality rate is much lower on highways: according to Wikipedia, freeways have 3.38 fatalities per billion km (0.54 per 100 million miles, if I managed the conversion).
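For anyone who wants to check that conversion, here is a minimal sketch (the 3.38 figure is the freeway rate quoted from Wikipedia above; 1.609344 km per mile is exact):

```python
# Convert a fatality rate per 10^9 km to a rate per 10^8 miles.
KM_PER_MILE = 1.609344

def per_billion_km_to_per_100m_miles(rate_per_billion_km):
    rate_per_km = rate_per_billion_km / 1e9   # fatalities per km
    rate_per_mile = rate_per_km * KM_PER_MILE # fatalities per mile
    return rate_per_mile * 1e8                # per 100 million miles

print(round(per_billion_km_to_per_100m_miles(3.38), 2))  # -> 0.54
```

So the 0.54 figure checks out, and it is indeed well under the 1.25 national average cited earlier.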
Remember, there are several very normal cars models that have had ZERO deaths: https://www.nbcnews.com/business/autos/record-9-models-have-...
These autopilot cars are death traps.
Fatalities are rare events. They chose cars with 100,000 registered vehicles or more. The categories these vehicles belong to experienced around 30 deaths per million cars, so cars that have 100,000 registrations would expect 3 deaths. You could easily get a few models with 0 deaths just as a matter of chance.
I don't think these results necessarily show these cars are inherently safer than other cars in the same category.
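The "matter of chance" point can be made concrete with a quick back-of-the-envelope Poisson sketch. The ~30 deaths per million cars and the 100,000-registration threshold are the figures assumed in the comment above:

```python
import math

# Assumed figures from the comment above: ~30 deaths per million
# registered cars in these categories, and a model with 100,000
# registrations, so the expected number of deaths is 3.
expected_deaths = 30 / 1_000_000 * 100_000  # = 3.0

# If deaths are roughly Poisson-distributed, the probability that a
# given model records zero deaths purely by luck is exp(-lambda).
p_zero = math.exp(-expected_deaths)
print(f"{p_zero:.1%}")  # -> 5.0%
```

With dozens of qualifying models on the market, a handful showing zero deaths is exactly what you would expect by chance, even if none of them is actually safer.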
I do take issue, however, with your claim that Teslas are “well constructed” - they are not. They are poorly designed, poorly constructed vehicles that feel and look cheap. Test one side by side with, say, a BMW 3-Series or a Volkswagen Golf and the difference in production quality will be palpably obvious.
(I haven’t seen scientific comparisons, though.)
Thanks, I never knew this term before. Seems like a dark humour gem - equally funny and terrifying.