This crash, as well as the one where a Tesla crashed into a concrete barrier, is evidence that their tech is not ready for release.
How much do you want to bet the Waymo self-driving technology would have avoided both of these? Seems like one of the simplest cases to handle: something up ahead is blocking the way!
Anyone who uses Tesla machine-assisted driving features today is putting themselves in grave danger (except for low-speed features like parallel parking).
Some people feel the latter should be enough to stop allowing drivers to turn on autopilot. I am betting Musk feels that the former serves as a moral obligation to get the safety features of autopilot into as many cars as possible in order to save as many lives as possible. He is throwing the switch on the trolley tracks.
 - https://en.wikipedia.org/wiki/Trolley_problem
Obviously you see how this deduction is subject to dozens of biases stemming from the fact that people turn on autopilot when the road conditions are safe to begin with. One would have to really carefully filter this data to avoid these biases, otherwise such conclusions are absolutely bogus.
"It’s essentially as good as result as Tesla can have hoped for from the U.S. traffic safety agency’s investigation, which took place over the last six months. Reuters reported earlier on Thursday that the investigation would not result in a recall of Tesla vehicles, but the full findings show that in fact, the federal regulatory body found plenty to praise while conducting its inquiry.
The investigation does conclude with a minor admonition that Tesla could perhaps be more specific about its system limitations in its driver-assist features, but acknowledges that all required information is present and available to drivers and vehicle owners."
But that does not indicate anything about the conditions where autopilot should not be used, in which it is gravely dangerous.
It is all about the tail of the distribution, which is why it is not so obvious.
This has a long way to go for true "autonomy", and until then such marketing should be avoided, and examples of misuse (which YouTube is full of) should be explicitly discouraged and, if possible, penalized. Tesla should be more explicit and proactive about it. This is my only point really.
Yes it does include those conditions because this specific accident occurred in conditions in which autopilot should not be used. That single fatality is the only one that autopilot has seen in close to one billion miles. That is an order of magnitude better than traditional fatality rates.
That leaves two possible scenarios. Either people are likely to use this system incorrectly, in which case many of those billion miles were driven in poor conditions for the autopilot. Or few people are likely to use the system incorrectly, in which case there is little to worry about, since the autopilot system has never been involved in a reported fatality when used properly.
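To spell out the arithmetic behind that "order of magnitude" claim, here is a rough sketch using the ~1 fatality per ~1 billion Autopilot miles figure above and the oft-cited US average of ~1.25 deaths per 100 million vehicle miles mentioned elsewhere in this thread (both are approximations, not official data):

```python
# Back-of-envelope check of the "order of magnitude" claim.
# Both inputs are approximate figures from this thread, not official data.
autopilot_fatalities = 1
autopilot_miles = 1_000_000_000          # "close to one billion miles"

per_100m = 100_000_000
autopilot_rate = autopilot_fatalities / autopilot_miles * per_100m  # deaths per 100M miles
us_average_rate = 1.25                                              # deaths per 100M miles (US average)

print(f"Autopilot:  {autopilot_rate:.2f} deaths per 100M miles")    # 0.10
print(f"US average: {us_average_rate:.2f} deaths per 100M miles")   # 1.25
print(f"Ratio:      ~{us_average_rate / autopilot_rate:.0f}x")      # ~12x, roughly an order of magnitude
# Caveat: this ignores the selection effects discussed elsewhere in the thread
# (Autopilot miles skew toward easy, divided-highway driving).
```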
By the way, can you point me to statistics about “traditional fatality rates” on controlled-access divided highways in conditions where the line markings are clearly visible? It's dishonest to compare apples to oranges, especially given that a Tesla can't or won't recognize one from the other.
Edit: nope, I was wrong. 1.3 to 0.8 airbag deployment crashes per million miles for Tesla cars before and after autopilot release.
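For what it's worth, those two numbers are where the widely quoted ~40% figure comes from:

```python
# Relative reduction implied by the airbag-deployment figures quoted above.
before_autopilot = 1.3   # crashes with airbag deployment per million miles
after_autopilot = 0.8    # crashes with airbag deployment per million miles

reduction = (before_autopilot - after_autopilot) / before_autopilot
print(f"Relative reduction: {reduction:.0%}")  # ~38%, commonly rounded to the ~40% NHTSA cited
```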
The fact that something can be used in a dumb manner is good to know so you can avoid doing so, but that doesn't negate the fact that cars are safer overall with it than without it.
Seeing people abandoning the precautionary principle so they can latch onto an over-marketed, over-promised Muskian view of the future is sad. And in five or ten years we're all going to be looking back, wondering what made us dumb enough to think that computers could actually drive cars.
Though by the time you get to the 8 billionth mile the technology will have evolved, rendering the earlier data moot. So it will be pretty much impossible to do any sort of authoritative study on how safe this technology is until it is widespread and mature.
You could have a car that never gets into the little collisions that often happen to human drivers because of carelessness but are rarely deadly, yet the same AI might have faulty reasoning every once in a while, leading it to accelerate to 200 km/h before running straight into a wall. This hypothetical AI would have very few collisions, but might still be much more deadly than a human.
Humans are the only data we currently have significant numbers of miles logged for. Any extrapolation to AI is meaningless.
However, the inference that Autopilot reduces incidents per mile seems very likely to be true.
And the NHTSA doesn't count as a third party?
> I'm speculating that the reason for that 40% safety improvement is more a result of the difference between where autopilot can and cannot be used.
To say that Tesla owners without autopilot tend to drive in situations with 40% more likelihood of accidents than those with autopilot sounds like a massive stretch without data of your own.
Statistics is really tricky, even with very large samples, as the recent presidential election showed beyond any doubt.
The design can help prevent people from using it when it's not appropriate, or have mechanisms to keep them alert and focused.
That is all the NTSB seems to be saying here... they aren't calling into question the idea of highway driving assist systems, which are (already) showing potential to make highway driving significantly safer - the time when people are driving at the highest speeds.
No system will be perfect from preemptive testing either; it really does take real-world mass adoption to detect some uncommon flaws, unfortunately... Tesla is going to bear the brunt of this fact. You simply can't simulate every human (irrational) behaviour, nor can you simulate every environment it will be used in.
This seems like very clean data, as it doesn't actually consider how often autopilot is turned on, and it's over a fairly large sample of drivers, locales and miles driven.
This is just correlation, not causality, but the effect is too big to be by chance.
If true, this would be an unfortunate example of selection bias.
To boot, these are, for the most part, the same cars before and after, since Tesla shipped cars for almost a year with the Autopilot hardware built in but not activated, then activated it for all cars at virtually the same time via an OTA. Before that date, any miles clocked show up in the "no autopilot" column, the next day they show up in the "with Autopilot" column.
To further flog an expired equine, the study showed a 40% reduction in airbag actuations per million miles driven across all miles driven in cars with vs. without Autopilot function, not comparing only miles where Autopilot was turned on.
There are certainly possibilities for this data to be skewed or biased, but it's definitely not a comparison of safe Autopilot miles vs. unsafe manual miles.
This is what is misleading in my opinion. The study says that Tesla's assistive technology works, that is, it assists humans to make driving safer. All driving assistance technologies satisfy this.
In other words
Autopilot + human is better than a human.
Then why is the furore about Tesla? Because they named it autopilot and call it safer than a human.
To say autopilot is safer than a human seems to imply that autopilot alone would be better than a human driver.
Maybe you're smart enough to see this, but as is evident, many believe that autopilot alone is better than a human driver.
It would be interesting to see how often autobraking is key and how often the rest of the autopilot improves safety.
His moral obligation is to admit its limitations instead of grandstanding on what it might do one day and dancing around with selected statistics.
The simple fact is, it failed and will continue to do so. They have made some technology choices they may not be able to surmount, no matter how much noise they or their sycophants generate.
The promise of level 4 and 5 autonomous driving is so great that we cannot afford to have it damaged in the public eye.
Yes. There are four videos of Tesla cars crashing into objects which partly protruded into a lane. Three vehicles, one construction barrier, all on the left side. One fatal. Relying on the Mobileye vision system, which is "recognize car/pedestrian, compute range to target", for anti-collision was grossly negligent of Tesla. Tesla has a radar, but it can't be very good if this is happening. Even the old Eaton VORAD radar, which we used on our 2005 DARPA Grand Challenge vehicle, would have avoided that.
This is the simplest and most important part of automatic driving, and Tesla botched it.
If only there was something to put in the middle of this reasoning!
Elon Musk fucked up as soon as he named it "autopilot". He set expectations at autonomous, and released it as a "beta" because that's what you do with some web app, and then when people started using it as an autonomous driving product (because that's what "autopilot" means), they unsurprisingly got killed. But shruggie! Move fast and break things! It's beta! What ya gonna do?
On the other hand, if it was called "driver assist", or "super cruise", or some other thing, then drivers' expectations wouldn't be set at "autonomous driving", and then they'd be paying more attention, and probably wouldn't end up dead.
I'm sorry, but this is all about setting expectations, and Elon fucked this up.
I couldn't narrow down exactly where in life I learned about autopilot systems in aircraft, but I always considered it to be common sense that autopilot doesn't mean "can completely fly itself"
Also Tesla says so themselves:
>All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than...
A car is not a web app. It just isn't. This behavior literally wouldn't fly at, say, Boeing, or anywhere else that makes machines people trust their lives to.
I think more people are aware of the second point than the first, and that's why the common perception of "autopilot" is more about level-5 driving than cruise control
I'd bet those 3 combined are much safer than Autopilot, which complicates their implementation to offer the marginal convenience of not having to steer (but constantly having to nanny it lest it put you into a ditch).
I think if used properly Autopilot can be safer than most drivers without it, but the crux of the issue is that many people will trust the system more than they should, which could make them more dangerous. We're in a dangerous in-between state with autonomous driving, where it's just good enough to fool some people into thinking it's foolproof when it's not.
2D scan radars exist. Tesla didn't use them.
That's my main issue with the parent posts and the self-driving sentiments they're expressing - the person using it was grossly negligent and ignored all the warnings presented. You acknowledge that the technology is in beta and that you accept the risks of that. The person ignoring those warnings is, more than likely, much less safe a driver than Autopilot could ever be.
I just really don't see how this could be pinned as Tesla's fault. We're treading very closely to territory where we absolve people of personal responsibility simply because we have technology that we can blame, even if it's irrational or misdirected blame.
For me, I prefer life over a false sense of control.
If the technology were better than humans I'd agree.
I have seen a drunk driver do exactly that.
Someone who falls asleep might also do that.
Come to think of it, I almost caused an accident a decade ago. I was momentarily distracted as I turned to talk to my passenger. A car in front of me had suddenly stopped, and I was able to brake in time. Nevertheless, it was only a matter of maybe one second to avoid running into the other car. Me-a-decade-ago might have been in a similar accident.
Human car accidents are not news because roughly 100 Americans die in car accidents every day, out of ~15300 car accidents per day.
I can also avoid being in either of those situations.
If autopilot is comparable to an exhausted or drunk driver, why do I want to use it?
But separate cases are not really interesting when comparing accident rates. Think of a more extreme made up scenario:
You can either drive yourself and be part of the "1.25 deaths per 100 million vehicle miles" statistic (US), or use an automated car which causes only 0.25 deaths per 100 million miles - but in those cases it spontaneously explodes. No other car would just explode randomly while you're driving it. (It's clearly worse in that way.)
Do you want to drive yourself, or take the automated one?
That is, if the risks of one edge case goes up while the car performs far better in the vast majority of situations, is the car worse than a human?
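A quick expected-value sketch of that hypothetical (the per-mile rates come from the made-up scenario above; the 13,000 miles per year is just an assumed average annual mileage for illustration):

```python
# Expected annual fatality risk under the made-up scenario above.
human_rate = 1.25        # deaths per 100 million vehicle miles (US average)
automated_rate = 0.25    # deaths per 100 million miles (hypothetical car that "explodes")

miles_per_year = 13_000  # assumed average annual mileage
per_100m = 100_000_000

human_risk = human_rate * miles_per_year / per_100m          # ~1.6e-4 expected deaths/year
automated_risk = automated_rate * miles_per_year / per_100m  # ~3.3e-5 expected deaths/year

print(f"Human driver:  {human_risk:.2e} expected deaths per year")
print(f"Automated car: {automated_risk:.2e} expected deaths per year")
# The automated car is ~5x safer in expectation, even though its single failure
# mode (the random explosion) is novel and feels scarier.
```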
It happens all the time, dude. Most famously with Jayne Mansfield, but commonly when people are paying attention to their damned phones, or trying to sort out a fight among the kids, or just distracted by their marriage, or work problems, or...
I used to know a woman who drove right into a city bus that was stopped at a bus stop, lights flashing and everything. Fortunately no one was injured in that accident.
Is it realistic to worry about a car that can't get that right, but is simultaneously safer than a human driver overall?
Tesla seems to be allowing only two options - one with no driving assistance, and one where they can call it autonomous.
That said, I'm casually in the market for a Tesla (ideally pre-owned) and one of the features I'd require is the latest auto-pilot hardware. I'd definitely make use of it right away too -- but I'd do so with the understanding that it is far from infallible, and that I still need to keep substantially all of my attention trained to the road just as I would in any other non-autopilot vehicle.
That's their right to decide or not. What bugs me is that they also put me (and everybody else on the road) in danger.
BTW same with the barrier incident. That was a driver failing to be vigilant. It was his fault for leaving autopilot on in a non-standard environment.
The 2 (in fact there were more) crashes are important because they show the state of the technology. Also keep in mind that 99.9% of drivers don't have an autopilot, and 0.1% do. 2 crashes among the 0.1% are just as important as 32k crashes among the 99.9% of the rest.
Of course it should never have happened. Still, I'm impressed by how gracefully it handled the accident: emergency lights, keeping the lane, and slowly braking the vehicle to a halt (again, such a frontal collision was still unacceptable).
But then you're forced to compare the autopilot accident rates against the regular old human accident rates. When you do that, the autopilot wins every time.
Put another way, I'd be comfortable getting into one of these today. I trust the computer's bugs a lot more than I trust my own. I also trust its reaction time above my own, which is more likely to keep me safe if someone else does something stupid.
An autopilot doesn't have to be 100% safe, it just needs to be significantly safer than a human pilot and in the long run it will save lives.
I think the fallacy is thinking that it's okay for humans to be flawed but machines should be infallible.
Wouldn't you compare autopilot vs. autopilot-assisted human? That is, where the autopilot is a passive system that only comes into play to prevent accidents.
Can you take the number of hours driven with the autopilot, compare it to the statistics of human drivers, and determine which was safer?
This was a fatal crash, but faulty airbags caused the death of 10 people.
What self-driving cars need, rather than more time to bake, is a PR or lobby group that can help spin (put in perspective) these types of events.
Autopilot requires manual consent from the user to even enable it and has so many overrides for the user that I feel like you're really stretching it by calling it a "fatal flaw". All it would have taken to prevent this accident was the person driving to hit the brakes in time. Even if they were just using cruise control in their vehicle, they would have died. This was completely about inattention and irresponsibility on the part of the driver. I can't see how anyone is making the leap to this being a "fatal flaw" in the system.
Have you examined the history of air travel? Entire airliners have crashed and killed everyone on board and people still continued to take flights. Today airplanes are safer than even trains. The reason is simple: the industry is very heavily regulated and everyone has to adhere to strict processes with tight controls.
The reason is even simpler: private companies using private money to improve flight technology
Governments after all are run by people, just like companies, and people are prone to greed, corruption, and abuse of power, regardless of their position.
I was unaware that Waymo was planning on shipping a Level 2 system.
Am I putting myself in danger?
Sounds like a typical software development effort. We always used to say that if we built bridges the way we write software, there wouldn't be many bridges. Well, now we build cars the way we build software, and guess what is happening...
I firmly believe that this is all safety-critical code, and we should look to NASA to ask how it's done:
But hey, all that boring "safety" stuff is expensive, and takes too much time to complete....
EDIT: As has now become usual for HN, I am being downvoted instead of debated. This used to be a site for good debate. Too bad that's no longer the case.
Downvotes should be about bad form, not about "I don't agree with you". HN used to be about the former, but now is much more the latter, and it is a poorer community because of it. HN isn't the first online tech-news community, and every single one of these communities that uses an up/downvote mechanism to self-regulate comment quality has gone the same way: as an increasing number of users gain voting privileges, voting replaces debate. When that happens, the community becomes a simple echo-chamber for the "established" truth of that community. This increasingly keeps away quality commenters, strengthening the effect.
When voting, especially downvoting, is a frictionless, costless action, the quality of varied discourse will sooner or later die.
I also think it's strange that they're allowed to sell cars equipped with Autopilot at the moment, there's too little known about its capabilities and weaknesses.
Negligence is already something the courts are well suited to handle. Tesla already has a big incentive to do this well. I don't think they are necessarily negligent here for having marketing material which doesn't 100% cover real world usage. If that was ALL the information customers received I would...but I highly doubt that's the case.
I also take issue with the autopilot name as well, but I do think it's something that can be effectively communicated elsewhere with a product like this - where it really isn't that big of a deal. It is not a large barrier to communicate the immature state of the software and the limited current use-cases to each buyer beforehand regardless of branding.
If Tesla is over-promising and under-delivering their relative advances in autonomous driving as a sales tactic, then there is little defense of Tesla if drivers don't read enough fine print to understand the limitations of the autonomous driving systems.
In my assessment, Tesla is not being honest about capabilities. If they were, they'd have used the naming strategy that comes up with "adaptive cruise control", not "autopilot". (Sidebar: I don't care what "autopilot" actually means; the general public thinks it means computer magic that flies the plane all by itself.)
Having misleading branding and marketing materials that show the CEO operating the product in an unsafe manner conveys a message to the viewer about how seriously to take the warning. They're giving the impression that the disclaimer is like the "under-cooked meats can carry food-borne illness" notice on restaurant menus: the "don't try this at home" cover-your-ass warning, delivered with a wink to try it at home.
If the answer is no, then Tesla's Autopilot shouldn't be allowed as it is.
If the answer is yes, then it means that other manufacturers are too cautious.
(have we forgotten Feynman and the Challenger investigation? Doesn't have to be for-profit for people to mislead about risk)
One very simple example from a complicated issue:
If Autopilot saves lives when used correctly, but Tesla's marketing material implicitly encourages people to use it incorrectly (e.g. video of Musk with no hands on the wheel), then the marketing is dangerous.
If you invent a cure for cancer, but 1/10th of your pills contain rat poison that could easily have been filtered out, it doesn't matter that you saved lives overall. You still killed people through negligence.
Every single person that uses Autopilot acknowledges the risk multiple times before they can use the system.
Debates about autopilot in cars have many more dimensions than simply number of deaths.
There are many effects of this technology way beyond the number of deaths.
I'm all for the government keeping its nose out of places it doesn't belong, but as we've learned over the past century, companies don't give 2 shits about whether or not the products they release harm consumers; it's entirely up to government to set the boundaries. You won't find me in any car with any kind of self-driving "autopilot" feature for many, many years.
Don't they, though?
From what I understand, Tesla's "Autopilot" doesn't do much more than lanekeeping, adaptive cruise control, and emergency braking, which you can buy on a wide range of vehicles today. For example, I drive a Honda CRV that does all of this - on the highway it really does drive itself. You don't even have to buy the top trim levels, just one up from the bottom.
The marketing worked. What's so weird about that?
The most effective tool of marketing is that people think humanity is so smart that marketing isn't effective.
To manufacturers of vehicles equipped with Level 2 vehicle automation systems (Audi of America, BMW of North America, Infiniti USA, Mercedes-Benz USA, Tesla Inc., and VolvoCar USA):
5. Incorporate system safeguards that limit the use of automated vehicle control systems to those conditions for which they were designed.
6. Develop applications to more effectively sense the driver’s level of engagement and alert the driver when engagement is lacking while automated vehicle control systems are in use.
The NTSB points out that the recorded data from automatic driving systems needs to be recoverable after a crash without manufacturer involvement. For plane crashes, the NTSB gets the "black boxes" (which are orange) and reads them out at an NTSB facility. They want that capability for self-driving cars.
The NTSB's ability to remain apolitical and engineering-focused is really remarkable. The way they were portrayed in "Sully" was shameful and uncalled for.
Their involvement can only be a positive for the self-driving car industry.
Not something I knew about Tesla's autopilot before reading this.
You should probably have a stronger sense of safety making a long trip with a 16-year-old who just got their license driving. Even then, I wouldn't feel too comfortable taking my attention from the road for too long.
I cycle through the streets of London every day, BTW, so I'm not suggesting that its lack of recognition ability is a good thing.
Autopilot 2 keeps you in lane, does traffic-aware cruise control, and can do lane changes. That's it. No more, no less.
It sometimes gets spooked by shadows. I've had it brake sharply when going under bridges on the motorway, because of the shadow on the road. I've had it mis-read lane markers (in understandable circumstances; the old lines had been removed but the marks they left still contrasted with the tarmac).
If anything, I find Autopilot 2 to require too much hand-holding - literally! I rest my right hand gently on the bottom-right of the wheel when Autopilot is engaged, resting my wrist on my knee, which is also how I drive regular cars on the motorway. This isn't enough for Autopilot to think I've got my hands on the wheel, so it's constantly bugging me, and I have to wiggle the wheel to tell it I'm there.
Apparently some Model 3s have been seen with driver-facing cameras. Eye-tracking seems like a much better solution to me.
I love Autopilot. I recently had to drive ~500 miles on motorways in a thing that explodes ancient liquidised organic matter, and it was an absolute chore. Autopilot is ace when used as intended, and great in traffic jams.
IC cars have been shipping with all of the features currently available in Teslas for the better part of a decade. 1 horsepower = 745 watts, so we're talking about somewhere in the range of 20 HP to generate as much power as the average Tesla consumes, including powering the drivetrain.
I'd be surprised if all the electric features in a Tesla consume more than 5HP equivalent.
IC cars even have the bonus that heating the cabin is effectively free, no extra battery drain from it.
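For reference, here is the rough conversion behind those horsepower numbers (the 250 Wh/mile consumption and 60 mph cruising speed are assumptions for illustration, not figures from the comment above):

```python
# Rough unit conversion: average Tesla power draw expressed in horsepower.
# Consumption and speed are assumed round numbers, not measured data.
WATTS_PER_HP = 745.7

consumption_wh_per_mile = 250   # assumed average consumption
speed_mph = 60                  # assumed highway cruising speed

power_watts = consumption_wh_per_mile * speed_mph   # 15,000 W of average draw
power_hp = power_watts / WATTS_PER_HP               # ~20 HP, matching the estimate above

accessories_watts = 5 * WATTS_PER_HP                # 5 HP of "electric features" ~= 3.7 kW

print(f"Drivetrain + everything at {speed_mph} mph: ~{power_hp:.0f} HP ({power_watts / 1000:.0f} kW)")
print(f"5 HP of accessories: ~{accessories_watts / 1000:.1f} kW")
```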
My wife has to do regular weekend interstate travel. She also has serious RSI issues with her hands and arms. The ability to keep focus, but completely disengage from the steering wheel for a while and accelerator for the length of the trip is incredibly valuable to her. It's why we purchased the car.
Tesla is clear on the limitations. The feature is turned off by default. Turning it on presents the limitations to the user.
>The machine learning algorithms that underpin AEB systems have only been trained to recognize the rear of other vehicles, not profiles or other aspects
Does Tesla's software make the assumption that the lane is safely navigable unless it detects otherwise?
IMO it's pushing the envelope of recklessness to have a software default of "good to go" for a non-critical system (whereas something like a shipboard fire suppression system should default to "good to go" in order to prevent corner cases from impacting functionality).
While I understand that you don't want the car braking aggressively when it encounters a bag blowing in the wind or poor lane markings mid-corner, not driving into something is a behavior that would not be unexpected from an excessively conservative human driver, and it's preferable to running into the back of a van that's parked on the side of the road.
Edit: If you're gonna down-vote, you might wanna say why.
As for whether having shared autonomy at highway speeds is a good idea, that's a really good question that I hope Tesla is asking itself right now. But if you are trying to do that, cameras pretty much have to be your primary source of truth.
>As NHTSA found, the Automatic Emergency Braking feature on the Tesla—in common with just about every other AEB fitted to other makes of cars—was not designed in such a way that it could have saved Brown's life. The machine learning algorithms that underpin AEB systems have only been trained to recognize the rear of other vehicles, not profiles or other aspects
Unfortunately, a poorly designed user experience lulled the driver into behaving foolishly. That's why the headline mentions UX.
Did it? Or did the driver know what they were doing and not care? I don't think it's responsible to make such assumptions about the driver's reasoning.
Had it been the same for all systems, then the argument could be made that the current concept in general is flawed in this iteration. But because the others demand a lot more driver involvement, the problem pivots to being a Tesla problem, not a semi-autonomous driving problem.
Holy crap. Who thought that was a good idea?
If you want to have your hands off the wheel and your nose in your phone, then you are driving recklessly.
The answer is tough shit, you don't get to not pay attention while driving. Get an uber then.
Having adaptive cruise control that manages stop and go will absolutely change the number of people on their phones.
My average driving speed on the car's display is something like 25 mph, thanks to constant traffic. The car is rated for a 160 mph top speed. How useful is that?
The trade-off for autopilot is that you don't have to move your foot 2-3 inches every 20 seconds. That's not worth the increased risk.
As in, are you not watching the traffic outside because of it, and concentrating on other things? If that is the case, are you not worried about something unexpected happening?
Yeah it doesn't change that fact, it verifies it.
I'm a big fan of Tesla and Musk, and you can see that from one of my recent comments on HN, too, but I've pretty much always criticized the Autopilot for exactly this reason.
Humans can't handle Autopilot in its current form, no matter how many asterisks Tesla puts in its Terms of Service and how many prompts and alerts it adds to warn the drivers. Either it's completely worry-free "let me take a nap until destination" self-driving - or it's not, and the driver should be expected to do manual driving at all times.
I wonder, though, if manufacturers are going to greatly accelerate their progress to level 4 or above from the data they get having level 3 in production vehicles. They may get much more quickly to a higher quality level 4 by having what amounts to a marketing novelty feature in practice in 1000s of vehicles in real-world conditions.
I took a look through the level descriptions for a level where the human is driving but is being monitored by the system, but they all seem to be versions of having the system drive while being monitored by a human. Maybe we can do more in the way of collision avoidance as the technology progresses towards true autonomy.
This is only true, of course, if autopilot is actually safer than a human. From what I have read, there is little data to support either conclusion. Lacking proof that such a system is actually safer, Tesla should take the conservative engineering decision and reduce or eliminate drivers' access to such a system until enough data is available to make a conclusive statement on safety.
"An autopilot - is a system used to control the trajectory of a vehicle without constant 'hands-on' control by a human operator being required."
What Tesla has is not an autopilot!
Rip Riley: Uncuff me, you idiot! Holy God, if we overshot our chance to refuel...
Sterling Archer: I thought you put it on autopilot!
Rip Riley: It just maintains course and altitude! It doesn't know how to find THE ONLY AIRSTRIP WITHIN A THOUSAND MILES SO IT CAN LAND ITSELF WHEN IT NEEDS GAS!
Sterling Archer: Then I, uh... misunderstood the concept.