The 94 million mile figure is an average over all types of driving, in all types of situations, by all types of people, in all types of cars.
First and foremost, highway driving is much, much safer than regular driving. There are no intersections (which cause 40% of all accidents), there is high visibility, turns are gradual, there are no pedestrians, and the traffic pattern is super simple. Autopilot is only doing the "easy" driving.
Second, that 94 million mile figure includes motorcycle deaths. Motorcycles have 34X more fatalities per mile driven. That juices the number of fatalities per mile.
Demographics matter too. Tesla owners are disproportionately older than average (not a lot of teens driving Teslas). They are more educated and richer. These people tend to be safer behind the wheel on average.
Teslas also have a lot of safety features that should reduce fatalities to way below the median anyway.
If you want to test if Autopilot is safer you have to test it against a control group that is the same but just didn't use Autopilot.
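Here's a rough back-of-envelope (every input is made up for illustration, not sourced) showing how far the 94 million mile baseline moves once you strip out motorcycle deaths and restrict to highway-style driving:

    # Back-of-envelope sketch; all inputs are illustrative assumptions.
    baseline_miles_per_fatality = 94e6      # the all-vehicle figure Tesla quotes
    motorcycle_share_of_fatalities = 0.14   # assumed share of deaths that are motorcyclists
    highway_safety_factor = 0.5             # assumed: highway miles are ~2x safer per mile

    # Drop motorcycle deaths: fewer fatalities over roughly the same car miles.
    car_only_rate = (1 / baseline_miles_per_fatality) * (1 - motorcycle_share_of_fatalities)

    # Restrict to highway-like driving, assumed ~2x safer per mile.
    highway_car_rate = car_only_rate * highway_safety_factor

    print(f"adjusted baseline: 1 fatality per {1 / highway_car_rate / 1e6:.0f}M miles")
    # With these made-up inputs, the fair bar is roughly 1 per ~219M miles, not 94M.

The exact numbers don't matter; the point is that every one of these corrections moves the bar up, never down.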
Completely agree with most of these points, but the demographics one is actually a mixed bag. Fatalities plotted against driver age follow a U-shaped curve [1], with younger and older drivers being the most likely to cause a fatality. The U is higher on the younger driver side of the curve, but this is likely due in part to the fact that teens have less money and therefore drive cheaper/less safe cars.
Because Tesla owners probably skew older, we might expect to see slightly more fatalities with Teslas than with other vehicles. The male/female fatality rates [2] are also relevant, and I would guess that Teslas are driven more by men than women.
(My estimation of Tesla demographics is based on my commute down Sand Hill road, so not scientific but probably the highest-traffic area for Teslas.)
Lastly, I haven't seen data that wealthier people are safer drivers than poorer people. Source? It's believable, but I have looked in the past for data on this and not seen this particular conclusion.
That's a superficial analysis. Tesla's MTBF has been discussed here before. For one thing, Tesla's auto-driving is only expected to work on freeway-type highways, and fatality rates for those are lower than for all driving.
Tesla's "autopilot" now has a second fatality, in China.[1] There's dash cam video. "When it was approaching the road sweeper, the car didn’t put on the brake or avoid it,” a police officer said in the CCTV report. “Instead, it crashed right into it." Take a look at the photo. It's possible that the big rotating brush behind the street sweeper didn't produce a radar return. Remember, Tesla's radar is only at bumper height. Teslas are radar-blind at windshield height. The rear profile of the street sweeper truck, which is high off the ground, may not have been recognized as a vehicle by MobileEye, especially since the street sweeper is throwing up dust. Video: [2]
So far, there seem to be at least two Tesla autopilot defects that can kill people:
#1 is a high obstacle. That was the trailer crash fatality. It's also the auto-parking failure where the vehicle didn't stop for a truck with overhanging ladders out the rear.[4] Teslas are radar-blind at windshield height, and if it doesn't look like a car or truck rear end, the Mobileye vision system won't see it.
#2 is a left side obstacle partially blocking a lane. There are two videos of crashes in this situation.[2][3] The "how big is the open space" vs the "how wide is the vehicle" analysis may be defective.
#1 is the result of an under-sensored system with a huge blind spot. #2 may be a software bug.
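For #2, here's a purely hypothetical sketch of the kind of "open space vs. vehicle width" check I mean. This is not Tesla's code, just an illustration of how under-estimating how far an obstacle intrudes into the lane would let the planner sail straight at it:

    # Hypothetical lane-clearance check; nothing here reflects Tesla's actual code.
    def gap_is_passable(lane_width_m, obstacle_intrusion_m, vehicle_width_m, margin_m=0.5):
        """True if the unobstructed part of the lane fits the car plus a safety margin."""
        open_space = lane_width_m - obstacle_intrusion_m
        return open_space >= vehicle_width_m + margin_m

    print(gap_is_passable(3.7, 1.2, 2.0))   # True  -> keep lane
    print(gap_is_passable(3.7, 1.9, 2.0))   # False -> should brake or steer away

    # If the intrusion is under-measured (say the stalled van's rear corner isn't
    # tracked), the planner sees the first case even when the second one is true.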
Looking forward to the NTSB report. Nothing new from them yet.
Tesla contests that the China fatality was due to autopilot (they say it may not have been active). But in actuality, it doesn't even matter that the sweeper was throwing dust - autopilot straight up can't detect and avoid stationary objects in some situations, regardless of ground clearance. Here's an example of it hitting a normal van stopped in the left lane:
Tesla didn't get telemetry data from the vehicle, so they have no idea what happened yet. (Does Tesla get telemetry data from cars in China at all? Does that get through the Great Firewall? Do Tesla cars in China report to a server in China or back to the US, or what?)
If you watch the video, the car is tracking the lane very accurately. Either the driver is working very hard to stay precisely in lane while not watching for obstructions, or the autopilot system is driving.
> #2 is a left side obstacle partially blocking a lane.
My theory is that since it only has a single camera, they use expansion from the center of the image as a way to determine the distance and speed of obstacles.
Obstacles that are off center don't sit at a fixed bearing, so without parallax the system can't tell what their position or speed is. It could be a car that's close and moving in the same direction, or a building that's far away and not moving. So it's just ignored.
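To make the idea concrete, here's a minimal sketch of the textbook time-to-contact trick a single camera can use (this is the general technique, not Mobileye's actual pipeline): an object's apparent size divided by how fast it's growing gives the time until you hit it, and an off-axis obstacle that barely expands looks harmless.

    # Monocular time-to-contact from image expansion; illustrative only.
    def time_to_contact(size_now, size_prev, dt):
        """tau ~= s / (ds/dt); little or no expansion => large (or infinite) tau."""
        growth_rate = (size_now - size_prev) / dt
        if growth_rate <= 0:
            return float("inf")   # not expanding: treated as no imminent collision
        return size_now / growth_rate

    print(time_to_contact(120.0, 100.0, dt=0.1))   # ~0.6 s: dead ahead, clearly a threat
    print(time_to_contact(101.0, 100.0, dt=0.1))   # ~10 s: barely expanding, looks far away

A stopped vehicle partially in the lane off to one side falls into the second case, which would explain why it gets ignored.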
You can see what Mobileye does. It's available as an aftermarket car accessory, and there are lots of videos.[1] There's a technical paper.[2] It's supposed to be able to detect pedestrians. It should be able to detect a stalled car.
But Mobileye doesn't know the steering plan. It warns of possible collisions, but can't warn of every parked car because of its genesis as a driver-warning system. So it may be set up to not warn of collisions which could be easily avoided by steering. So the steering control system isn't told of the avoidable obstacle.
Big arguments over this are now becoming public.[3] The Mobileye CEO says he personally warned Musk not to allow hands-off driving. A former Tesla engineer considered the system unsafe. CNN: "Musk’s philosophy seems to be, 'To make an omelet, you have to break a few eggs.' He is aware that there will be deaths and injuries while Autopilot is in use and is prepared to accept a few negative events in order to prevent many more."
Good background on statistics and sample size. Also relevant is the fact that Teslas are (1) heavier than the average car, (2) newer than the average car in America, and (3) more expensive than the average car. All of these factors are not only correlates of—but actually cause—better vehicle safety outcomes. For more detailed numbers, see http://www.thedailybeast.com/articles/2016/07/14/why-tesla-s... (disclaimer: I am one of the co-authors).
> One Tesla owner describes this Catch-22, after being told that a crash was her fault because she turned off Autopilot by hitting the brakes: “So if you don’t brake, it’s your fault because you weren’t paying attention,” she told The Wall Street Journal. “And if you do brake, it’s your fault because you were driving.”
If this is true, this is a fairly damning statement about the reliability of autopilot - it calls any statistical claims Tesla provides into question.
The "Catch-22" doesn't make a lot of sense, really.
In either case you are driving. Until we legally recognise autopilots as being fully in control of the car, the driver is always driving.
Her statement does make me wonder how relaxing it would really be to drive a car with autopilot, though. If I can't sit back and trust it 100%, I think I'd rather just be fully engaged in driving the car.
If you don't take control from autopilot when it does something dangerous, Tesla blames you for not paying attention.
When you do take control, Tesla blames you, because you disengaged autopilot. Either way, autopilot, by definition, didn't have an accident.
This lets them claim that it has driven XYZ miles without an accident, by ignoring the 150 feet that preceded one (as the human took control, even if not taking control would have resulted in the same accident).
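A toy example of how much that attribution rule can flatter the number (all figures invented):

    # Toy illustration of the attribution rule; every number is invented.
    autopilot_miles = 200e6
    crashes_with_autopilot_on_at_impact = 1
    crashes_where_driver_disengaged_seconds_before = 4

    claimed = autopilot_miles / crashes_with_autopilot_on_at_impact
    honest = autopilot_miles / (crashes_with_autopilot_on_at_impact
                                + crashes_where_driver_disengaged_seconds_before)

    print(f"claimed:  1 crash per {claimed/1e6:.0f}M miles")   # 1 per 200M
    print(f"adjusted: 1 crash per {honest/1e6:.0f}M miles")    # 1 per 40M

The miles Autopilot drove still count toward the denominator; only the crashes get reclassified.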
Thanks for the link, really interesting. It does seem that there's a huge amount to overcome as autopilot moves from driver aid to fully autonomous.
I'm starting to think the ideal compromise (for now) is a car where I'm in control, but with lots of systems that cut in when I make a mistake, rather than the other way around.
Read back what you just wrote, and then consider the impact that might have on any statistics which seek to describe the safety of the Autopilot. By definition you've just decided that it's always driver error, which would skew those statistics.
What I'm trying to say is that it's not appropriate to classify accidents as either caused by autopilot or human error.
The fatal accident was, tragically, a failure of both the autopilot and the driver. But the driver was relying on the autopilot in a way that has never been recommended.
It does seem as though Tesla has been overstating autopilot's safety, based on arguments made in the article and in the comments here. But it seems grossly unfair to judge the autopilot against a use case that it was never recommended for. It's a bit like braking at the last second on a wet road and then blaming your ABS when you crash.
I would love to see some more nuanced analysis of autopilot safety when compared to an appropriate control group.
Great article on the statistical nuance of their claims. But even before you get that far, you also need to address the fact that the national highway statistic they are comparing against is highly misleading. Like gnicholas mentioned, the age, price, geographical distribution, the weather conditions in which Autopilot runs, and the typical owner characteristics of a Tesla all differ sharply from those of the average car on the highway. And I'd suspect that if you zeroed in on proper comparables, all of these factors would push the average highway safety figure way up.
So not only is their claim as yet statistically unproven, they are also setting themselves an artificially low bar.
What I was trying to get at is that the benchmark Tesla is using as a comparison (1 fatality/94 million highway miles) takes into account all highway vehicles. Tesla is a high-end, expensive, luxury vehicle, and the autopilot also does not function in certain conditions (like heavy rain). It would seem to me that if you only took into account other luxury vehicles, and excluded really adverse driving conditions, the 94 million mile number would probably be higher, and I think that would be a fairer comparison for Tesla to use if they want to claim the car is safer than human driving.
Are you trying to say that you think taking these factors into account would actually make the comparables comparatively less safe?
Also keep in mind that the average doesn't just include low-end cars, but a non-negligible tail of twenty-year-old low-end cars that don't even have airbags and proper crumple zones. Imagine something like a 1994 Mazda Protege or Geo Metro.
I couldn't find a representative early 90s pre-airbag crash test vid, but here is a '96 Chevy Astro small overlap test:
From what I understand, if you limit the stat to a) cars made in the last 2 years, b) luxury cars, and c) the conditions where Autopilot mostly drives, the fatality rate per mile would be lower, i.e. 94 million is too low a number.
Right, that's what I was saying! I guess we just misread each other, but this is absolutely right - their comparison should use a higher denominator than 94 million!!!
Driving on a divided highway is safer than on an undivided one. Driving in conditions where you can use cruise control is safer than in ones where you can't (snow, ice, sleet, heavy rain, poor visibility).
I'm not sure where driving on a highway falls compared to city driving.
I think this point from the post deserves emphasis, for consideration in future discussions:
> If Tesla makes a "major" change to software, pretty much all bets are off as to whether the new software will be better or worse in practice unless a safety critical assurance methodology (e.g., ISO 26262) has been used. (In fact, one can argue that any change invalidates previous test data, but that is a fine point beyond what I want to cover here.) Tesla says they're making a dramatic switch to radar as a primary sensor with this version. That sounds like it could be a major change. It would be no surprise if this software version resets the clock to zero miles of experience in terms of software field reliability both for better and for worse.
I am admittedly a broken record on this point, but Tesla moves very fast and very nimbly on a system that is under design controls. It would be very educational to learn about their development process and how it maps to ISO 26262.
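One way to see why a major software change matters statistically: with zero fatalities observed since a release, the "rule of three" gives a rough 95% upper bound on the true fatality rate of about 3 divided by the miles driven on that release. A quick sketch (the mileage number is invented):

    # Rule-of-three upper bound on a rate with zero observed events; illustrative.
    def upper_bound_rate_per_mile(miles_on_this_release, fatalities=0):
        assert fatalities == 0, "rule of three only applies when zero events were observed"
        return 3.0 / miles_on_this_release

    miles = 50e6   # invented: fleet miles on the new radar-primary build
    ub = upper_bound_rate_per_mile(miles)
    print(f"~95% upper bound: 1 fatality per {1/ub/1e6:.1f}M miles")   # ~16.7M

So even a clean 50 million miles on a new build can't statistically demonstrate "better than 1 per 94 million"; the clock really does reset.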
They should have called it "Cruise Control Plus™" or something similar. Regardless of the usual arguments about how accurate the name is according to the avionics definition of the term, "autopilot" is misleading in the context of the consumer car market. Tesla should not have named it that.
I am surprised that Merriam-Webster's definition only mentions steering and not acceleration. Tesla's autopilot does have a steering component, so the name doesn't seem to be that much of a stretch.
I think the argument against the term 'autopilot' is less about the technical accuracy, and more about the implication and assumptions of the layman. To most people, 'autopilot' suggest that they can be totally hands-off, which isn't true yet.
Yeah, I feel like the people here/elsewhere who take great pains to point out the limitations of aircraft autopilot systems somewhat miss the point - the general perception of what is offered.
The average lay person doesn't care about the different axes of autopilot, Flight Management Systems (FMS), Inertial Navigation Systems (INS) versus GPS, autoland capabilities, or the categories of Instrument Landing Systems (ILS). (Yes, I grouped several different, complementary technologies together, whilst aware of their definitions, to further my point.)
The lay understanding of autopilot is 'tell it a destination, let it do the work'.
Am I missing something with this argument? It's consistently made on these anti-Tesla threads here and on Reddit, but from my understanding an aircraft's autopilot is actually dumber than Tesla's.
You set bearing, altitude, and speed and the plane will adjust to and keep that course—that's it. Some will land in optimal conditions, but they're unable to taxi. Also, a great many plane crashes are due to "handoff" issues.
So it seems to me that Tesla's auto-pilot is aptly named.
The problem with the naming has nothing to do with the technical capabilities, but with an understanding of how that name will impact human-machine interaction. You are absolutely right that aircraft autopilot is a lot "dumber" in a sense. But there is no ambiguity in the fact that when the pilot turns on the autopilot, he can -- and does -- expect the computer to take over all flight operations.
By naming the system autopilot, Tesla has also created that same expectation with its customers. Not only that, but it has at best turned a blind eye to reports and videos and stories of customers who willfully take their hands off the wheel/sit in the back seat/etc., and at worst encouraged an erroneous public perception of how the autopilot system should be used.
The handoff issues you cited with airplanes are far more complicated when it comes to cars, where decisions necessarily need to be made in a much shorter period of time. But all of that seems really downplayed by Tesla marketing, which instead chooses to aggressively market just how close the system is to full driving autonomy, rather than presenting it as a gentle aid to monotonous highway driving.
The disconnect between their official documentation and their marketing is immense. That is what is potentially dangerous.
To add to this, aircraft autopilots have one huge advantage over terrestrial based autopilots: the lack of need to manage collision avoidance. This is handled by ATC in most cases, with the pilot only offering minimal backup (in the case of flying through clouds, no backup).
Any vehicular autopilot will need to reliably and consistently take care of collision avoidance. Any autopilot which wants to deem itself better than humans needs to do so much better than the equivalent human (and in a multitude of conditions, such as rain, snow, gravel, idiot drivers...).
And this doesn't even consider handover times. An aircraft autopilot could fail completely, and the pilot would still have on the order of minutes to recover (assuming this occurs mid-flight, at cruising altitude). A car autopilot failure would give seconds, in the best case.
"But there is no ambiguity in the fact that when the pilot turns on the autopilot, he can -- and does expect -- the computer to take over all flight operations."
What do you mean by this? There are many autopilots which won't take over all flight operations. Some only handle a single control axis, or two. More sophisticated ones can indeed handle everything, but that's not the only kind out there.
If people think that's what all autopilots do, then the name is still a problem, but that's a bit different.
An autopilot in a newer light GA aircraft, like the Bendix/King KAP-140, is able to hold, climb, or descend to an altitude and maintain a heading based on GPS input, so it does handle the hands-on part of flying, leaving the pilot to maintain awareness and program the GPS.
> You set bearing, altitude, and speed and the plane will adjust to and keep that course—that's it.
And that's a very useful and safe way to get a plane from A to B, given the amount of route planning and air traffic control in modern aviation. An autopilot can generally provide ample warning of situations it can't handle, and a plane at cruise altitude has quite a bit of room for recoverable error.
Until we have traffic control on our highways that makes sure our cars know about obstacles early enough, the same basic features are not enough to provide the same experience for a car.
And yet, we have Elon Musk saying that half a million people would have been saved by autopilot, if it were rolled out universally. [1] This doesn't sound like he's pitching a 'maintain speed and heading' device.
So, can we rely on it to make the right decision in a safety-critical situation, or not? (The answer to this question seems to be 'yes, until there's an accident.')
Right, exactly. This is the same with their auto park / summon systems that can't tell whether they're plowing through a half-open garage door.
All shiny and glittery world: "Press a button, walk inside, while car is putting itself away for you!" "Press a button as you walk out the front door and the car will pull up in front of you!" "Half a million lives could be saved by using our ever-so-safe autopilot system!"
Cold reality, faced with liability and lawsuits: "Your fault. You didn't stand there and supervise the whole process!" "Your fault, it's not an autopilot that lets you not be in control of the vehicle, despite allowing you to go up to 15 minutes without touching the steering wheel" "Your fault, you hit the brakes with insufficient time to stop the vehicle before a collision. The vehicle wasn't under autopilot control at the time of the collision, so it doesn't count. The fact that you only hit the brakes because the vehicle was about to happily collide at speed is ... entirely ... separate, and unrelated!"
There are many degrees of autopilot. It can be something as simple as a connection between a gyrocompass and the rudder to hold heading, all the way up to a fully automatic takeoff, cruise, and landing system.
The issue supposedly is that people have no idea about this, though, and think (because of movies?) that "autopilot" means it does everything. I'm still not sure if I believe it, but if so it has nothing to do with real autopilots.
When I hear "autopilot" I think of movies where the pilot flicks a switch then gets up and has a coffee while chatting to an air hostess.
Tesla's marketing people know this is what people think of too, as the general population is never well versed in any area of speciality, by definition.
Calling it autopilot is misleading. Cars aren't aircraft and drivers aren't pilots.
The difference is that an aircraft autopilot system does not have to be that intelligent. My understanding is that if I engage an aircraft autopilot system I'm free to do other stuff for a minute or more, because it's enough to simply hold the heading and altitude. There are no sudden events.
In a car, however, with the current state of automotive assistance systems, the timeframe for "doing other stuff" is less than one second, because the environment varies quickly and includes other cars (or pedestrians, bicycles, ...) making sudden movements. Even many 5-second situations cannot be handled by today's assistance systems, like some surface conditions, traffic lights, or lane changes due to construction. Which in the end means you have to be constantly in control of the car, which is just not what "automatic" suggests.
On top of other comments, don't forget that pilots are highly trained professionals, with a good knowledge of their vehicle and its operation. Very few drivers fall into that category.
Naming a much-hyped feature of your multi-billion-dollar company's only product in a way that causes confusion about its capabilities is not unfortunate. It is deliberate.
Putting aside the semantics of the word "autopilot", haven't Tesla always been clear that the system was in beta and should only be used by a fully attentive driver, with hands held close to the steering wheel?
This understanding only comes from watching autopilot videos on YouTube. I've never seen the official instructions sent to owners, so I could be wrong.
If this is the case, it seems unfair to blame Tesla for incidents where people have used the autopilot in direct contravention of the instructions, just because we all slightly disagree on what the name means.
Can any Tesla owners comment on what information was sent to them when autopilot was rolled out?
Really? The autopilot on my boat will attempt (badly) to go in a straight line, care nothing about speed, and will happily run me into buoys, boats, beaches, bridges, and boulders. If you are foolish enough to attempt to provide a route with waypoints for turns it will throw the rudder over without warning so suddenly that you may be injured or thrown from the vessel.
It's a big world, I suppose there are people that think "autopilot" means "crawl into the back seat and take a nap", but there is certainly a spectrum, and if you try to name something so it can never be misunderstood you will probably end up calling it "NaviaMODE™®℠" and then no one knows anything.
I don't think the average consumer will think about boat 'autopilot' when he hears the term. For most people autopilots are associated with airplanes, because that's where there are actual pilots, not drivers or helmsmen.
In the English language, a "pilot" has meant "one who steers a ship" for almost 500 years. In 1848 it got used for balloonists, in 1907 it got used for airplane steering people.
What we learn is: Autopilot is a very clear name for the commenters in this thread. These commenters understand that an autopilot does not replace a driver. Apparently, it's not such a clear name for other commenters on the same post, and the company called Tesla, who argue that autopilot is safer than a driver.
I think maybe you don't understand what autopilot actually does on a plane...but regardless I agree it was a dumb name that made sense to engineers but should have been shot down immediately by marketing and legal.
This name has certainly been HIGHLY vetted by both marketing and legal. I would wager that the name "Autopilot" was created by marketing, not engineering.
The naming of Autopilot was likely originated by Tesla's marketing department.
Just like the blog post this article critiques was likely written by content marketers, or even more likely, by a collaboration between top-level marketing and engineering officers. The success of the Autopilot feature is a critical aspect of Tesla's business, and all communications about it certainly rise to the C-suite.
Edit: the success of self-driving cars in 2016 is more about reputational and liability risk than it is about engineering effort. A self-driving car rollout effort by any company is being driven (no pun intended) by legal and marketing, not engineering.
TLDR for every piece written about this: it's still safer than a human driving alone, period. When used properly, it's vastly safer, and when used improperly, it's probably still safer, but only just... the thing to consider is that the kind of driver who's selfish and dumb enough to misuse this kind of thing is a high-risk driver without it, and would probably still be worse.
On what basis? That computer plus human has to be safer than the sum of the parts, if both parts are working as intended?
There's no logical reason to believe that must be so. The sum of two things can be less safe than either one individually.
If Autopilot suddenly decides to veer into a railing, as it did in Montana, then a human operator may not even be capable of responding quickly enough even if they were totally paying attention.
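A toy way to see why "human plus computer" isn't automatically safer than an attentive human alone (every probability here is invented): the combination only wins if the human reliably catches the autopilot's mistakes in time, and a sudden veer gives almost no time.

    # Toy handoff model; all numbers are invented for illustration.
    attentive_human_crash_rate = 1e-6    # per mile
    autopilot_error_rate = 2e-6          # per mile, the veer-into-a-railing class of error
    p_human_fails_to_catch_it = 0.9      # complacent supervisor, too little time to react

    combined_rate = autopilot_error_rate * p_human_fails_to_catch_it

    print(f"attentive human alone: {attentive_human_crash_rate:.1e} per mile")
    print(f"human + autopilot:     {combined_rate:.1e} per mile")   # 1.8e-06, i.e. worse

Push the catch probability up (alert driver, slow failure) and the combination wins; push it down (sudden veer, lulled driver) and it loses.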
Not disagreeing, but is there any hard, independent third-party research to back up this claim? As an engineer, I tend to lean this way. As a father of two, I want to see some numbers to back up statements like "still safer than a human driving alone, period."
Hard to find current statistics. This [1] seems pretty believable. Not sure if it's "good" or not, but there you go. It's interesting that, just eyeballing it, mass seems to be the dominating factor. The Crown Vic is discontinued, but was fairly inexpensive; it looks like it was originally $25-30,000. Compare that to a C-Class, $30-35,000, which had more fatalities.
Of course, this is all 5 years old, so maybe there are better safety features available now.
There are a couple of things that may or may not be believable. Maybe it's possible for one or a few drivers to greatly improve the global traffic system [2]. I believe it, but I can't find a real study that proves it. I do think a handful of self-driving cars can smooth out those compression waves, just by driving sanely.
Finally, purely anecdotally, a friend of mine is a truck driver. We get to catch up over dinner every few months. He sees people doing crazy things multiple times a day. He sees the helicopter ambulances on a weekly basis, passing by wrecks in either direction. His assertion is that if people would just stay calm and consistent, a lot fewer people would die. It's either someone not paying attention, or someone being super aggressive.
I, also, would like to see a study. I would bet quite a bit of money that a highway with only autopilot drivers would have zero wrecks.
I guess the real question is, does sensor failure (device failure, snow, things like that) dominate wrecks? or is it interaction with unpredictable (aggressive, inattentive) drivers?
The evidence so far suggests it's more dangerous than a human driving alone. There may be one fatality per 94 million miles for normal driving overall, but the rate is no doubt much lower for mature drivers driving normally on an open freeway. Also, humans who are paying attention don't drive straight into trucks across the road, and similar failings.
I'm all for letting people use risky technology - I ride around asia on a clapped out motorcycle at times for example - but it's best to be realistic about it.
And that's based on what, things like the discounting of collisions caused by the driver having to brake hard because the autopilot was about to happily slam into another vehicle/object/person? Tesla's summation of those is that they were not a failing of Autopilot, because it was not active at the moment of impact. That's ... disingenuous ... at best.