I think this is a foolish statement: fatal accidents in the US are 1.3 per 100M miles driven, and Tesla reports Autopilot has driven 100M miles. That's not enough data to draw this kind of conclusion.
There's a parallel here with software testing. I've seen bugs that happen 25% of the time, for example, and take 10 minutes to run a test. Our test guys have great intentions, but if they test 5 times and see no bug, they think the bug is gone. There is no instinctive understanding that they have insufficient data to draw a conclusion.
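A quick back-of-the-envelope illustration of that (my numbers, not the testers'): a bug that reproduces 25% of the time still has roughly a 1-in-4 chance of hiding through five clean runs.

    # Rough illustration, assuming a bug that reproduces 25% of the time:
    # how likely is it to hide through n consecutive clean test runs?
    p_repro = 0.25
    for n in (5, 10, 17):
        p_hidden = (1 - p_repro) ** n
        print(f"{n} clean runs -> {p_hidden:.1%} chance the bug is still there")
    # 5 runs -> ~23.7%; 10 runs -> ~5.6%; 17 runs -> ~0.8% (roughly 99% confidence)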
Also, not all miles are equal. I'd hope that people are enabling Autopilot primarily while driving on straightish roads with good driving conditions -- in other words, at times when the rate of fatalities with a human driver is far less than 1.3 per 100M miles.
The only measure that makes sense in this context is time; bringing distance into it is statistical trickery.
If you expose yourself on a live fire range for 1 second while travelling at 100 m/s, it's far less dangerous than exposing yourself on a live fire range for 100 seconds travelling at 1 m/s.
It's the same thing with cars: the longer you are physically on the road, the longer you are exposed to the chance of an accident, regardless of distance travelled.
Not sure why you think it's "absolute", you haven't listed a single reason or argument, just a tautology.
Conversely, when you hit something on the highway, it's likely to be moving much more slowly than you (e.g. the chicken crossing the road), or at a speed similar to yours (the drunk guy driving on the left side). So your speed matters - you're more likely to reach that obstacle in any given period of time if you can cover more space in that time.
If my job is 2 hours away because there is no motorway, I will get a different job. If there is a motorway, making the commute 40mins, I will take that job.
People care how long the journey is, not how far the journey is.
People live by commute time, not commute distance. Same for service areas, if the roads all have low speed limits you would need more drivers, they wouldn't just drive for longer.
Accidents per hour of driving would be more useful for public policymakers – given a fixed average commute of 1 hour per day, what are the tradeoffs of building highways (worker mobility, commerce, pollution, landscape, accidents)?
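To make that concrete (the average speeds below are assumptions I picked, not measured data): the same per-mile fatality rate translates into quite different per-hour rates depending on how fast the miles are driven.

    # Illustrative only -- the average speeds here are assumptions, not data.
    # Converting a per-mile fatality rate into a per-hour rate depends heavily
    # on how fast the miles are driven.
    rate_per_mile = 1.3 / 100e6            # US average: 1.3 fatalities per 100M miles
    for road, mph in [("motorway", 60), ("city street", 25)]:
        per_hour = rate_per_mile * mph     # fatalities per hour of driving
        print(f"{road}: {per_hour * 1e6:.2f} fatalities per million hours driven")
    # At the same per-mile rate, faster roads look worse per hour -- which is
    # why the per-mile and per-hour pictures can disagree.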
And most of the highways that don't have a speed limit are quite narrow, so while it is possible, no sane driver would do so.
Take highway A5 as an example: it has 8 lanes, which makes driving fast quite dull since you cannot really feel the speed relative to the environment.
Outside of the EU, the only other European location that would qualify would be the Isle of Man (as referenced in that article).
For those familiar with the UK road network see: http://roadsafetyfoundation.org/media/32639/rrm_britain_2015...
I know when I tried my friend's auto-pilot that I was concentrating 10x more on the road than I would be normally. Obviously if I got more comfortable with the tech then I would be concentrating less but I can't imagine that I would ever stop paying attention unless I knew the system was fool-proof.
This must be harder than concentrating when you're actively engaged?
"Given that current traffic fatalities and injuries are rare events compared with vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their safety in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate performance prior to releasing them for consumer use. Our findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability."
For near-misses, disconnects, poor decisions, and accidents, they replay the event in the simulator using the live data, and work on the software until that won't happen again. Analyzing problems which didn't rise to the level of an accident provides far more situations to analyze. They're not limited to just accidents.
See Chris Urmson's talk at SXSW, which has lots of playbacks of situations Google cars have encountered.
Note that the number of crashes per 100 million miles is a lot bigger than the number of injuries. One would hope a statement about safety would look at all of the data.
PS: Besides, should one more fatality occur soon, then by their own standard the Model S would become statistically more deadly. Too much risk of backfire; this PR is madness...
Before the accident occurred, these numbers were 1/100B miles and 5.3/100M miles.
If a second fatality occurred, those numbers would increase to 3.4/1B miles and 9.3/100M miles.
However, I'm pretty sure that their claims of safety are based on more than just this calculation: "...a wealth of internal data demonstrating safer, more predictable vehicle control performance..."
100M miles of driving is easily enough time to observe things like fewer near misses, greater distance between the Tesla and other cars, etc. In math terms, what Tesla is most likely doing is working off some model relating driving behavior with accident rates and observing that 1 single accident does not invalidate this model.
This is from http://www.rand.org/pubs/research_reports/RR1478.html
I'd love to hear any problems with RAND's analysis, because it'd be great for this to not be the case.
But each of those assumptions is problematic at best. It's a variant of the "Soldiers in Vietnam" problem. In 1961, the U.S. had 3,200 soldiers in Vietnam, and only 16 of them died. If straight extrapolation worked, you could infer that in 1968, when troop levels peaked at 536,000, we'd be looking at 2,680 deaths.
Actually, 16,899 American soldiers died in 1968. The mortality rate climbed 6x, because the war became quite different. American troops were exposed to all sorts of lethal situations that weren't part of the mix in 1961. Data is here
In Tesla's case, it's likely that a lot of the test miles so far have been incurred on California freeways or their equivalent, where lanes are wide, markings are good, rain is rare and snow is unheard of. Start moving more of the traffic load to narrower roads where nastier weather is common, and you're adding stress factors. Demographics change, too. The high price tag of Tesla vehicles today means that most drivers are likely to be in their prime earning/wealth years (let's say 35 to 70). If the technology becomes more ubiquitous, we'll need to see how it performs in the hands of teenagers and the elderly, too. Once again, the risks rise.
Again, please note that all I'm saying is that the secondary metrics are the primary predictor here, since they are statistically significant. The primary metric is just an order-of-magnitude check, since we don't have enough data (note the width of my posterior) to do more than reject the secondary metrics if they are crazily off.
Finding that the lower end of a credible interval near zero is an order of magnitude below the top end is pretty common. It just means you need more data; the Beta hasn't approached a Gaussian yet.
In : from scipy.stats import beta
In : from numpy import percentile
In : # draw 10M samples from Beta(1, 1e8+1); rvs needs an integer sample count
In : data = beta(1, 100e6 + 1).rvs(int(1e7))
In : # 0.5th, 50th, and 99.5th percentiles of the sampled rate
In : percentile(data, 0.5), percentile(data, 50), percentile(data, 99.5)
Out: (5.0460240722165642e-11, 6.9295652915902601e-09, 5.3005747758871231e-08)
Back to basics: If you assume that each mile is uncorrelated, (on the large not a crazy assumption), then the overall number of accidents over N miles can be modeled by a binomial distribution with N trials and some crash probability p. You're trying to use bayesian analysis to estimate that p.
The core concept there is that the probability of the observation given a certain p is directly proportional to the probability of the p given a certain observation.
Given p, it's trivial to compute the chance of the observation; just plug it into the pmf. For exactly one crash (and a small p), you can even quite accurately approximate it by Np/exp(Np), thanks to the trivial binomial coefficient for k=1 and the Taylor expansion of the near-1 base raised to a large power.
So regardless of the numerical analysis that led you to believe otherwise: you can just plug in numbers and get a numeric approximation of the (scaled) distribution of the probability mass for p. When I do that, with either an exponential prior (i.e. p in 0.1-0.01 is as likely as p in 0.01-0.001) or a uniform prior (i.e. p in 0.1-0.2 is as likely as p in 0.2-0.3), I get the unsurprising result that the maximum likelihood estimate for p is 1/N, and that the median p is a little lower, at around 1 accident in 1.5e8 miles.
In practice that means that common sense is correct: if you observe 1 accident in N miles, a reasonable approximation of the best guess of the true accident rate per mile is 1/N.
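A minimal numeric sketch of that (my own grid and prior choices, so treat the details loosely): with the k=1 approximation L(p) ≈ Np·exp(-Np), the peak sits exactly at p = 1/N, and the 1/p ("exponential") prior pulls the median down to roughly 1 in 1.4e8 miles.

    # Minimal sketch, my own grid: posterior over the per-mile crash rate p
    # after exactly 1 crash in N = 1e8 miles, using L(p) ~ N*p*exp(-N*p).
    import numpy as np

    N = 1e8
    p = np.logspace(-10, -6, 200_000)          # candidate crash rates per mile

    post_uniform = N * p * np.exp(-N * p)      # uniform prior (unnormalized)
    p_map = p[np.argmax(post_uniform)]
    print(f"MAP (uniform prior): 1 crash per {1/p_map:.2e} miles")   # ~1e8, i.e. 1/N

    post_logu = np.exp(-N * p)                 # 1/p prior (the 'exponential' prior above)
    cdf = np.cumsum(post_logu * np.gradient(p))
    p_med = p[np.searchsorted(cdf / cdf[-1], 0.5)]
    print(f"Median (1/p prior): 1 crash per {1/p_med:.2e} miles")    # ~1.4e8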
Now, you can do this analytically, and the conjugate prior of the binomial is indeed a beta distribution. But that forces you to pick parameters for the prior from that beta distribution, and it's easy to do that wrongly.
A reasonable prior might be Beta[1,3], which assumes p is likely closer to 0 than 1. Analytically, then (http://www.johndcook.com/blog/conjugate_prior_diagram/), the posterior distribution is Beta[1+1, 3+1e8-1], with a mean p of 2e-8 (but note that the details of the chosen parameters definitely impact the outcome). This is quite close to the numerically derived p, though probably less accurate since it is constrained to an unreasonable prior.
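Spelled out in code, for what it's worth (same Beta[1,3] prior choice as above, with all the caveats about picking it):

    # Conjugate update with the Beta[1,3] prior from above: 1 crash in N = 1e8 miles.
    from scipy.stats import beta

    N = 100_000_000
    posterior = beta(1 + 1, 3 + N - 1)     # Beta[a + k, b + (n - k)] with k = 1 crash
    print(posterior.mean())                # ~2e-8
    print(posterior.interval(0.99))        # roughly (1e-9, 7.4e-8) -- still very wide

The 99% interval spanning nearly two orders of magnitude is the "wide posterior" point from upthread: the data just isn't there yet.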
So I'm not sure exactly what your python script is computing, but common sense, numerical analysis, and Bayesian analysis all arrive at roughly 1/N in my case - you probably made some mistake in your reasoning somewhere (if you elaborate in detail how you arrived at this numpy snippet, maybe we'll find the error).
Note that this is not particularly sensitive to whether you choose beta[1,1] or Beta[1,3] as the prior.
Is there some principled reason to pick one over the other, or is it just "whatever looks reasonable"?
I also can't explain why plain numerical simulation doesn't come up with almost identical p values - I get 1/1.5e8 to 1/1.7e8, but that's 6.25e-9. I mean, it's the same order of magnitude, but it's nevertheless quite different.
Oh well, unless you've got some insight there, I guess I'm going to give up on that analysis - that kind of nitty gritty stuff is a timesink ;-).
This is also the company that thinks level 4 self-driving can be done without LIDAR. They're idiots, frankly, completely ignoring the difficulties of computer vision (both in computation and optics) that will not be overcome anytime soon. Even their fatality seems to have been rooted in their camera's inability to distinguish the color of the truck from the color of the sky.
Tesla hasn't even taught a car to navigate an intersection on its own, much less all the things Google's car is capable of these days.
I think what all this shows is the importance of regulators getting off their asses and establishing standards for self-driving vehicles. Perhaps the fatality was the event that will get them in gear. Poor guy.
They probably mean that 1 occurrence is not enough to invalidate a statistical model built on wealth of other data.
It has data from 1994 to 2014, including this plus several other statistics. In particular, it looks like they are tracking on the order of trillions of miles driven per year, so you're right, making any kind of statement after only 100 million is more shameful marketing than anything else.
Edit: this is US-only, while Tesla is claiming vs. worldwide. Clearly more miles driven worldwide, and probably higher deaths per 100m miles.
The proper comparison would be with non-Autopilot Tesla miles. But that comparison makes Autopilot look bad.
Remove the mileage driven from the cars that have never used autopilot, to control for drivers too cautious to trust their lives to beta software, and you should have your demographic.
You raise whether there is statistical significance about which is safer, autopilot or a typical driver. However it's worth noting that in a lawsuit the burden of proof is on the plaintiff.
So at best they matched what a human alone can do. Meaning the autopilot has done nothing at all.
>Seventy-four per cent of road traffic deaths occur in middle-income countries, which account for 70% of the world's population but only 53% of the world's registered vehicles. In low-income countries it is even worse: only one per cent of the world's registered cars produce 16% of the world's road traffic deaths.
I also stumbled over this statement, but I think they are using all telemetry data (e.g. what the autopilot would have done vs. what the human actually did) in all accidents, not just fatalities per miles driven.
>Autopilot was not operating as designed and as described to users: specifically, as a driver assistance system that maintains a vehicle's position in lane and adjusts the vehicle's speed to match surrounding traffic.
My problem is with Autopilot's branding - it's called AUTOPILOT.
The name isn't "Maintain Lane Position" or "Cruise Distance" or something boring that describes it better - it has AUTO in the name.
Typical drivers aren't airline pilots who complete thousands of hours in flight training and have heavily regulated schedules. We're just people who are busy and looking for solutions to our problems.
If Tesla doesn't want people to think Autopilot functions as crash avoidance/smart vehicle control/better than humans in all situations or blame Tesla for accidents (whether human or machine is at fault) it should come up with a less sexy name.
It's not designed for collision avoidance, runway taxiing, emergency situations, short/soft field landings or departures. It's occasionally used for normal landings (according to https://www.quora.com/How-often-are-commercial-flights-lande...) but it doesn't seem prevalent.
Obstacles don't suddenly pop up in mid-air, and a lot of infrastructure makes sure other traffic is nicely separated at all times.
Thus a plane actually can do most of a flight automatically and it's okay if it has to fall back on not fully attentive humans in edge-cases, because there is some time for error-correction.
The car equivalent might be if highways had lanes separated by walls and cars could detect obstacles a few hundred meters away, then taking the hands of the wheel wouldn't be an issue. On real-world streets, you can't be as hands-off as you could be in a plane at 35,000 feet.
Lockheed L1011, 1972. Flight trials led to demonstration of a fully automated trans-continental flight, from rest to rest. Pilots did not touch the controls.
Incidentally also the only airliner which was certified for operational Cat IIIC autoland, with zero visibility. Frequently used at London Heathrow, but it needed a ground-control radar to guide the pilots to the gate once the aircraft had stopped itself on the runway.
Aircraft autopilots are technically capable of completely controlling the flight but are restricted from doing so by technical provision ( e.g. lack of rearward-facing ILS / MLS for departure ) or regulatory caution ( e.g. not executing TCAS collision-resolution automatically, even though every FBW Airbus can do this ).
Full-authority automated ground collision avoidance is now on many F-16 fighters. It's a retrofit, and 440 aircraft had been retrofitted by 2015. First avoided crash was in 2015. Here's a test where the pilot goes into a hard turn and releases the controls, and the automated GCAS, at the last second, flips the plane upright and goes into a steep climb. Because this is for fighters, it doesn't take over until it absolutely has to. The pilot can still fly low and zoom through mountain passes. The first users of this system, the Swedish air force, said "you can't fly any lower". It's a development of Lockheed's Skunk Works.
This technology is supposed to go into the F-35, but it's not in yet.
This may eventually filter down to commercial airliners, but they'd need more capable radars that can profile the ground. This is not GPS and map based; it's looking at the real world.
But, it matches neither the common use case for airplane autopilot systems nor the common perception--right or wrong--of what those autopilot systems do. So, Tesla enjoys the cachet that comes with that latter perception, while relegating the true description to the "fine print" of the owner's manual.
If the public consistently misunderstands the term and their life depends on it, the term has to be changed, period. This is no place for semantic nazism.
Tesla autopilot works like an aeroplane autopilot actually does, not how people seem to think it does.
Your plane autopilot analogy fails in a crucial manner:
First, to get a license to fly a plane, you have to undergo much more rigorous training and much more stringent scrutiny than an ordinary Joe/Jane does to get a license to drive a Tesla.
Another: a pilot does not fly his/her plane as much as our ordinary Joe/Jane drives his/her car.
Yet another: there are co-pilots on planes,
and the list goes on.
The foolish management at Tesla should have labeled their assistance system just what it is, a 'semi-automatic assistance system', and they could have been slightly more prudent by clearly mentioning its "dangerous" components upfront, rather than cleaning up the shit now.
It's a sad affair. I had/have higher hopes for Tesla. But they should abandon their foolish autopilot thingy, to begin with, now.
If you are referring to the parent's analogy, then yes, I said his/her analogy fails.
Tesla is not making it mandatory for its buyers to go through serious, rigorous training to use its much-touted Autopilot, which is a freaking dangerous thing, as it's far from being a real autopilot - it's only a half-baked semi-autopilot, potentially riddled with a lot of hidden AI bugs that their machine learning team may find hard to even locate.
The airplane-autopilot analogy the parent is making to justify Tesla's claims fails miserably, IMO, anyway.
But if a lawsuit gets filed, Tesla will have a very hard time justifying this type of claim.
Another important thing (from their business-success point of view) is that this incident, and their shameless justification of the faults in their much-touted Autopilot, will tarnish (already has tarnished, to some extent) their image with the public. They can't just point now to the fucking warnings they originally published in fine print and got unsuspecting users to sign, and expect users to happily purchase their now-perceived death traps.
Competitors just have to point to this death-trap Autopilot feature of Tesla's to turn a potential buyer in their favor.
To clarify, I'm not saying Tesla made a dumb or unforgivable misstep (there will always be dumber customers), but if they're going to do a (literal) post-mortem, they need to acknowledge that their branding is a factor.
If you have a human-supervised safety-critical automated system (where "difficult" situations are to be overridden by the human) you end up needing the human supervisor to be much more skilled (and often faster-reacting) than they would have needed to be just to do the operation manually.
Musk made the call. He might not have proposed it, but given how involved he is with the marketing and PR aspect of Tesla, there's no way he didn't OK the decision to call it Autopilot.
The "one forward radar" was the decision that killed. Tesla cars are radar blind at windshield height, which is why the car underran a semitrailer from the side. Most automotive radars have terrible resolution and no steering in elevation. There are better radars, but they haven't made it down to automotive yet.
I agree with your comment regarding their branding choice, and I'd add that the design itself is flawed: specifically, the entire notion of the car taking over specific reaction-based functions, but a.) leaving other such functions to the driver and b.) requiring the driver to supervise and override according to split second situations.
So does autocorrect, but I don't see people complaining that banging on the keyboard doesn't produce sonnets.
This whole "auto" thing is ridiculous.
By the way, it's an automobile and has been for a while ... where's the expectation that it will drive itself? Should we not call them automobiles anymore, in case someone gets the wrong impression? This is silly.
Their implementation is much better than the competition.
> Or the fact that they don't push so much on keeping the hands on the wheel to make this more comfortable for the users vs. the rest.
I'm glad they don't. Here is why:
But a bucketload of rage-filled complaints about when autocorrect gets it wrong and your email to Sinead was modified to "Dear Pinhead"... :)
It's especially hard when you sell the Autopilot for $2,500-$3,000.
Reference to the recent accidents.
https://www.tesla.com/models/design (see pricing)
Automobiles contain the prefix auto-, yet nobody assumes the car is self-driving. Most people understand it to mean gears don't need to be changed manually (not applicable to Tesla, since it has a single gear and full torque from a standstill).
Actually, in that regard yes they do -- "auto-mobile" = "moves itself", as in, you don't have to pedal or Fred Flintstone it...
> Most people understand it to mean gears don't need to be changed manually
No, that'd be "automatic gearbox". As in "Do you drive a manual or an automatic?"
So "autopilot" would suggest it does the piloting as well (ie, the stuff the person sitting at the controls -- the common view of a "pilot" -- normally does)
Quite a fitting name for an AUTOmobile, don't you think?
To rate an automatic driving system, you want to look at accident rates, not fatality rates. Accident rates reflect how well the control system works. Fatality rates reflect crash survivability. Tesla needs to publish their crash data. That's going to be disclosed, because the NHTSA ordered Tesla to submit that information.
California has a fatality rate of 0.94 per 100 million miles traveled. That's lower than the US average. But it's not broken down by freeway/non freeway. (You can request a login to query the database directly and download data, and it might be possible to compute freeway accident rates.)
His point was that miles driven on freeway (when you are using Tesla's autopilot) seems less dangerous than miles driven everywhere else.
He found the UK data and wonders if the same applies to the US. The implication is that if it did, then Tesla would be comparing apples with oranges in its 100m miles reference.
And neither does Autopilot work in heavy rain and in other circumstances that are possibly higher in risk (although I'm not sure driving in the rain is actually more dangerous, due to risk compensation)
Looks like in the US motorway driving is about twice as safe as "all roads", with a fatality around every 200 million miles.
I love Tesla, but they are SO weak at taking criticism or realising when they make a mistake.
 http://a16z.com/2016/06/29/feifei-li-a16z-professor-in-resid... (you'll need to listen to the podcast though)
I think there's a strong recognition that self-driving vehicles, when they can be made to happen generally, will be a significant social good. And that it's tricky to get there unless society is willing to put something out onto the streets.
It's taken fifty or more years of popular human-driven vehicles to get to the stage that most of our cars are pretty safe, and quite a lot of effort in improving road design too.
Eventually, though, I suspect it won't be solved until we redesign the roads. A significant part of rail safety is that the signalling system can sense whether there is a train on a stretch of line. (via the rather simple technique that the axles form an electrical connection between the two rails) Right now, it's as if we're trying to do autonomous traffic by an ant colony model -- independent agents that know nothing about each other except what they can sense. Which is always going to be harder than if the road can help them out too.
I agree autonomous cars hold great potential. But that is precisely why Tesla should not ship a "beta" feature with lives at stake, as that risks squandering that potential. If this is the response to a single Autopilot user killing himself in an accident, imagine the potential backlash if more accidents crop up. Or worse, an Autopilot user kills someone else in an accident.
Critics of Tesla's Autopilot are not only concerned about the danger to Tesla drivers, but to the industry as a whole ("Jaguar engineer: A mishap with Tesla's Autopilot could set back self-driving cars by a decade"): http://mashable.com/2015/12/12/jaguar-semi-autonomy/#QMl5uUB...
Tesla argues that the data it collects from Autopilot users is worth the risk because it can help Tesla develop true self-driving cars faster, but other companies pursuing self-driving cars (including Google) have opted for more controlled testing instead of conducting a grand experiment with customers and the general public.
Tesla relabeling what other OEMs call Advanced Driver Assistance Systems (ADAS) under the Autopilot moniker is dishonest and misleading.
The trick is to do it economically. Do some of the major trucking routes first, as well as common city roads - i.e., automate trucks and buses first. Especially as, to begin with, you'd probably want to exclude bicycles, horses, etc., so that means not doing every road.
But that's just speculation.
Also, not wanting to use autopilot is different than thinking Tesla should be legally responsible for all accidents that occur when autopilot is on. If they get sued for it then they'll likely just remotely disable it on all of their cars with a software update and I'm not sure anybody wins in that scenario.
The problem here is Tesla's Autodrive implementation. I think it is fair that questions are asked.
That is such a weak statistical claim that it borders on the disingenuous.
Previous discussion: https://news.ycombinator.com/item?id=12082893
edit: I'm being downvoted for this, but I wasn't using "belligerent" negatively here; I was wondering aloud whether Tesla's characteristically aggressive approach to damage control is the result of direct involvement from Musk. Doesn't seem that crazy to imagine that it is.
FWIW, when I owned cars, they were VWs. I loved driving them, and I never had mechanical problems, but the electrical components were a flaky mess.
Are they perfect cars? No. But the company operates in a fundamentally different manner than the rest of the auto industry, and that is exciting to some.
Electric cars are more efficient than ICE vehicles and they pollute less, but they are not powered by rainbows and dreams of utopia. Zero emissions at the tailpipe is nice, but don't try to make the claim that these cars are not following that grand SV tradition of moving the negative externalities somewhere else so that someone else can pay the cost.
Sure, nothing there is on par with the Model S, but is that because of an R&D deficit? Or because nobody is that interested in going after that market (> $85k EVs) while they can milk their "old" tech for a few more years and keep the profits? This ugly Nissan Leaf has about one third of the Model S's range, but also one third of the battery. Once they up that number and put it into a Qashqai, Tesla won't look that good anymore.
I'm not against Tesla; I'd gladly drive one. But they're running out of time. The Model 3 is a great promise today; by 2020 it could be just an exotic option for those not willing to drive an electric VW Passat.
I really would like to hear Tesla's response to this criticism of their apparently flawed statistical analysis.
you might try the "deaths per passenger mile" metric, or "deaths per million passenger-miles".
On that metric, long-distance air travel is very safe, as one trip transports hundreds of people a very long way, and motorcycles are at the other end of the scale.
The human factors here are tough. But safe design needs to account for human factors. The enthusiast community seems especially prone to over-trusting the autopilot, and that's something Tesla should be examining in their safeguards.
Obviously, a dead man wouldn't be available to testify.
> That Tesla Autopilot had been safely used in over 100 million miles of driving by tens of thousands of customers worldwide, with zero confirmed fatalities and a wealth of internal data demonstrating safer, more predictable vehicle control performance when the system is properly used.
Why do they do this? I can understand it when government property is not insured, e.g. UK Civil Service, as the enterprise is so vast and general taxation can fill the gaps. I can also understand that some things can't be insured, e.g. nuclear power plants, but why does Tesla 'vertically integrate' insurance, particularly given that the product is statistically likely to kill someone in due course?
That's what insurance companies do for risks with a long tail.
Think of it as buying car insurance with a very high excess. Those can be very cheap.
Thing is, insurers live by the law of large numbers (https://en.wikipedia.org/wiki/Law_of_large_numbers).
Cutting away the high risk, low payout parts of the insurance decreases the number of payouts significantly. That does mean variance in payouts goes up.
So, insurers will either need to find lots of new customers to get N up again, or relatively high amounts of capital to survive those high payouts.
If they think they cannot find those customers, they need more capital. To finance that, they need more income, which means charging you more, which means fewer customers, which means charging you even more, etc.
I'm sure you get that insurance through Lloyd's, but it wouldn't be cheap.
For car insurance, isn't an excess you can choose a standard feature? (Some insurance companies give you a no-claims rebate, which is the same as a high excess - only shifted in time.) I don't know about house insurance. What does house insurance insure against?
You know the story about the cat in the microwave: the old lady just wanted a quick way to dry her beloved pet.
That's the same thing with the guy driving his Tesla with Autopilot on. He just believed in the marketing campaign.
If you're not aware that potential dangers still exist when you step into a car, you shouldn't be driving the car (which is a shame as it's a fantastic car).
Sorry. Tesla is not at fault here - however much people want to call it that way.
The Model S is not some magical car designed by aliens. It's a machine. Problems may occur. We are not at autonomous-vehicle stage yet. However Autopilot is a damned comfortable upgrade compared to the old cruise control.
I can't believe people are blaming Mr Musk or the marketing department for people not taking responsibility or being careful when they get into a car. As they should in any car. Especially any car with autopilot like capabilities.
1. "STATISTICALLY SAFER" CUSTOMERS. Yes, this statement makes no sense. One fatal crash is not a large enough sample size to make such conclusion. However, this article was aimed not at Hacker News readers, but at average buyers. Most of them do not have a firm grasp of high school math, so for them "statistically safer" means just "don't worry." And indeed there are reasons for them to worry, given that independent news agencies continually publish hysterical things (It is a “a wake-up call!” “Reassess” self-driving cars! The crash “is raising safety concerns for everyone in Florida!” ). Tesla's response was nothing but a necessary defence. Or did you expect them to say, "You know, there are not enough data yet, so let's wait until 10 or so more people die, and then we will draw conclusions." This is much more logical, but I feel that customers wouldn't like it.
2. WHY IT IS CALLED "AUTOPILOT." This is just marketing. They couldn't sell it under the name "The Beta Version Of The System That Keeps Your Vehicle In Lane As Long As You Keep Your Hands On The Steering Wheel And Are Ready To Regain Control At Any Moment™." And honestly, I do not think that even relatively stupid customers will just press the button and hope for the best without reading what the Autopilot is all about in advance.
In my opinion, it is now a difficult time for Tesla, and we should not criticise it for trying to stay afloat.
EDIT: You might think that the phrase "trying to stay afloat" is unnecessary pathos, since a single crash, even coupled with a bunch of nonsense news articles, cannot lead to anything serious. However, history shows it can. In 2000, a Concorde crashed during takeoff, killing everyone on board. The event was caused by metal debris on the runway, not by some problem with the plane itself. Nevertheless, Concorde lost its reputation as one of the safest planes in the world. Passenger numbers plummeted, and Concorde retired three years later. That crash is the number one reason why it now takes 12 hours to get from Europe to America.