Misfortune (tesla.com)
224 points by dwaxe on July 17, 2016 | 174 comments



From Tesla's statement: "contrasted against worldwide accident data, customers using Autopilot are statistically safer than those not using it at all"

I think this is a foolish statement: fatal accidents in the US are 1.3 per 100M miles driven. Tesla reports Autopilot has driven 100M miles. That's not enough data to draw this kind of conclusion.

There's a parallel here with software testing. I've seen bugs that reproduce 25% of the time, for example, where a single test run takes 10 minutes. Our test guys have great intentions, but if they test 5 times and see no bug, they think the bug is gone. There is no instinctive understanding that they have insufficient data to draw a conclusion.
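A quick back-of-envelope shows how weak that evidence is (using the 25% reproduction rate and 5 test runs from the example above):

```python
# If a bug reproduces 25% of the time, the chance that 5 clean test runs
# happen purely by luck is (0.75)^5 -- far too high to declare the bug fixed.
p_all_pass = 0.75 ** 5
print(round(p_all_pass, 3))  # 0.237, i.e. nearly a 1-in-4 false "all clear"
```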


> I think this is a foolish statement: fatal accidents in the US are 1.3 per 100M miles driven. Tesla reports autopilot has driven 100M miles. That's not enough data collected to draw this kind of conclusion.

Also, not all miles are equal. I'd hope that people are enabling Autopilot primarily while driving on straightish roads with good driving conditions -- in other words, at times when the rate of fatalities with a human driver is far less than 1.3 per 100M miles.


I thought most fatalities happen on the highway in good driving conditions, where the road is so smooth and boring that humans lose attention, fall asleep, or stare at their phones. Speeds are much higher on those roads, so crashes carry a bigger chance of a rollover and a bigger impact.


On a per mile basis, they most definitely are not. Here's one set of US DOT numbers[0] from 2007 that show the rate on interstates is 0.7/100M miles while the rate is highest on collector roads at 1.99/100M miles followed closely by local roads at 1.94/100M miles. Interstates have less than half the fatalities on a per mile basis as local roads.

[0] http://safety.fhwa.dot.gov/speedmgt/data_facts/docs/fataltbl...


Surely per mile makes no sense, as you travel faster on a motorway?

The only measure that makes sense in this context is time; bringing distance into it is statistical trickery.


Absolutely not the case. If you're driving from A to B, you have to cover a similar distance regardless of the type of road. (Yeah, not completely true.) It turns out that highways are both quicker and safer, but the two are not necessarily correlated.


That's not how it works.

If you expose yourself on a live fire range for 1 second and travel 100m/s, it's far less dangerous than exposing yourself on a live fire range for 100 seconds travelling at 1m/s.

It's the same thing with cars, the longer you are physically on the road timewise, the longer you are exposing yourself to the chance of an accident, regardless of distance travelled.

Not sure why you think it's "absolute", you haven't listed a single reason or argument, just a tautology.


The analogy is poor: on a live fire range the bullets move much more quickly than you. The speed of the bullets relative to you is largely determined by the bullets, and your speed is irrelevant - so how fast you move is naturally irrelevant (assuming random fire, of course). The chance of collision will be determined largely by the number of bullet-meters made. If you will: it's as if the bullets are exploring the space in a probabilistic/ballistic fashion - the more bullets and the faster they go, the more they can explore.

Conversely, when you hit something on the highway, it's likely to be moving much more slowly than you (e.g. the chicken crossing the road), or at a speed similar to you (the drunk guy driving on the left side). So your speed matters - you're more likely to reach that obstacle in any given period of time if you can explore more space in that time.


Correcting for time (I am making up average speeds):

Highway: 49.0 per 100M hours @ 70 mph
Collector: 99.5 per 100M hours @ 50 mph
Local: 77.6 per 100M hours @ 40 mph
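For reference, the conversion behind those numbers (the average speeds are assumed, as the comment notes; the per-mile rates are the DOT figures cited upthread):

```python
# Convert DOT per-mile fatality rates to per-hour rates at assumed average speeds:
# rate per 100M hours = (rate per 100M miles) * (miles per hour).
rates_per_100m_miles = {"Interstate": 0.70, "Collector": 1.99, "Local": 1.94}
assumed_mph = {"Interstate": 70, "Collector": 50, "Local": 40}

per_100m_hours = {road: rates_per_100m_miles[road] * assumed_mph[road]
                  for road in rates_per_100m_miles}
for road, rate in per_100m_hours.items():
    print(f"{road}: {rate:.1f} per 100M hours")
```

On a per-hour basis the interstate still comes out safest, though the gap narrows.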


That's not how roads work: the average speed on a road isn't simply its maximum speed. That's especially true in towns compared to motorways, which will skew the comparison heavily in favor of motorways.


Thanks for pointing this out, I was trying to figure out why my brain didn't like per mile for things moving at vastly different speeds.


When traveling, most people do so for a given distance, from hither to yon, not for a given period of time.


Again, not true, you fundamentally misunderstand the majority of car trips.

If my job is 2 hours away because there is no motorway, I will get a different job. If there is a motorway, making the commute 40mins, I will take that job.

People care how long the journey is, not how far the journey is.

People live by commute time, not commute distance. Same for service areas: if the roads all had low speed limits, you would need more drivers; they wouldn't just drive for longer.


Personally when facing decision between highways, local roads and other means of transportation I almost always have a specific destination in mind. Accidents per mile is the statistic I want.

Accidents per hour of driving would be more useful for public policymakers – given a fixed average commute of 1 hour per day, what are the tradeoffs of building highways (worker mobility, commerce, pollution, landscape, accidents)?


Out of curiosity, if the motorway is shut down due to an accident or construction, would you drive for 40 minutes and then pull over to the side of the road and work from there?


At least in Germany, driving on the Autobahn is about 3x to 4x safer than driving on other road types: https://en.wikipedia.org/wiki/Autobahn#Safety, not sure how it compares to the US, but "fast and boring roads" seem to be safer.


Getting a driver's license in Germany requires a lot more training (and money) than in the US. The people who do get licenses in Germany are generally better prepared. As a result, they get into fewer accidents.

https://en.wikipedia.org/wiki/Driving_licence_in_Germany


Germany is an unsuitable country for comparison, as they have many highways without speed limits. I would argue that this invalidates the boring part.


I think the missing speed limit isn't that big of a factor, as most traffic still flows at around the same not-so-high speed, maybe somewhere between 130 and 160 km/h (except for trucks in the right lane). It's the relative speed differences that matter, and on other roads traffic is stopping and accelerating all the time, with other vehicles, pedestrians, and cyclists crossing the road, which is what makes urban roads so dangerous.


Seems you have never been to Germany ;-) No offense. Most highways have a speed limit; only some have none.

And most of the highways that don't have a speed limit are quite narrow so while it is possible, no sane driver would do so.

If you look at highway A5 as an example. This highway has 8 lanes, which makes driving fast quite dull since you can not really feel the speed relative to the environment.


The figure I've seen is 50% of Autobahn is without speed limit, 25% with permanent limits (due to noise control, permanently high traffic volume or intersections) and 25% with temporary limits (depending on buildings works, weather, congestion). Of course the perception is probably worse because more people use the congested parts with speed limits, and due to the limits they spend more time on those parts...


It's the same in all European countries.


Germany is the only EU country having roads without speed limits - https://en.wikipedia.org/wiki/Speed_limit#Roads_without_spee... .

Outside of EU, the only other European location that would qualify would be Isle of Man (as referenced in that article).


And one road in the Northern Territory, Australia: https://en.wikipedia.org/wiki/Speed_limits_in_Australia


I don't think the Isle of Man has any dual carriageways, it certainly doesn't have any motorways.


I was referring to the highways being more safe than other road types.


In the UK the motorways (straight, multi-lane, higher speed limits, barriers) are the safest roads, in terms of death or serious injury per vehicle mile. The most dangerous roads are typically single carriage A roads that are twisty, contain blind junctions and so on.

For those familiar with the UK road network see: http://roadsafetyfoundation.org/media/32639/rrm_britain_2015...


I would agree that autopilot miles are not equal to overall miles. I was not able to find data on which are most risky. My guess is that most accidents happen on surface streets but most fatalities on highways.


I think most accidents are within a couple miles of home. It's so familiar people stop paying attention.


Is that most accidents, or most per mile driven? Because for many, quite a bit of driving is within a few miles from home, skewing the data.


Couple this with the fact that most people are still concentrating on the road while Autopilot is in action. So in effect you still have a human driver; they are just not fully engaged with the car's controls.

I know when I tried my friend's auto-pilot that I was concentrating 10x more on the road than I would be normally. Obviously if I got more comfortable with the tech then I would be concentrating less but I can't imagine that I would ever stop paying attention unless I knew the system was fool-proof.


>most people are still concentrating on the road while autopilot is in action //

This must be harder than concentrating when you're actively engaged?


RAND conducted a study that criticized the statistical analysis that most autonomous vehicle companies are doing to try and prove their safety:

"Given that current traffic fatalities and injuries are rare events compared with vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their safety in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles — an impossible proposition if the aim is to demonstrate performance prior to releasing them for consumer use. Our findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability."

http://www.rand.org/pubs/research_reports/RR1478.html


Google does a lot of driving in simulation, using data captured by real vehicles. They log all the raw sensor data on the real vehicles, so they can do a full playback. They do far more miles in simulation than they do in the real world. That's how they debug, and how they regression-test.

For near-misses, disconnects, poor decisions, and accidents, they replay the event in the simulator using the live data, and work on the software until that won't happen again. Analyzing problems which didn't rise to the level of an accident provides far more situations to analyze. They're not limited to just accidents.

See Chris Urmson's talk at SXSW, which has lots of playbacks of situations Google cars have encountered.[1]

[1] https://www.youtube.com/watch?v=Uj-rK8V-rik


RAND's point is true but irrelevant. As Tesla likes to point out, when you roll out to an entire fleet (rather than a few dozen test cars), you rack up hundreds of millions of miles almost overnight. 'Hundreds of millions' may sound like a lot, but Americans drive trillions of miles per year.


One of the points addressed in this paper is the degree of testing that already needs to be done in order to proclaim that these systems are safer than humans with statistical rigour. Tesla says that their systems are _statistically safer_ than human drivers, but there is simply not enough data to make this conclusion. I respectfully suggest that you read the paper.


I did read the RAND paper, when it came out months ago, and I double-checked the traffic-accidents-per-mile citation and the power calculation as well. Their point is irrelevant because their fleet-size estimate is ludicrously small, and their statistics are a little dodgy as well: it should be a one-tailed test, since the important question is only whether the Tesla is worse than a human driver. And if one wanted to debate the statistics, this is somewhere that Bayesian decision theory minimizing expected lives lost would be much more appropriate; that approach would roll out self-driving cars well before a two-sided binomial test yielded p<0.05.


From the paper: "Therefore, at least for fatalities and injuries, test-driving alone cannot provide sufficient evidence for demonstrating autonomous vehicle safety."

Note that the number of crashes per 100 million miles is a lot bigger than the number of injuries. One would hope a statement about safety would look at all of the data.


I still can't believe that they are using "statistically safer" in an official response. It has now been widely discussed everywhere that one occurrence is not enough to draw any reliable statistical conclusion!

PS: Besides, should one more fatality occur soon, by their own standard the Model S would become statistically more deadly. Too much risk of backfire; this PR is madness...


That's not really true. Using a beta distribution model (i.e. treating each mile driven as a single event), we can say that with 99% probability the true accident rate is at least 1/1B miles and no more than 7/100M miles.

Before the accident occurred, these numbers were 1/100B miles and 5.3/100M miles.

If a second fatality occurred, those numbers would increase to 3.4/1B miles and 9.3/100M miles.

However, I'm pretty sure that their claims of safety are based on more than just this calculation: "...a wealth of internal data demonstrating safer, more predictable vehicle control performance..."

100M miles of driving is easily enough time to observe things like fewer near misses, greater distance between the Tesla and other cars, etc. In math terms, what Tesla is most likely doing is working off some model relating driving behavior with accident rates and observing that 1 single accident does not invalidate this model.


"How many miles would autonomous vehicles have to be driven to demonstrate that their failure rate is statistically significantly lower than the human driver failure rate? ... Suppose a fully autonomous vehicle fleet had a true fatality rate that was A=20% lower than the human driver fatality rate of 1.09 per 100 million miles, or 0.872 per 100 million miles. We apply Equation 7 to determine the number of miles that must be driven to demonstrate with 95% confidence that this difference is statistically significant ... It would take approximately 5 billion miles to demonstrate that difference. With a fleet of 100 autonomous vehicles test-driven 24 hours a day, 365 days a year at an average speed of 25 miles per hour, this would take about 225 years."

This is from http://www.rand.org/pubs/research_reports/RR1478.html

I'd love to hear any problems with RAND's analysis, because it'd be great for this to not be the case.
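For what it's worth, the "about 225 years" figure follows directly from the assumptions quoted above:

```python
# Back-of-envelope check of RAND's fleet-testing arithmetic (figures from the report).
miles_needed = 5e9                          # miles to show a 20% improvement at 95% confidence
fleet_miles_per_year = 100 * 24 * 365 * 25  # 100 cars, 24 h/day, 365 days/yr, 25 mph
years = miles_needed / fleet_miles_per_year
print(round(years))  # 228, roughly RAND's "about 225 years"
```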


Their calculation is frequentist, but a quick Bayesian calculation gives similar numbers.


Hmmn. If you assume that increased adoption still leaves us with a similar cohort of drivers traveling on a similar collection of roads with similar driving habits, then this is statistically legitimate.

But each of those assumptions is problematic at best. It's a variant of the "Soldiers in Vietnam" problem. In 1961, the U.S. had 3,200 soldiers in Vietnam, and only 16 of them died. If straight extrapolation worked, you could infer that in 1968, when troop levels peaked at 536,000, we'd be looking at 2,680 deaths.

Actually, 16,899 American soldiers died in 1968. The mortality rate climbed 6x, because the war became quite different. American troops were exposed to all sorts of lethal situations that weren't part of the mix in 1961. Data is here http://www.archives.gov/research/military/vietnam-war/casual... http://www.americanwarlibrary.com/vietnam/vwatl.htm
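The extrapolation in question is simple to reproduce (figures as quoted above):

```python
# Straight-line extrapolation from the 1961 Vietnam troop/death figures.
deaths_1961, troops_1961 = 16, 3_200
troops_1968 = 536_000
expected_1968 = troops_1968 * deaths_1961 / troops_1961
print(expected_1968)  # 2680.0, versus the actual 16,899 -- about 6.3x higher
```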

In Tesla's case, it's likely that a lot of the test miles so far have been incurred on California freeways or their equivalent, where lanes are wide, markings are good, rain is rare and snow is unheard of. Start moving more of the traffic load to narrower roads where nastier weather is common, and you're adding stress factors. Demographics change, too. The high price tag of Tesla vehicles today means that most drivers are likely to be in their prime earning/wealth years (let's say 35 to 70). If the technology becomes more ubiquitous, we'll need to see how it performs in the hands of teenagers and the elderly, too. Once again, the risks rise.


This could be true. If so, their secondary metrics (e.g. # of near misses, avg distance to other cars, etc) should rise as Tesla use expands out of CA.

Again, please note that all I'm saying is that the secondary metrics are the primary predictor here, since they are statistically significant. The primary metric is just an order-of-magnitude check, since we don't have enough data (note the width of my posterior) to do more than reject the secondary metrics if they are crazily off.


What are you basing that on? Looks like you've misplaced the decimals there someplace, because you're concluding that the likely rate is around 1/10 of the observed rate. Depending on your priors, you can get some difference of course, but that sounds absurd - can you more precisely explain the method you used to arrive at that conclusion?


Take a uniform prior for gamma = fatality probability | drive one mile. Use Bayes rule. Here is a tutorial:

https://www.chrisstucchio.com/blog/2013/bayesian_analysis_co...

Finding that the lower end of a credible interval close to zero is an order of magnitude below the top end is pretty common. It just means you need more data, that the beta hasn't approached a gaussian yet.


Some people, when their posterior doesn't match the facts, update their prior.


Are you asserting that the posterior I described doesn't match the facts? If so, can you explain?


More detail: you're assuming a binomial likelihood function? What exactly is "uniform prior for gamma" supposed to mean?


Please go read the tutorial I linked to. It explains exactly how this analysis works. Here's the code:

    In [1]: from scipy.stats import beta
    In [2]: from numpy import percentile
    In [3]: data = beta(1, 100e6+1).rvs(int(1e7))
    In [4]: percentile(data, 0.5), percentile(data, 50), percentile(data, 99.5)
    Out[4]: (5.0460240722165642e-11, 6.9295652915902601e-09, 5.3005747758871231e-08)


You're definitely messing something up somewhere in that analysis. Where's that numpy/betadistribution snippet coming from?

Back to basics: if you assume that each mile is uncorrelated (on the whole not a crazy assumption), then the overall number of accidents over N miles can be modeled by a binomial distribution with N trials and some crash probability p. You're trying to use Bayesian analysis to estimate that p.

The core concept there is that the probability of the observation given a certain p is directly proportional to the probability of the p given a certain observation.

Given p, it's trivial to compute the chance of the observation; just plug it into the pmf. For exactly one crash (and a small p), you can even quite accurately approximate it by Np/exp(Np), due to the trivial binomial coefficient for k=1 and the Taylor expansion of the base-near-1 power.

So regardless of the numerical analysis that led you to believe otherwise: you can just plug in numbers and get a numeric approximation of the (scaled) distribution of the probability mass for p. When I do that, both with an exponential prior (i.e. p in 0.1-0.01 is as likely as p in 0.01-0.001) and with a uniform prior (i.e. p in 0.1-0.2 is as likely as p in 0.2-0.3), I get the unsurprising result that the maximum-likelihood estimate for p is 1/N, and that the median p is a little lower, at around 1 accident in 1.5e8 miles.

In practice that means that common sense is correct: if you observe 1 accident in N miles, a reasonable approximation of the best guess of the true accident rate per mile is 1/N.

Now, you can do this analytically, and the conjugate prior of the binomial is indeed a beta distribution. But that forces you to pick parameters for the prior from that beta distribution, and it's easy to do that wrongly.

A reasonable prior might be Beta[1,3], which assumes p is likely closer to 0 than 1. Analytically [then](http://www.johndcook.com/blog/conjugate_prior_diagram/), the posterior distribution is Beta[1+1,3+1e8-1], with a mean p of 2e-8 (but note that the details of the chosen parameters definitely impact the outcome). This is quite close to the numerically derived p, though probably less accurate since it is constrained to an unreasonable prior.

So I'm not sure exactly what your python script is computing, but common sense, numerical analysis, and Bayesian analysis all arrive at roughly 1/N in my case - you probably made some mistake in your reasoning somewhere (if you elaborate in detail how you arrived at this numpy snippet, maybe we'll find the error).
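A quick numeric check of the Np/exp(Np) approximation described earlier in this comment (N and p here are illustrative values, not measured rates):

```python
from math import comb, exp

# Binomial pmf at k=1 versus the Np*exp(-Np) approximation.
N, p = 100_000_000, 1e-8                    # 100M miles, hypothetical per-mile crash probability
exact = comb(N, 1) * p * (1 - p) ** (N - 1)  # exact binomial pmf for one crash
approx = N * p * exp(-N * p)                 # the approximation from the comment
print(exact, approx)                         # both ~0.3679 (= 1/e, since Np = 1)
```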


Whoops, you are right, my posterior should have been Beta[2,100e6+1] (I chose a uniform prior). With that I get a median probability of 1.7e-8, and a mean probability of 2e-8. Good catch!

Note that this is not particularly sensitive to whether you choose beta[1,1] or Beta[1,3] as the prior.
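As a sanity check, the corrected posterior's mean and (approximate) median can be computed in closed form, without the sampling used in the earlier snippet (Beta parameters as in the parent comment):

```python
# Posterior Beta(2, 1e8+1): uniform Beta(1,1) prior updated with 1 crash in 100M miles.
a, b = 2, 100e6 + 1
mean = a / (a + b)                    # exact mean of a Beta distribution
median = (a - 1/3) / (a + b - 2/3)    # standard approximation, valid for a, b > 1
print(mean, median)                   # ~2e-8 and ~1.7e-8, the figures quoted above
```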


Yeah, that makes virtually no difference. I'm not familiar with picking these priors, so I could imagine picking (say) Beta[0.5,1] or Beta[5,30] as prior too, and that does make some difference (as in the prior alpha has some impact).

Is there some principled reason to pick one over the other, or is it just "whatever looks reasonable"?

I also can't explain why plain numerical simulation doesn't come up with almost identical p values - I get 1/1.5e8 to 1/1.7e8, but that's 6.25e-9. I mean, it's the same order of magnitude, but it's nevertheless quite different.

Oh well, unless you've got some insight there, I guess I'm going to give up on that analysis - that kind of nitty gritty stuff is a timesink ;-).


That they are using it in official statements should tell you exactly how intelligent Tesla is being with self-driving technology.

This is also the company that thinks level 4 self-driving can be done without LIDAR. They're idiots, frankly, completely ignoring the difficulties of computer vision (both in computation and optics) that will not be overcome anytime soon. Even their fatality seems to have been rooted in their camera's inability to distinguish the color of the truck from the color of the sky.

Tesla hasn't even taught a car to navigate an intersection on its own, much less all the things Google's car is capable of these days.

I think what all this shows is the importance of regulators getting off their asses and establishing standards for self-driving vehicles. Perhaps the fatality was the event that will get them in gear. Poor guy.


Probably because "Our cars are super safe in perfect weather conditions; otherwise you are on your own" wouldn't sell very well outside of California.


> I still can't believe that they are using "statistically safer" in an official response. It has now been widely discussed everywhere that one occurrence is not enough to draw any reliable statistical conclusion!

They probably mean that 1 occurrence is not enough to invalidate a statistical model built on wealth of other data.


Except that the other data is certainly suspect. I mean, what other source of data can they possibly have that is realistic enough and large enough to be more significant than 100M miles of actual usage? At this level, it's the odd, unpredictable events that are going to get you, and whatever models they have cannot include all of those.


I was wondering about the 1.3/100m number and found this:

http://www-fars.nhtsa.dot.gov/Main/index.aspx

It has data from 1994 to 2014, including this plus several other statistics. In particular, it looks like they are tracking on the order of trillions of miles driven per year, so you're right, making any kind of statement after only 100 million is more shameful marketing than anything else.

Edit: this is US-only, while Tesla is claiming vs. worldwide. Clearly more miles driven worldwide, and probably higher deaths per 100m miles.


Not only are the statistics a big issue, but the apples-to-oranges comparison is what really irks me. The NHTSA average is over all car models on the road. Even in 2014, a non-negligible fraction of those are cars without even decent airbags and crumple zones.

The proper comparison would be with non-Autopilot Tesla miles. But that comparison makes Autopilot look bad.


It also seems like they should control for the demographic subset of drivers who own Teslas. It seems possible that affluent car owners have factors at play that affect the likelihood of a fatal accident vis-a-vis the global average.


Just compare Teslas fatalities per mile with and without autopilot.

Remove the mileage driven from the cars that have never used autopilot, to control for drivers too cautious to trust their lives to beta software, and you should have your demographic.


In fact, there is a study which suggests that affluent car owners drive more recklessly than not so wealthy ones. Here's an article about it: http://usa.streetsblog.org/2013/07/16/study-wealthier-motori... I remember reading a more detailed article about the study somewhere but I can't find it right now.


Also autopilot can only be used in certain scenarios (freeway driving). Are hours driven by autopilot in this ideal environment being compared with millions of hours spent driving in non-autopilot-eligible roads/conditions?


Autopilot _should_ only be used on divided highways. However in practice it can be activated on most roads as long as there are lane markings. It's up to the driver to assess the driving conditions and decide whether it's safe to activate Autopilot.


If you torture the data long enough it will confess


It's not a foolish statement; it's a savvy political statement by a company's PR department looking out for its self-interest.

You raise the question of whether there is statistical significance as to which is safer, Autopilot or a typical driver. However, it's worth noting that in a lawsuit the burden of proof is on the plaintiff.


Isn't it "balance of probabilities" for civil cases?


It's doubly foolish because the rates for autopilot are for human plus autopilot, not autopilot alone.

So at best they matched what a human alone can do. Meaning the autopilot has done nothing at all.


Not only that, using worldwide data is very disingenuous. Basically zero Teslas are driven in the places with the highest vehicle fatality rates.

https://en.wikipedia.org/wiki/List_of_countries_by_traffic-r...

>Seventy-four per cent of road traffic deaths occur in middle-income countries, which account for 70% of the world's population but only 53% of the world's registered vehicles. In low-income countries it is even worse: only one percent of the world's registered cars produce 16% of the world's road traffic deaths.


from the article: "...with zero confirmed fatalities and a wealth of internal data demonstrating safer, more predictable vehicle control performance" and "That contrasted against worldwide accident data, customers using Autopilot are statistically safer than those not using it at all." (emphasis mine)

I also stumbled over this statement, but I think they are using all telemetry data (e.g. what the autopilot would have done vs. what the human actually did) in all accidents, not just fatalities per miles driven.


You can use the same argument when Tesla has 10000M miles driven. Fatal accidents in the US are 130 per 10000M miles driven.


Not if you understand statistics.


Maybe a nitpick, but if your bug happens 25% of the time, then you haven't yet found the root cause. Until you do so, no amount of data will help you draw a conclusion.


If I say "I have found and understood the bug", we still have to re-test to verify that the bug is not still present. Assuming that we are looking for a bug which, based on a race condition, occurs 25% of the time...


I didn't really have a problem with Tesla or Autopilot's latest issues until I re-read this sentence:

>Autopilot was not operating as designed and as described to users: specifically, as a driver assistance system that maintains a vehicle's position in lane and adjusts the vehicle's speed to match surrounding traffic.

My problem is with Autopilot's branding - it's called AUTOPILOT.

The name isn't "Maintain Lane Position" or "Cruise Distance" or something boring that describes it better - it has AUTO in the name.

Typical drivers aren't airline pilots who complete thousands of hours in flight training and have heavily regulated schedules. We're just people who are busy and looking for solutions to our problems.

If Tesla doesn't want people to think Autopilot functions as crash avoidance/smart vehicle control/better than humans in all situations, or to blame Tesla for accidents (whether human or machine is at fault), it should come up with a less sexy name.


Isn't the plane's auto-pilot pretty much a pilot assistance system designed to keep the plane at the specified altitude and follow a straight line (heading bug on older systems, GPS coordinates in the flight plan on modern ones)?

It's not designed for collision avoidance, runway taxiing, emergency situations, short/soft field landings or departures. It's occasionally used for normal landings (according to https://www.quora.com/How-often-are-commercial-flights-lande...) but it doesn't seem prevalent.


I think an important difference is in the necessary reaction time. Situations in which a plane flying at altitude under automatic control suddenly requires a sub-second reaction by the pilots are extremely rare, in traffic they happen way more often.

Obstacles don't suddenly pop up in mid-air, and a lot of infrastructure makes sure other traffic is nicely separated at all times.

Thus a plane actually can do most of a flight automatically and it's okay if it has to fall back on not fully attentive humans in edge-cases, because there is some time for error-correction.

The car equivalent might be if highways had lanes separated by walls and cars could detect obstacles a few hundred meters away; then taking your hands off the wheel wouldn't be an issue. On real-world streets, you can't be as hands-off as you could be in a plane at 35,000 feet.


"The Avionic Flight Control System (AFCS) provides manual or automatic modes of control throughout the total flight envelope from take-off to landing."

Lockheed L1011, 1972. Flight trials led to demonstration of a fully automated trans-continental flight, from rest to rest. Pilots did not touch the controls.

Incidentally, it was also the only airliner certified for operational Cat IIIC autoland, with zero visibility. This was frequently used at London Heathrow but needed a ground-control radar to guide the pilots to the gate once the aircraft had stopped itself on the runway.

Aircraft autopilots are technically capable of completely controlling the flight but are restricted from doing so by technical provision ( e.g. lack of rearward-facing ILS / MLS for departure ) or regulatory caution ( e.g. not executing TCAS collision-resolution automatically, even though every FBW Airbus can do this ).


The technology exists, but there's reluctance to give it too much authority. That's changing, especially in the military.

Full-authority automated ground collision avoidance is now on many F-16 fighters. It's a retrofit, and 440 aircraft had been retrofitted by 2015. First avoided crash was in 2015.[1] Here's a test where the pilot goes into a hard turn and releases the controls, and the automated GCAS, at the last second, flips the plane upright and goes into a steep climb.[2] Because this is for fighters, it doesn't take over until it absolutely has to. The pilot can still fly low and zoom through mountain passes. The first users of this system, the Swedish air force, said "you can't fly any lower". It's a development of Lockheed's Skunk Works.

This technology is supposed to go into the F-35, but it's not in yet.

This may eventually filter down to commercial airliners, but they'd need more capable radars that can profile the ground. This is not GPS and map based; it's looking at the real world.

[1] http://aviationweek.com/defense/ground-collision-avoidance-s... [2] https://www.youtube.com/watch?v=aPr2LWctwYQ


As I was saying down the thread, I think it's fair to say that Tesla uses the Autopilot naming because it sounds catchy and better than the terms the rest of the world uses. I think it's a conscious marketing decision to make their product look better when it probably isn't. Everyone in the airline industry uses the same term, and the people involved are too informed/trained for there to be confusion. Tesla's branding, vastly different from the rest of the market, is meant to confuse by making them sound better. That's what they should be penalized for.


But, Tesla's system is designed for collision avoidance, automatic braking, etc. There is absolutely a problem that they call it "Autopilot", as it brings to mind the airplane meaning of the term.

But, it matches neither the common use case for airplane autopilot systems nor the common perception--right or wrong--of what those autopilot systems do. So, Tesla enjoys the cachet that comes with that latter perception, while relegating the true description to the "fine print" of the owner's manual.


Are we going to fight the colloquial use at this point? The redux of "hacker" vs "cracker"?

If the public consistently misunderstands the term and their life depends on it, the term has to be changed period. This is no place for semantic nazism.


You are correct, but I'm constantly surprised by how many people think the pilot taxies to the runway and pushes 'go'.

Tesla autopilot works like an aeroplane autopilot actually does, not how people seem to think it does.


>>Isn't the plane's auto-pilot pretty much a pilot assistance system designed to keep the plane at the specified altitude and follow a straight line (heading bug on older systems, GPS coordinates in the flight plan on modern ones)?

Your plane autopilot analogy fails in a crucial manner: First, to get a license to fly a plane, you have to undergo much more rigorous training and much more stringent scrutiny than what an ordinary Joe/Jane undergoes to get a license to drive a Tesla.

Second, a pilot does not fly his/her plane nearly as much as our ordinary Joe/Jane drives his/her car.

Third, there are assistant pilots in planes.

And the list goes on.

The foolish management at Tesla should have labeled their assistance system just what it is, a 'semi-automatic assistance system', and they could have been slightly more prudent by clearly mentioning its "dangerous" components upfront, rather than cleaning up the shit now.

It's a sad affair. I had/have higher hopes for Tesla. But they should abandon their foolish autopilot thingy now, to begin with.


How is that relevant, much less crucial?


What is "that"?

If you are referring to the parent's analogy, then yes, I said his/her analogy fails.

Tesla is not making it mandatory for their buyers to go through serious and rigorous training to use their so-touted Autopilot, which is a freaking dangerous thing, as it's far from being an autopilot: it's only a half-baked semi-autopilot, potentially riddled with a lot of hidden AI bugs that their machine learning team may find hard to even locate.

The airplane autopilot analogy, the parent is making to justify Tesla's claims fails miserably, IMO, anyway.

But if a lawsuit gets filed, Tesla will have a very hard time justifying this type of claim.

Another important thing (from their business-success point of view): this incident and their shameless justification of the faults in their so-touted Autopilot will tarnish (and already has tarnished, to some extent) their image with the public. They can't just now point to the warnings they originally published in fine print and had unsuspecting users sign, and expect those users to happily purchase their now-perceived death traps.

Competitors just have to point this death-trap autopilot feature of Tesla to turn a potential buyer in their favor.


Exactly, this is a failure of marketing, not technology. Somebody in Tesla made a decision to prioritize "building a brand" and "making a sale" over "accurately communicating product limitations to customers". At a certain point it doesn't matter what you put in the manual or in the fine-print, you've got to ensure the customer has the correct mental model about what they are (or aren't) buying.

To clarify, I'm not saying Tesla made a dumb or unforgivable misstep (there will always be dumber customers), but if they're going to do a (literal) post-mortem, they need to acknowledge that their branding is a factor.


No, it's a problem with automation that has been well-known since the 1980s. There is not an easy solution to it.

https://www.ise.ncsu.edu/nsf_itr/794B/papers/Bainbridge_1983...

If you have a human-supervised safety-critical automated system (where "difficult" situations are to be overridden by the human) you end up needing the human supervisor to be much more skilled (and often faster-reacting) than they would have needed to be just to do the operation manually.


I like how on Hacker News, Elon Musk gets all the credit when Tesla is awesome, but "Somebody in Tesla made a decision" when people die.

Musk made the call. He might not have proposed it, but given how involved he is with the marketing and PR aspect of Tesla, there's no way he didn't OK the decision to call it Autopilot.


Musk personally made the decision to not use LIDAR and rely to a great extent on single-camera vision.[1] Musk, in 2015: "I don’t think you need LIDAR. I think you can do this all with passive optical and then with maybe one forward RADAR. I think that completely solves it without the use of LIDAR. I’m not a big fan of LIDAR, I don’t think it makes sense in this context."

The "one forward radar" was the decision that killed. Tesla cars are radar blind at windshield height, which is why the car underran a semitrailer from the side. Most automotive radars have terrible resolution and no steering in elevation. There are better radars[2], but they haven't made it down to automotive yet.

[1] http://9to5google.com/2015/10/16/elon-musk-says-that-the-lid... [2] http://www.fhr.fraunhofer.de/en/businessunits/security/the-u...


Move fast, kill people. Oh wait can't we rebase th ... Damned !


>Autopilot was not operating as designed and as described to users: specifically, as a driver assistance system that maintains a vehicle's position in lane and adjusts the vehicle's speed to match surrounding traffic

I agree with your comment regarding their branding choice, and I'd add that the design itself is flawed: specifically, the entire notion of the car taking over specific reaction-based functions, but a.) leaving other such functions to the driver and b.) requiring the driver to supervise and override according to split second situations.


> it has AUTO in the name.

So does autocorrect, but I don't see people complaining that banging on the keyboard doesn't produce sonnets.

This whole "auto" thing is ridiculous.

By the way, it's an automobile and has been for a while ... where's the expectation that it will drive itself? Should we not call them automobiles anymore, in case someone gets the wrong impression? This is silly.


Do you disagree that Tesla is using 'autopilot' and not 'drive assist' or something like that to make potential buyers think their implementation is so much better than the rest of the world's? Or with the fact that they don't push so hard on keeping hands on the wheel, to make this more comfortable for their users vs. the rest? They are actively trying to exploit the fact that people think 'autopilot' is more capable than the rest. Hence you/they can't complain when that perception backfires on them.


> Do you disagree that Tesla is using 'autopilot' and not 'drive assist' or something like that to make potential buyers think their implementation is so much better than the rest of the world's?

Their implementation is much better than the competition.

http://www.caranddriver.com/features/semi-autonomous-cars-co...

> Or with the fact that they don't push so hard on keeping hands on the wheel, to make this more comfortable for their users vs. the rest?

I'm glad they don't. Here is why:

https://www.youtube.com/watch?v=Kv9JYqhFV-M


> So does autocorrect, but I don't see people complaining that banging on the keyboard doesn't produce sonnets.

But a bucketload of rage-filled complaints about when autocorrect gets it wrong and your email to Sinead was modified to "Dear Pinhead"... :)


The "auto" in "autopilot" stands for automatic, not autonomous. Autopilot systems have existed for decades, and they've always referred to systems that automate the most tedious parts of operating a vehicle, while still requiring a human operator to handle new/unexpected situations.


Exactly! People seem to get really caught up and adamant about this label. However, airplane autopilot is arguably significantly dumber than Tesla's Autopilot. Yet for some reason people expect more, even though Tesla has been clear about how limited the use case is.


Telling is that this letter uses the term "driver assistance system" multiple times. I wonder if that's how they'll market it, or if that phrasing is reserved for PR damage control. (They also use the phrasing, "a death that had taken place in one of its vehicles" as opposed to the common "fatal crash")


It's hard to change the name from "Autopilot" to "Merely assists you in a straight line, but please regain control when there's a white truck coming from the right [1], and DO NOT TOUCH THE BRAKE [2] because it disengages the emergency stop".

It's especially hard when you sell the Autopilot for $2500-$3000 [3].

[1] Reference to the last accidents

[2] http://arstechnica.com/cars/2016/05/another-driver-says-tesl...

[3] https://www.tesla.com/models/design see pricing


"Driver assistance system" sounds like a good summary, in the sense that it's smarter to say "fire-retardant" instead of "fireproof"


I agree with your post. Having said that, autopilot on airplanes is meant to be a "macro," rather than an autonomous flying function (in that they do not replace a human operator; in fact the pilot programs the instructions into the flight computer). Somehow autopilot's meaning was lost in translation, and people interpreted it to mean they don't have to take control when things go wrong. Perhaps the prefix auto- is what's wrong--people think the car will drive itself.

Automobiles contain the prefix auto- yet nobody assumes the car is self-driving. Most people understand it to mean gears don't need to be changed manually (not applicable to Tesla, since it has a single gear and full torque from a standstill).


> Automobiles contain the prefix auto- yet nobody assumes the car is self-driving.

Actually, in that regard yes they do -- "auto-mobile" = "moves itself", as in, you don't have to pedal or Fred Flintstone it...

> Most people understand it to mean gears don't need to be changed manually

No, that'd be "automatic gearbox". As in "Do you drive a manual or an automatic?"

So "autopilot" would suggest it does the piloting as well (ie, the stuff the person sitting at the controls -- the common view of a "pilot" -- normally does)


Another problem with the name is that autopilots in planes and ships do exactly what the pilot programmed them to do, e.g. hold a certain course. The Tesla Autopilot, on the other hand, tries to intelligently react to the subset of the environment it sees through its sensors, which makes it more unpredictable. I assume that pilots don't have to monitor the behaviour of the autopilot itself, whereas in a Tesla you have to do that and be ready to intervene any second. That doesn't really reduce the workload for the driver, so people just trust the system instead.


Pilots do have to monitor the behaviour of the autopilot. Even though they are programmed to do one thing (hold a course or altitude, change to a course or an altitude with a certain rate of change), they rely on potentially faulty sensors and control many parameters to meet the goal. Usually the autopilot will simply disconnect if it detects a problem, but there are examples of autopilots causing passenger injuries: https://en.wikipedia.org/wiki/Qantas_Flight_72


Agreed. It should be called "CoPilot"


Except that copilots tend to be human and thus smarter than autopilots.


how about "CruiseControl+" or some other marketing riff on CC, since that's basically what it is.


> My problem is with Autopilot's branding - it's called AUTOPILOT.

Quite a fitting name for an AUTOmobile, don't you think?


I haven't found US figures, but for the UK, motorway driving has a far lower fatality rate than non-motorway driving. "Although motorways carry around 21 per cent of traffic, they only account for 6 per cent of fatalities and 5 per cent of injured casualties. In 2015, the number of fatalities on motorways rose from 96 deaths to 110."[1] Since Tesla's "autopilot" is only rated for motorway (freeway) driving, it should be compared against motorway fatality rates, which are about a third of general driving rates. So a realistic estimate of the fatal accident rate for human freeway driving is maybe 0.3 per 100 million miles driven. Tesla is doing much worse than that.

To rate an automatic driving system, you want to look at accident rates, not fatality rates. Accident rates reflect how well the control system works. Fatality rates reflect crash survivability. Tesla needs to publish their crash data. That's going to be disclosed, because the NHTSA ordered Tesla to submit that information.

[1] http://www.racfoundation.org/motoring-faqs/safety#a5
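The small-numbers point can be made concrete. Here's a rough Python sketch (the ~130M Autopilot miles and single fatality are illustrative assumptions, not Tesla's exact figures) that computes an exact Poisson confidence interval by bisection. One observed event is compatible with underlying rates spanning two orders of magnitude, an interval wide enough to contain both the 0.3 and 1.3 per-100M-mile baselines:

```python
# Why a single fatality in ~130M miles says very little: compute the exact
# two-sided Poisson confidence interval for the underlying rate by bisection.
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def bisect(f, lo, hi, tol=1e-9):
    """Root of a monotone function f on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def poisson_ci(k, alpha=0.05):
    """Exact CI for the Poisson mean given k observed events."""
    upper = bisect(lambda lam: poisson_cdf(k, lam) - alpha / 2, k, 100)
    lower = 0.0 if k == 0 else bisect(
        lambda lam: poisson_cdf(k - 1, lam) - (1 - alpha / 2), 1e-12, k + 1)
    return lower, upper

miles = 130e6                      # assumed Autopilot miles (illustrative)
lo_ev, hi_ev = poisson_ci(1)       # 1 observed fatality
print(f"95% CI: {lo_ev / miles * 100e6:.3f} to {hi_ev / miles * 100e6:.2f} "
      "fatalities per 100M miles")
```

So with one event, the data can't even distinguish a rate three times better than the motorway baseline from one ten times worse.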


I don't think it's fair to compare UK and US figures. In the UK you actually have to make an effort to get a driver's license.


I've found New York State figures.[1] But they don't have a breakdown by fatal/nonfatal. Like the UK data, they show much lower accident rates on divided, limited access highways.

California has a fatality rate of 0.94 per 100 million miles traveled.[2] That's lower than the US average. But it's not broken down by freeway/non freeway. (You can request a login to query the database directly and download data, and it might be possible to compute freeway accident rates.)

[1] https://www.dot.ny.gov/divisions/operating/osss/highway-repo... [2] https://www.chp.ca.gov/programs-services/services-informatio...


His point was not to compare US and UK.

His point was that miles driven on freeway (when you are using Tesla's autopilot) seems less dangerous than miles driven everywhere else.

He found the UK data and wonders if the same applies to the US. The implication is that if it did, then Tesla would be comparing apples with oranges in its 100m miles reference.


A Tesla is also probably a much safer car than the average, skewing the statistics even more.

And neither does Autopilot work in heavy rain and in other circumstances that are possibly higher in risk (although I'm not sure driving in the rain is actually more dangerous, due to risk compensation)


Here is some data for the USA and some other countries: https://en.wikipedia.org/wiki/Autobahn#Safety:_international...

Looks like in the US, motorway driving is about twice as safe as driving on "all roads", with a fatality roughly every 200 million miles.


But this accident didn't occur on a motorway (or equivalent). We need to know how many miles Autopilot has been used on roads of this type, but we don't.


I don't get why people are so eager to defend Tesla autopilot. We've had Andrew Ng call it irresponsible[1] and Fei-Fei Li say she wouldn't let it drive with her family in the car[2]. These aren't anti-tech luddites, but people with a very good understanding of the current state of the art.

I love Tesla, but they are SO weak at taking criticism or realising when they make a mistake.

[1] http://electrek.co/2016/05/30/google-deep-learning-andrew-ng...

[2] http://a16z.com/2016/06/29/feifei-li-a16z-professor-in-resid... (you'll need to listen to the podcast though)


> I don't get why people are so eager to defend Tesla autopilot.

I think there's a strong recognition that self-driving vehicles, when they can be made to happen generally, will be a significant social good. And that it's tricky to get there unless society is willing to put something out onto the streets.

It's taken fifty or more years of popular human-driven vehicles to get to the stage that most of our cars are pretty safe, and quite a lot of effort in improving road design too.

Eventually, though, I suspect it won't be solved until we redesign the roads. A significant part of rail safety is that the signalling system can sense whether there is a train on a stretch of line. (via the rather simple technique that the axles form an electrical connection between the two rails) Right now, it's as if we're trying to do autonomous traffic by an ant colony model -- independent agents that know nothing about each other except what they can sense. Which is always going to be harder than if the road can help them out too.


> I think there's a strong recognition that self-driving vehicles, when they can be made to happen generally, will be a significant social good. And that it's tricky to get there unless society is willing to put something out onto the streets.

I agree autonomous cars hold great potential. But that is precisely why Tesla should not ship a "beta" feature with lives at stake, as that risks squandering that potential. If this is the response to a single Autopilot user killing himself in an accident, imagine the potential backlash if more accidents crop up. Or worse, an Autopilot user kills someone else in an accident.

Critics of Tesla's Autopilot are not only concerned about the danger to Tesla drivers, but to the industry as a whole ("Jaguar engineer: A mishap with Tesla's Autopilot could set back self-driving cars by a decade"): http://mashable.com/2015/12/12/jaguar-semi-autonomy/#QMl5uUB...

Tesla argues that the data it collects from Autopilot users is worth the risk because it can help Tesla develop true self-driving cars faster, but other companies pursuing self-driving cars (including Google) have opted for more controlled testing instead of conducting a grand experiment with customers and the general public.


> And that it's tricky to get there unless society is willing to put something out onto the streets.

Tesla relabeling what other OEMs call Advanced Driver Assistance Systems (ADAS) under the Autopilot moniker is dishonest and misleading.


I agree with this in principle. But I'm not at all convinced the technology is ready, nor that the Teslas are an appropriate platform without additional sensors.


Rail safety and autonomy were designed without the use of advanced machine learning/computer vision that we have today. Also, redesigning highways seems like a rather expensive proposition: the US is already not investing enough in its existing infrastructure.


Tarmac / asphalt only has a service life of 26 years. (And resurfacing after 13 years.) So within the sort of timeframes governments are already used to for infrastructure construction (eg, HS2 is due for completion in 2033), almost all the road surfaces will already have been replaced anyway.

The trick is to do it economically. Do some of the major trucking routes first, as well as common city roads. ie, automate trucks and busses first. Especially as to begin with you'd probably want to exclude bicycles, horses, etc, so that means not doing every road.

But that's just speculation.


I think people are more interested in defending Tesla as a whole because of its environmental friendliness, overall safety ratings, and being a market underdog.

Also, not wanting to use autopilot is different than thinking Tesla should be legally responsible for all accidents that occur when autopilot is on. If they get sued for it then they'll likely just remotely disable it on all of their cars with a software update and I'm not sure anybody wins in that scenario.


Volvo are claiming they will accept liability: http://www.forbes.com/sites/jimgorzelany/2015/10/09/volvo-wi...


This is from July 6th, BTW. There has since been a fair amount of back-and-forth on this between Musk and Stephen from Fortune. This episode has also granted us this particularly delightful AMA on Reddit, wherein Stephen roundly ignores comments calling out the questionable links between recent Fortune coverage and the Kochs' ongoing crusade against renewable energy: https://www.reddit.com/r/IAmA/comments/4rqa6q/hey_i_am_steph...


I don't see why the Kochs' campaign against electric cars and renewables is particularly relevant. Driverless cars can be powered by fossil fuels too.

The problem here is Tesla's Autopilot implementation. I think it is fair that questions are asked.


"That Tesla Autopilot had been safely used in over 100 million miles [...] That contrasted against worldwide accident data, customers using Autopilot are statistically safer than those not using it at all."

That is such a weak statistical claim that it borders on the disingenuous.

Previous discussion: https://news.ycombinator.com/item?id=12082893


Belligerent as usual. I wonder if Musk writes these himself.

edit: I'm being downvoted for this, but I wasn't using "belligerent" negatively here; I was wondering aloud whether Tesla's characteristically aggressive approach to damage control is the result of direct involvement from Musk. Doesn't seem that crazy to imagine that it is.



I don't get Tesla. What is so special about them? Why does everyone want one? If the demand for electric vehicles is so high, why haven't all the big makers already started offering electric versions of their little hot hatches? That way you get an electric vehicle AND build quality based on decades and decades of quality-control engineering. Autopilot seems to be a curio rather than a drawcard, but again, surely the big makers, with access to billions of metric readouts from existing cars, would be best placed to develop AI to control them. Is it the Elon Musk factor? That seems a strange reason to buy a car. The fan factor might be justified for an Apple car... but I just don't get the buzz and hype around Tesla.


The biggest thing Tesla has done is design cars to be electric first. A ton of compromises have to be made to stick a gas engine (and a gas tank) in a car. A purpose-built electric car is, in several ways, capable of surpassing a gas-powered car (performance, crash safety) at the expense of having a fixed range. There's nothing in the hot hatch segment, because nobody has really been able to make a product at scale.

FWIW, when I owned cars, they were VWs. I loved driving them, and I never had mechanical problems, but the electrical components were a flaky mess.


Have you ever driven one? They are nice cars that don't burn fossil fuels. The super-car level acceleration is a huge draw for some. The range is also more or less unmatched, as far as I know. Their software upgrade policy makes it feel like the car is always up-to-date with the latest stuff—something that I can't ever picture other car makers ever doing. Also you don't have to deal with car dealerships (and almost every encounter I've had with a car dealership has been negative).

Are they perfect cars? No. But the company operates in a fundamentally different manner than the rest of the auto industry, and that is exciting to some.


They are nice cars that burn fossil fuels somewhere else, preferably places where rich people and SV types do not live. As their PR spin after the recent "autopilot" fatality shows, they are just another car company with much better marketing.


We hardly burn any fossil fuels for electricity where I'm from (Toronto). And even if we did, it's more efficient to burn them at industrial scale than in individual cars.


Power plants are vastly more efficient than car engines.


I never said they were less efficient, but they do pollute. As for the ones that do not use fossil fuels, they are few and far between at the moment. The bulk of our electricity production is still gas and coal. Toronto may get a lot of green electricity from Quebec hydro, but please don't claim that this is anything but an outlier at the moment.

Electric cars are more efficient than ICE vehicles and they pollute less, but they are not powered by rainbows and dreams of utopia. Zero emissions at the tailpipe is nice, but don't try to claim that these cars are not following that grand SV tradition of moving the negative externalities somewhere else so that someone else can pay the cost.


And many don't burn fossil fuels


They've made electric cars sexy and manly. Before that, CEOs, managers, fathers and tradespeople who had an electric car appeared weak because they looked like they cared about others. Corporations tend to promote people who are best at being selfish. You may think it's off-topic, but I think Tesla is successful at targeting rich people who care about ecology. It's really sad, but in today's society, if you really care a lot about equality, justice, fairness, inclusion, and compassion, then you have much lower chances of being promoted (unless you create your own company) - which is altogether a big problem for ecology, for women's equality, and for men who don't exhibit traits of manliness. Now you can switch from fossils to electricity without appearing weak.


The Model S has won many awards. And it's built (in automotive terms) by a very young startup. Their approach is very software-savvy, whereas all the other automakers treat software as an afterthought. It's a completely different approach, and the recent Model 3 unveil (400,000+ reservations) pretty much confirms that petrol cars have a decade left at most. Other manufacturers may catch up, but at the moment they are miles behind. We'll see whether the Chevy Bolt works out or not soon.


Are other manufacturers really miles behind? Or just on a different roadmap? All BMWs are going to be hybrids/EVs by 2022, Volvo is skipping a few iterations of semi-AI on purpose, VW puts automatic controls even in its budget makes (Skoda and Seat), Nissan has a Model 3 rival already on the market, a new Chevy Bolt is almost there, Toyota/Lexus is so deep in this stuff that it may introduce a hydrogen-powered Lexus even tomorrow, and so on.

Sure, nothing there is on par with the Model S, but is that because of an R&D deficit? Or because nobody is that interested in going after that market (> $85k EVs) while they can milk their "old" tech for a few more years and keep the profits? The ugly Nissan Leaf has about one third of the Model S's range, but also one third of the batteries. Once they up that number and put it into the Qashqai, Tesla won't look that good any more.

I'm not against Tesla; I'd gladly drive one. But they're running out of time. The Model 3 is a great promise today; by 2020 it could be just an exotic option for those not willing to drive an electric VW Passat.


Maybe. Putting a bigger-capacity battery in a car is easy. Doing it without making the car too heavy or expensive is very difficult. Tesla is currently building the biggest factory in the world in order to make cheaper batteries. Will anyone else be able to match their $/kWh ratio? Maybe. Let's see how the Chevy Bolt goes. It sounds brilliant at the moment, but let's see if they can actually deliver it in volume.


You said it, they are the Apple of cars. Musk is the new Jobs. They make the best cars.


I think their points here are valid, but I must admit that it's starting to shake my confidence the way that, every time something bad happens, they instantly respond with such strident defensiveness.


This is from 10 days ago. Why am I finding it at the top of HN now ?


Because HN loves to bash on Tesla and since nothing negative has come out in the last ten days they felt the need to re-discuss a two week old article


Every comment section on HN about Tesla and Musk in general was always extremely positive, until they started to cover up their mistakes. Seriously, these mistakes could result in governments outlawing autonomous driving outright.


"Cover up their mistakes"? Which part of the fact that fewer accidents occur with autopilot on, than when it's off, is arguable for you?


Tesla has not shown that that is the case, despite proclaiming so. I've commented enough in this thread already, but see http://www.rand.org/pubs/research_reports/RR1478.html

I really would like to hear Tesla's response to this criticism of their apparently flawed statistical analysis.
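The RAND-style arithmetic behind that criticism is easy to sketch. Assuming zero observed fatalities and the ~1.3-per-100M-mile US baseline cited elsewhere in this thread (both assumptions for illustration), the "rule of three" gives the fatality-free mileage needed before a below-baseline rate could be claimed with 95% confidence:

```python
# How many fatality-free miles before we can claim, at 95% confidence, that
# a fleet's fatality rate is below a given baseline? With zero events in
# n miles, the 95% upper bound on the rate is roughly 3/n ("rule of three").
import math

def miles_needed(baseline_per_100m_miles, confidence=0.95):
    """Fatality-free miles needed for the upper confidence bound on the
    fleet's rate to fall below the baseline."""
    rate_per_mile = baseline_per_100m_miles / 100e6
    return -math.log(1 - confidence) / rate_per_mile

# Against the ~1.3/100M US baseline: roughly 230 million fatality-free miles.
print(f"{miles_needed(1.3):,.0f}")
```

And that's just to match the baseline; demonstrating a meaningful *improvement* over it takes far more mileage still.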


Is there such a thing as a universally acknowledged definition of "car safety"? When you start combining the words "statistically" and "safe", I feel that the statement loses its scientific rigor. To me, "safe" is a very vague term, mostly used in a subjective context. It makes me wonder if their assumption was really valid when they were testing this feature.


> Is there such a thing as the universally acknowledged definition of "car safety"

you might try the "deaths per passenger mile" metric, or "deaths per million passenger-miles".

On that metric, long-distance air travel is very safe, as one trip transports hundreds of people a very long way, and motorcycles are at the other end of the scale.

See:

http://www.bustle.com/articles/83287-are-trains-safer-than-p...
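As a sketch of how that metric composes (all numbers below are made up for illustration, not real transport statistics):

```python
# Deaths per billion passenger-miles: normalize fatalities by how many
# people were moved how far, not just by vehicle-miles driven.
def deaths_per_billion_passenger_miles(deaths, avg_occupancy, vehicle_miles):
    passenger_miles = avg_occupancy * vehicle_miles
    return deaths / passenger_miles * 1e9

# Hypothetical mode: 500 deaths, 1.5 occupants per vehicle, 100 billion
# vehicle-miles -> about 3.3 deaths per billion passenger-miles.
print(round(deaths_per_billion_passenger_miles(500, 1.5, 100e9), 2))
```

The occupancy factor is why a full airliner scores so well on this metric: one trip moves hundreds of people a very long way.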


Labeling this a "statistical inevitability" obscures the issue. It seems clear from numerous demos posted to YouTube that some autopilot users are very comfortable taking their hands off the wheel. That's an outright violation of the TOS. Yet, lots of people do it. Some erroneously refer to it as "hands free" mode.[1] Some even observe the fact that they're not supposed to do it while doing it.[2]

The human factors here are tough. But safe design needs to account for human factors. The enthusiast community seems especially prone to over-trusting the autopilot, and that's something Tesla should be examining in their safeguards.

[1] https://www.youtube.com/watch?v=2geQ4hvvkNA

[2] https://www.youtube.com/watch?v=8H1qUhpjE5M


Once again this confirms Aaron Swartz was right about the news - http://www.aaronsw.com/weblog/hatethenews


> there is no evidence to suggest that Autopilot was not operating as designed

Obviously, a dead man wouldn't be available to testify.

> That Tesla Autopilot had been safely used in over 100 million miles of driving by tens of thousands of customers worldwide, with zero confirmed fatalities and a wealth of internal data demonstrating safer, more predictable vehicle control performance when the system is properly used.

https://en.wikipedia.org/wiki/Lies,_damned_lies,_and_statist...


> We self-insure against the risk of product liability claims, meaning that any product liability claims will have to be paid from company funds, not by insurance.

Why do they do this? I can understand it when government property is not insured, e.g. the UK Civil Service, as the enterprise is so vast and general taxation can fill the gaps. I can also understand that some things can't be insured, e.g. nuclear power plants, but why does Tesla 'vertically integrate' insurance, particularly given that the product is statistically likely to kill someone in due course?


Probably because they feel that their true product liability costs will be far lower than what the insurance companies think they would be. Given that insurance companies have no previous data on crash rates and little visibility into Tesla's products, they would have to be extremely conservative, and charge Tesla a prohibitively large amount to insure them.


Insurance companies are profitable because they charge more than they pay out, in aggregate. If you have adequate capital, it is cheaper to self-insure, since you don't need to pay for the profits of the insurance company.


Many large corporations self-insure. If you have the assets to pay out expected claims, it makes sense to avoid paying premiums.


And even if you don't have the assets, you can self-insure the first few million USD, and use reinsurance for the rest.

That's what insurance companies do for risks with a long tail.

Think of it as buying car insurance with a very high excess. Those can be very cheap.


I have always wished that I, as an individual, could reinsure my long-tail risk. I am happy to take on the risk I can afford to lose; just let me pass on the risk I can't.


If you chat with the right insurance agent, that might be possible. While it wouldn't go by the name 'reinsurance', look for a policy with a large deductible and a large maximum payout.


I recently looked into this for a sports club. It wasn't an option.

Thing is, insurers live by the law of large numbers (https://en.wikipedia.org/wiki/Law_of_large_numbers).

Cutting away the high-probability, low-payout parts of the insurance decreases the number of payouts significantly. That means variance in payouts goes up.

So, insurers will either need to find lots of new customers to get N up again, or relatively high amounts of capital to survive those high payouts.

If they think they cannot find those customers, they need more capital. To finance that, they need more income, which means charging you more, which means fewer customers, which means charging you even more, etc.

I'm sure you could get that insurance through Lloyd's, but it wouldn't be cheap.
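The law-of-large-numbers point can be shown with a toy simulation (all claim sizes and probabilities hypothetical): with 100x more policies, the year-to-year volatility of total payouts, relative to the mean, drops by roughly 10x, since it scales as 1/sqrt(N).

```python
import random
import statistics

def yearly_payouts(n_policies, claim_prob, claim_size, years=300, seed=42):
    """Simulate total claims paid out each year for a book of n_policies."""
    rng = random.Random(seed)
    return [
        claim_size * sum(rng.random() < claim_prob for _ in range(n_policies))
        for _ in range(years)
    ]

cv = {}  # coefficient of variation (stdev / mean) of yearly payouts, per book size
for n in (100, 10_000):
    payouts = yearly_payouts(n, claim_prob=0.05, claim_size=1000)
    cv[n] = statistics.stdev(payouts) / statistics.mean(payouts)

# Relative volatility is much lower for the larger book of policies.
print(cv)
```

This is why an insurer losing the small frequent claims needs either more customers or more capital, as the parent comment argues.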


I'm fairly certain there are policies designed for high net worth individuals who can cover the first million in losses themselves but don't want to be exposed to long tail risk. I've definitely heard people talking about it, particularly for real estate.


If reinsurance is available to insurance companies there is nothing in theory stopping it being available to individuals. This sounds like a market that is ripe for disintermediation.


They have programs like these for health care in the US. Can be quite cheap, if you can stomach paying the first 6k USD (per year or so?) yourself.


I am in Australia so I am not concerned about health insurance - more things like house, disability and car insurance. I can afford to carry quite a bit of risk, it is just the long tail that is an issue.


For disability insurance, I assume a high excess would translate into a longer time before the insurance kicks in?

For car insurance, isn't an excess you can choose a standard feature? (Some insurance companies give you a no-claims rebate, which is the same as a high excess---only shifted in time.) I don't know about house insurance. What does house insurance insure against?


I have to say I have not looked into this in great depth, but this might be a great idea for a startup.


It could end up as historic litigation.

You know the story of the microwave and the cat: the old lady who just wanted a quick way to dry her beloved pet.

It's the same thing with the guy driving his Tesla with Autopilot on. He just believed the marketing campaign.



Of course, I sympathize with any casualties. However, this seems to be a question of responsibility in my eyes.

If you're not aware that potential dangers still exist when you step into a car, you shouldn't be driving the car (which is a shame as it's a fantastic car).

Sorry. Tesla is not at fault here - however much people want to see it that way.

The Model S is not some magical car designed by aliens. It's a machine. Problems may occur. We are not at autonomous-vehicle stage yet. However Autopilot is a damned comfortable upgrade compared to the old cruise control.

I can't believe people are blaming Mr Musk or the marketing department for people not taking responsibility or being careful when they get into a car. As they should in any car. Especially any car with autopilot like capabilities.


While I agree that Tesla's article is not perfectly logical and its marketing campaign is not impeccable, I would like to demonstrate that people at Tesla Motors have a point.

1. "STATISTICALLY SAFER" CUSTOMERS. Yes, this statement makes no sense. One fatal crash is not a large enough sample size to make such a conclusion. However, this article was aimed not at Hacker News readers, but at average buyers. Most of them do not have a firm grasp of high school math, so for them "statistically safer" means just "don't worry." And indeed there are reasons for them to worry, given that independent news agencies continually publish hysterical things (It is "a wake-up call!" "Reassess" self-driving cars! The crash "is raising safety concerns for everyone in Florida!" [1]). Tesla's response was nothing but a necessary defence. Or did you expect them to say, "You know, there are not enough data yet, so let's wait until 10 or so more people die, and then we will draw conclusions." This is much more logical, but I feel that customers wouldn't like it.

2. WHY IT IS CALLED "AUTOPILOT." This is just marketing. They couldn't sell it under the name "The Beta Version Of The System That Keeps Your Vehicle In Lane As Long As You Keep Your Hands On The Steering Wheel And Are Ready To Regain Control At Any Moment™." And honestly, I do not think that even relatively stupid customers will just press the button and hope for the best without reading what the Autopilot is all about in advance.
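To see how little one event pins down a rate (the sample-size point in 1): an exact Poisson 95% interval for the fatality rate, given one observed fatality in the roughly 100M Autopilot miles cited in the thread, spans more than two orders of magnitude. A minimal stdlib-only sketch:

```python
import math

def cdf(lam, k):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def solve(target, k):
    """Find lam with cdf(lam, k) == target, by bisection (cdf decreases in lam)."""
    a, b = 0.0, 100.0
    for _ in range(200):
        mid = (a + b) / 2
        a, b = (mid, b) if cdf(mid, k) > target else (a, mid)
    return (a + b) / 2

def poisson_ci(k, alpha=0.05):
    """Exact two-sided confidence interval for a Poisson mean given k observed events."""
    lo = 0.0 if k == 0 else solve(1 - alpha / 2, k - 1)
    hi = solve(alpha / 2, k)
    return lo, hi

lo, hi = poisson_ci(1)          # one fatality observed
exposure = 1.0                  # exposure in units of 100M miles
print(lo / exposure, hi / exposure)   # rate per 100M miles: roughly 0.025 to 5.57
```

The US average of 1.3 per 100M miles sits comfortably inside that interval, so the data are consistent with Autopilot being anywhere from far safer to far more dangerous than human drivers.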

In my opinion, it is now a difficult time for Tesla, and we should not criticise it for trying to stay afloat.

[1] http://www.vanityfair.com/news/2016/07/how-the-media-screwed...

EDIT: You might think that the phrase "trying to stay afloat" is unnecessary pathos, since a single crash, even coupled with a bunch of nonsense news articles, cannot lead to anything serious. However, history shows it can. In 2000, Concorde crashed during takeoff, killing all people on board [2]. The event was caused by metal debris on the runway, not by some problem with the plane itself. Nevertheless, Concorde lost its reputation as one of the safest planes in the world. The passenger numbers plummeted, and Concorde retired three years later. That crash is the number one reason why it now takes 12 hours to get from Europe to America.

[2] https://en.wikipedia.org/wiki/Air_France_Flight_4590


Sue them for libel then?


This is two weeks old and is on the front page of HN for the first time on a Saturday night... a little late to the discussion.



