NTSB: Tesla’s Autopilot UX played a “major role” in fatal Model S crash (arstechnica.com)
178 points by notlob 72 days ago | 215 comments



I'm a huge Musk fan and wish him and Tesla all the success in the world. But releasing any kind of machine-assisted driving without a near infallible "don't hit objects in front of the vehicle" function is simply irresponsible.

This crash, as well as the one where a Tesla crashed into a concrete barrier[1] are evidence that their tech is not ready for release.

How much do you want to bet the Waymo self-driving technology would have avoided both of these? Seems like one of the simplest cases to handle: something up ahead is blocking the way!

Anyone who uses Tesla machine-assisted driving features today is putting themselves in grave danger (except for low-speed features like parallel parking).

[1] https://www.youtube.com/watch?v=YIBTBxZ3NWw


I think Musk looks at it differently. There appear to be two fundamental truths about the current autopilot system. First, the number of fatalities that have occurred with autopilot active, compared to the number of miles driven with autopilot active, suggests that autopilot is safer than a human driver. Second, autopilot still is not flawless, and it is possible for it to cause fatalities.

Some people feel the latter should be enough to stop allowing drivers to turn on autopilot. I am betting Musk feels that the former serves as a moral obligation to get the safety features of autopilot in as many cars as possible in order to save as many lives as possible. He is throwing the switch on the trolley tracks [1].

[1] - https://en.wikipedia.org/wiki/Trolley_problem


"the number of fatalities that have occurred with autopilot active compared to the amount of miles driven with autopilot active suggest that autopilot is safer than a human driver."

Obviously you see how this deduction is subject to dozens of biases stemming from the fact that people turn on autopilot when the road conditions are safe to begin with. One would have to really carefully filter this data to avoid these biases, otherwise such conclusions are absolutely bogus.


The 40% reduction in accident rate was the result of the NHTSA investigation: https://techcrunch.com/2017/01/19/nhtsas-full-final-investig...

"It’s essentially as good as result as Tesla can have hoped for from the U.S. traffic safety agency’s investigation, which took place over the last six months. Reuters reported earlier on Thursday that the investigation would not result in a recall of Tesla vehicles, but the full findings show that in fact, the federal regulatory body found plenty to praise while conducting its inquiry.

The investigation does conclude with a minor admonition that Tesla could perhaps be more specific about its system limitations in its driver-assist features, but acknowledges that all required information is present and available to drivers and vehicle owners."


I know this report. It does not deal with the bias problem, however. Obviously it reduced the number of accidents in good road conditions, where the autopilot works and is primarily used (much like the auto-stop feature and BLIS did). Particularly after they introduced driver monitoring (this report was conducted after that) - now the driver using autopilot is in fact likely more alert than one not using it.

But that does not indicate anything about conditions where autopilot should not be used in which it is gravely dangerous.

It is all about the tail of the distribution, which is why it is not so obvious.


Why does it matter if it's dangerous if used incorrectly, if the majority of users use it correctly, and the overall effect is that fatalities go down?


I don't think the argument here is whether technology features can improve safety. The core of the argument is (and always has been) about misleading marketing and naming (the suggestive "autopilot"), which is exactly what encourages misuse of the feature. When used correctly, with an alert driver, it can clearly be beneficial. But when misused, e.g. by this poor young person who plowed into a truck, this feature could be deadly.

This has a long way to go for true "autonomy" and until then such marketing should be avoided and examples of misuse (which youtube is full of) should be explicitly discouraged and if possible penalized. Tesla should be more explicit and pro-active about it. This is my only point really.


40 years old is "young"? You must be as old as I am :-)


I was thinking about the Chinese boy who hit the truck parked on the service lane, not the guy who decapitated himself in Florida. But either way, both were tragic.


Put "don't crash into things" in the manual and all accidents are incorrect use by definition? That's why it matters.


uh, because if the effectiveness drops dramatically then you are literally going to be killing people.


>But that does not indicate anything about conditions where autopilot should not be used in which it is gravely dangerous.

Yes, it does include those conditions, because this specific accident occurred in conditions in which autopilot should not be used. That single fatality is the only one that autopilot has seen in close to one billion miles. That is an order of magnitude better than traditional fatality rates.

That leaves two possible scenarios. Either people are likely to use this system incorrectly, in which case many of those billion miles were driven in poor conditions for the autopilot. Or few people are likely to use the system incorrectly, in which case there is little to worry about, since the autopilot system has never been involved in a reported fatality when used properly.
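(Rough arithmetic behind that claim, taking those figures at face value: 1 fatality in ~1 billion miles is 0.1 per 100 million miles, versus the oft-cited US average of ~1.25 deaths per 100 million vehicle miles, so about a 12x difference - "an order of magnitude" only holds if both the mileage figure and the single-fatality count are right.)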


There was another accident in China, on a controlled-access divided highway, where a Tesla failed to notice a maintenance vehicle moving slowly ahead on the side of the road and ended up rear-ending it, killing the driver.[1]

By the way, can you point me to statistics about “traditional fatality rates” on controlled-access divided highways in conditions where the line markings are clearly visible? It's dishonest to compare apples to oranges, especially given that a Tesla can't or won't recognize one from the other.

[1] https://www.nytimes.com/2016/09/15/business/fatal-tesla-cras...


How can autopilot be involved in a fatality when used correctly, when by the very definition correct use implies an alert driver who is responsible for everything that happens?


We have both used the term "involved", not "at fault". If autopilot is on when (or immediately before) a fatality occurs, that is being involved in a fatality. The driver may be at fault in those instances if they are not alert. Although it is also possible that a separate driver is at fault and both the autopilot and the Tesla driver performed adequately. For example, a second driver might rear-end the Tesla driver and push the car into oncoming traffic.


That sounds like a comparison between autopilot miles and general driving miles. What would the comparison look like between autopilot miles and interstate highway miles?

Edit: nope, I was wrong. 1.3 to 0.8 airbag deployment crashes per million miles for Tesla cars before and after autopilot release.
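(That pair of numbers is where the widely quoted ~40% figure comes from: (1.3 - 0.8) / 1.3 ≈ 0.38, i.e. roughly a 40% drop in airbag-deployment crashes per million miles.)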


This seems like trying to argue that brakes are bad because, while people normally use them at times when they should stop, it's also possible to use your brakes at times when you shouldn't stop.

The fact that something can be used in a dumb manner is good to know so you can avoid doing so, but that doesn't negate the fact that cars are safer overall with it than without it.


Yes but if I asked you to prove brakes were safer you'd be able to do it pretty easily. The fact that no one has actually set out and done a study proving that autopilot is safer than human driving is disconcerting considering these vehicles are already being put on the road.

Seeing people abandoning the precautionary principle so they can latch onto an over-marketed, over-promised Muskian view of the future is sad. And in five or ten years we're all going to be looking back wondering what made us so dumb to think that computers could actually drive cars.


Toyota came out saying you'd have to gather 8.5 billion miles of driving data to make any claims with statistical certainty about the ability of any autonomous driving system to prevent fatalities (I've heard other outrageous numbers thrown out to this effect, though I don't recall them off the top of my head).

Though by the time you get to the 8 billionth mile the technology will have evolved, rendering the earlier data moot. So it will be pretty much impossible to do any sort of authoritative study on how safe this technology is until it is widespread, and mature.
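For a sense of why the mileage requirement balloons like that, here is a minimal back-of-the-envelope sketch - my own toy assumptions (a one-sided Poisson test, the ~1.25 deaths per 100 million miles US baseline, a hypothetical 20% improvement), not Toyota's methodology:

    from math import sqrt

    # How many miles would it take to show, with a crude one-sided test on
    # Poisson fatality counts, that a system is 20% safer than the baseline?
    baseline = 1.25e-8             # deaths per mile (~1.25 per 100M miles)
    improved = 1.00e-8             # hypothetical 20% reduction
    z_alpha, z_beta = 1.645, 0.84  # 5% one-sided significance, 80% power

    # Normal approximation: mileage at which the expected-count gap clears
    # the combined standard errors of the two Poisson counts.
    miles = ((z_alpha * sqrt(baseline) + z_beta * sqrt(improved))
             / (baseline - improved)) ** 2
    print(f"~{miles / 1e9:.0f} billion miles")   # prints "~11 billion miles"

Same ballpark as the Toyota number, and that is for fatalities alone; as noted above, any significant software change arguably resets the clock.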


If a study could reliably demonstrate that autonomous vehicles have fewer collisions it would strongly hint that fatalities are also fewer.


The ratio of collisions to fatalities for an AI will not necessarily be the same as for humans.

You could have a car that never gets into the little collisions that often happen to human drivers because of carelessness but are rarely deadly, while the same AI might have faulty reasoning every once in a while, leading it to accelerate to 200kph before running straight into a wall. This hypothetical AI would have very few collisions, but might still be much more deadly than a human.

Humans are the only data we currently have significant numbers of miles logged for. Any extrapolation to AI is meaningless.


After the first fatality while on Autopilot, the NHTSA did a study on whether Autopilot was inherently safe and whether it had any defects that needed to be addressed. It found that Teslas with Autopilot were substantially safer than Teslas without. I'd like to see a better study (since that one was mainly aimed at answering the question of whether Teslas with Autopilot should be recalled), but someone has looked into it at least.


That's not quite accurate: the NHTSA study was not designed, in any way, to compare human-driven miles and autopilot-driven miles for safety. The only thing it showed was that vehicles with TACC + Autosteer had fewer accidents than vehicles with just TACC. Whether TACC + Autosteer is safer than human drivers is up for debate.


True. However, it shows a very substantial decrease in accident rate in the population with Autopilot features, vs. the population without. Correlation, not causality. It could be that Model X drivers are inherently safer than Model S drivers (they would be in the latter set), or it could be that just having the features in the car makes drivers do better even without using them.

However, the inference that Autopilot reduces incidents per mile seems very likely to be true.


The key point is that you can say things like "per million miles, Tesla autopilot use has resulted in X fewer fatalities", but unless the data behind that statement has been normalized against data representing the exact use case of Tesla autopilot, it is false. The only way to say that Tesla autopilot is safer per million miles than a human is to base the figure on the situations in which the autopilot could be used. I'm not sure I've ever seen that data from Tesla. Who knows, maybe they have it and I've just not seen it. If they do, very cool. I would assume they could get the data by taking sensor readings from Tesla cars under human control in situations where autopilot could be used.


The NHTSA found a 40% safety improvement when comparing Teslas with Autopilot to Teslas without. I wouldn't put a huge amount of stock in the exact number, but because one group never uses Autopilot and the other offers a realistic view of when people actually do use Autopilot, it seems like enough to say "Yes, Autopilot makes cars safer in aggregate."


The data is suspect at best and should require a third-party evaluation. I'm speculating that the reason for that 40% safety improvement is more a result of the difference between where autopilot can and cannot be used. People with autopilot-equipped Teslas likely get the option because they drive more highway miles, where it can be used mile after mile. Those who don't likely drive fewer highway miles and may be in statistically more dangerous driving situations, like short commutes over high-speed roads. When I commuted to the office, I'd pull out of my 25 MPH subdivision road, immediately accelerate to 65-70 MPH onto the highway, and proceed to drive two miles. There's no reason for autopilot in those conditions, so I wouldn't purchase the package. Yet making a left-hand turn onto a five-lane highway where cars are regularly traveling at high speed is one of the more dangerous things you can do while driving; if I got hit in a side impact I could easily be killed, and because of the location on a highway, it would be counted as a highway fatality.
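To make that selection-bias worry concrete, here's a toy calculation with invented per-condition rates showing how a different mileage mix alone can manufacture a large aggregate gap even when the two fleets are exactly equally safe on any given road type:

    # Invented numbers, purely illustrative - both fleets have identical
    # crash rates per road type; only the mileage mix differs.
    rates = {"highway": 0.5, "surface": 2.0}   # crashes per million miles

    mileage_mix = {
        "autopilot-equipped": {"highway": 0.8, "surface": 0.2},
        "not equipped":       {"highway": 0.5, "surface": 0.5},
    }

    for fleet, mix in mileage_mix.items():
        aggregate = sum(share * rates[road] for road, share in mix.items())
        print(f"{fleet}: {aggregate:.2f} crashes per million miles")

    # autopilot-equipped: 0.80, not equipped: 1.25 -> a ~36% "improvement"
    # with zero difference in actual per-condition safety.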


> The data is suspect at best and should require a third party evaluation.

And the NHTSA doesn't count as a third party?

> I'm speculating that the reason for that 40% safety improvement is more a result of the difference between where autopilot can and cannot be used.

To say that Tesla owners without autopilot tend to drive in situations with 40% more likelihood of accidents than those with autopilot sounds like a massive stretch without data of your own.


You are making an incorrect assumption. When Tesla released their original Autopilot, it was a no-charge SW update for all cars equipped with the necessary hardware (which was all cars built within the previous xx months). Thus, the drivers with Autopilot capability were driving the same commute in non-Autopilot capable cars the day before. There was no customer selection based on perceived utility of Autopilot.


I think you misunderstand the data. This isn't "people who opted into Autopilot vs. people who opted out" — the no-Autopilot group is people whose Teslas were made before Autopilot was introduced.


And therefore this is obviously not an unbiased control group. Perhaps the more recent Teslas have slightly better or less worn brakes? Have they controlled for that? There are countless ways in which such conclusions can be flawed.

Statistics is really tricky, even with very large samples, as the recent presidential election showed beyond any doubt.


How many fatal collisions did they use in this dataset? This can't be a serious assessment.


By the same logic, if people use it when road conditions are safer, then they are simply using it as intended.

The design can help prevent people from using it when it's not appropriate, or have mechanisms to keep them alert and focused.

That is all the NTSB seems to be saying here... they aren't calling into question the idea of highway driving-assist systems, which are already showing potential to make highway driving significantly safer - the time when people are driving at the highest speeds.

No system will be perfect from preemptive testing either; it really does take real-world mass adoption to detect some uncommon flaws, unfortunately... Tesla is going to bear the brunt of this fact. You simply can't simulate every (irrational) human behaviour, nor can you simulate every environment it will be used in.


Note, the original NHTSA report compares accidents per mile traveled in Tesla vehicles with and without autopilot, and notes that vehicles WITH autopilot have 40% fewer accidents than those without.

This seems like very clean data, as it doesn't actually consider how often autopilot is turned on, and it's over a fairly large sample of drivers, locales and miles driven.

This is just correlation, not causality, but the effect is too big to be by chance.


The comment you are replying to is suggesting that the miles driven with autopilot may have been inherently safer miles, e.g. autopilot was more likely to be turned on during the safer parts of a drive.

If true, this would be an unfortunate example of selection bias.


You are missing the point. The report doesn't compare miles driven with and without autopilot. It compares ALL miles driven in cars with Autopilot function, whether it is turned on or not, vs. cars without Autopilot function.

To boot, these are, for the most part, the same cars before and after, since Tesla shipped cars for almost a year with the Autopilot hardware built in but not activated, then activated it for all cars at virtually the same time via an OTA. Before that date, any miles clocked show up in the "no autopilot" column, the next day they show up in the "with Autopilot" column.

To further flog an expired equine, the study showed a 40% reduction in airbag actuations per million miles driven across all miles driven in cars with vs. without Autopilot function, not comparing only miles where Autopilot was turned on.

There are certainly possibilities for this data to be skewed or biased, but it's definitely not a comparison of safe Autopilot miles vs. unsafe manual miles.


> suggest that autopilot is safer than a human driver.

This is what is misleading, in my opinion. The study says that Tesla's assistive technology works: it assists humans and makes driving safer. All driving-assistance technologies satisfy this.

In other words

Autopilot + human is better than a human.

Then why is the furore about Tesla? Because they named it autopilot and call it safer than a human. To say autopilot is safer than a human seems to imply that autopilot alone would be better than a human driver.

Maybe you're smart enough to see this, but as is evident, many believe that autopilot alone is better than a human driver.


There's nothing controversial about front crash avoidance being beneficial, especially with auto braking. As a feature it is more and more widely deployed, probably becoming standard in the next few years.

It would be interesting to see how often autobraking is key and how often the rest of the autopilot improves safety.


But most safety devices are explicitly for when conditions are not at their best, and sadly Autopilot doesn't function in those bad conditions. It's a fair-weather driver assistant.

His moral obligation is to admit its limitations instead of grandstanding about what it might do one day and dancing around with selected statistics.

The simple fact is, it failed and will continue to do so. They have made some technology choices they may not be able to surmount, no matter how much noise they or their sycophants generate.

The promise of level 4 and 5 autonomous driving is so great that we cannot afford to have it damaged in the public eye.


Releasing any kind of machine-assisted driving without a near infallible "don't hit objects in front of the vehicle" function is simply irresponsible.

Yes. There are four videos of Tesla cars crashing into objects which partly protruded into a lane. Three vehicles, one construction barrier [1], all on the left side. One fatal.[2] Relying on the Mobileye vision system, which is "recognize car/pedestrian, compute range to target" for anti-collision was grossly negligent of Tesla. Tesla has a radar, but it can't be very good if this is happening. Even the old Eaton VORAD radar, which we used on our 2005 DARPA Grand Challenge vehicle, would have avoided that.

This is the simplest and most important part of automatic driving, and Tesla botched it.

[1] https://www.youtube.com/watch?v=YIBTBxZ3NWw [2] https://www.youtube.com/watch?v=fc0yYJ8-Dyo


And yet, according to the NHTSA, cars are still much safer with it than without it. You seem to be smack-talking rather than offering a reasoned analysis of whether Autopilot is better than no-Autopilot.


>Autopilot is better than no-Autopilot.

If only there was something to put in the middle of this reasoning!

Elon Musk fucked up as soon as he named it "autopilot". He set expectations at autonomous, and released it as a "beta" because that's what you do with some web app, and then when people started using it as an autonomous driving product (because that's what "autopilot" means), they unsurprisingly got killed. But shruggie! Move fast and break things! It's beta! What ya gonna do?

On the other hand, if it was called "driver assist", or "super cruise", or some other thing, then drivers' expectations wouldn't be set at "autonomous driving", and they'd be paying more attention, and probably wouldn't end up dead.

I'm sorry, but this is all about setting expectations, and Elon fucked this up.


I'll agree that the name "Autopilot" seems to mislead people, but that is a complete side track. Neither the person I replied to nor the person he replied to were talking about the name; they were talking about the technology and whether it was responsible to release it at all, with any name. (For one clear example, the GGP criticized "releasing any kind of machine-assisted driving" with Autopilot's limitations.)


It doesn't seem completely misleading, airplanes do a lot of flying on autopilot, but it doesn't mean there's no need for the pilot/co-pilot to be at the controls...


No it doesn’t but maybe most people think that’s exactly what it means...


What you're starting to edge towards, then, is an educational issue. Personally, I feel the only time I've seen an 'autopilot' portrayed as a fully autonomous system was in the 1980 film Airplane!

I couldn't narrow down exactly where in life I learned about autopilot systems in aircraft, but I always considered it common sense that autopilot doesn't mean "can completely fly itself".


The media constantly mixes together the terms autopilot and driverless or self-driving cars. There's definitely the suggestion that the driver is completely unnecessary. You can't expect the average Joe to look this up. It should clearly be marked as beta and dangerous without supervision.

Also Tesla says so themselves: >All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than...


AFAIK Autopilot is clearly marked as a beta and displays a dialog reminding you of that when you activate it in your car. And "full self-driving capability" is not a reference to Autopilot — it's saying that Tesla eventually plans to implement full self-driving (as a separate feature) and the car includes hardware that will allow you to enable it with just a software upgrade.


Marking something as beta is a bullshit CYA move. If it was truly a beta test, it wouldn't be in wide public release.

A car is not a web app. It just isn't. This behavior literally wouldn't fly at, say, Boeing, or anywhere else that makes machines people trust their lives to.


I think the big problem is that "autopilot" in a small plane just means "fly in the same direction at the same height and speed", like cruise control, but the big commercial jets absolutely can and do take off and land themselves, technically requiring zero input from a pilot at any point in the flight.

I think more people are aware of the second point than the first, and that's why the common perception of "autopilot" is more about level-5 driving than cruise control


I'm gonna need something better than myth and superstition to buy into this meme that autopilot users don't understand what autopilot is/isn't capable of because of how it is named.


The car makes it clear to drivers what "autopilot" means. The only people who might be confused by the term are non-Tesla drivers.


The NHTSA should compare ACC+LKA+AEB vs Autopilot (which is all those plus Lane Centering).

I'd bet those 3 combined are much safer than Autopilot, which complicates their implementation to offer the marginal convenience of not having to steer (while constantly having to nanny it lest it put you into a ditch).


I think the problem is deeper than 'is Autopilot better than no-Autopilot'. Is Autopilot better than no Autopilot under what conditions? Is Autopilot being marketed to customers in a way that encourages them to use it in the wrong conditions? How are changes to the Autopilot code being vetted before being deployed OTA to customers?

I think if used properly Autopilot can be safer than most drivers without it, but the crux of the issue is that many people will trust the system more than they should, which could make them more dangerous. We're in a dangerous in-between state with autonomous driving, where it's just good enough to fool some people into thinking it's foolproof when it's not.


They have a Bosch supplied radar but insisted on doing their own signal/object processing.


What seems to have gone wrong in radar processing is that they'd get false alarms when approaching an overhead object such as an overpass when not on flat ground. Older model automotive radars scan horizontally, but not vertically, so there's no height information. The "solution" was to ignore some of the radar returns, not fit a better radar. Even with that, there's no way that the China incident of rear-ending a street sweeping machine on a freeway should have happened. From the crash videos, it looks like any obstacle not directly ahead of the vehicle centerline is ignored.
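A hypothetical sketch of that tradeoff (my own toy logic and field names, not Tesla's actual processing): with no vertical resolution, a filter that drops stationary, vision-unconfirmed returns to silence overpass false alarms will also silence a stalled vehicle dead ahead:

    # Toy model: a horizontally-scanning radar can't tell an overhead gantry
    # from a stopped truck - both are just "stationary return, in my path".
    def should_brake(ret, ego_speed_mps):
        stationary = abs(ret["target_speed_mps"]) < 0.5
        if stationary and not ret["vision_confirmed_vehicle"]:
            return False   # overpass false alarms suppressed... and so is the
                           # stalled street sweeper the camera never classified
                           # as a vehicle.
        closing = ego_speed_mps - ret["target_speed_mps"]
        return closing > 0 and ret["range_m"] / closing < 2.0  # <2s to impact

    stalled_truck = {"target_speed_mps": 0.0, "range_m": 40.0,
                     "vision_confirmed_vehicle": False}
    print(should_brake(stalled_truck, ego_speed_mps=30.0))   # False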

2D scan radars exist.[1][2] Tesla didn't use them.

[1] http://www.fujitsu-ten.com/business/technicaljournal/pdf/38-... [2] http://www.oculii.com/sensors.html


On the other hand, if you know that the technology is not completely infallible, but better than humans in a general sense and will save lives by releasing it, wouldn't you be morally obligated to do so? Wouldn't holding it back even though it would save lives be worse?


I think people will prefer risking being killed by their own mistakes than by the mistakes of an AI, even if that risk is somewhat higher, maybe even considerably higher.


Nope... I am actually happy to admit that I am a below average driver (as driving is sooo boring), and from what I read, the AI is already driving better than me - if used in the conditions it is designed for.


Don't they already have that choice, though? No one is forcing anyone to use Autopilot.

That's my main issue with the parent posts and the self-driving sentiments they're expressing - the person using it was grossly negligent and ignored all the warnings presented. You accept that the technology is in beta and you accept the risks that come with that. The person ignoring those warnings is, more than likely, a much less safe driver than Autopilot could ever be.

I just really don't see how this could be pinned as Tesla's fault. We're treading very closely to territory where we absolve people of personal responsibility simply because we have technology that we can blame, even if it's irrational or misdirected blame.


Surely they can just not turn on autopilot, if that's the case?

For me, I prefer life over a false sense of control.


I do. I have no interest in a self driving car. I hate being a passenger in a human driven car too. Give me control of my own destiny!


Does it matter what you prefer? You're also risking everybody else's lives while driving. Should they have a decision in whether you're allowed to drive?


Disagree. I will have my family use whichever is statistically safer.


This might be harder to gauge than you think. Maybe a Tesla is much safer on the highway, but less safe in the city, or the other way around. Or less safe on ice, etc. The averaged risk (and we probably won't have anything else for a while) over the whole population might be quite different from that for your actual use.


It can't be marginally better than humans, it has to be an order of magnitude or more better. Humans are irrational, and incidents like this erode public trust.


I agree, but there is also news about how Tesla's automatic braking features saved people from getting into collisions. I would think that might counteract some of the negative news. See this one from six months ago: https://www.youtube.com/watch?v=Ndeb1pMAsh4


It shouldn't counteract the negative news, because every car manufacturer offers automatic emergency braking as a feature, so it's not a distinguishing benefit of Tesla; it works fine without Tesla's controversial and dangerous Autopilot mode enabled; the main example Elon Musk himself posted of it preventing a collision was in manual mode, in conditions totally unsuitable for Autopilot; and the entire industry already agrees that it's a worthwhile safety improvement. I believe it's even going to become mandatory on new cars in the US at some point for safety reasons.


Most manufacturers have automatic braking. But they do not pretend it's "autopilot", nor let the user rely on it. It's just there to save their ass if shit hits the fan.


Apart from this being a very basic feature that almost every (premium) car manufacturer offers, this is a very grave misunderstanding of human psychology. You can never cancel fear with good news. Otherwise we'd be happy about how safe we are today, not worrying about terrorism (in the US, that is; there are countries with legitimately elevated terrorism risks that are scary and actually worrisome).


I think your point is void here, because in both of the cases I mentioned, the car was clearly worse than a human. Who drives into a concrete barrier or the middle of a tractor trailer truck?

If the technology were better than humans I'd agree.


>>Who drives into a concrete barrier or the middle of a tractor trailer truck?

I have seen a drunk driver do exactly that.

Someone who falls asleep might also do that.

[edit] Come to think of it, I almost caused an accident a decade ago. I was momentarily distracted as I turned to talk to my passenger. I was able to brake in time, as a car in front of me had suddenly stopped. Nevertheless, it was only a matter of maybe one second to avoid running into the other car. Me-a-decade-ago might have been in a similar accident.


I know one person that drove into a light post in the middle of an empty parking lot. Daylight, sober, not texting, not tired, not stunting - they were looking at the exit and simply didn't see the one light post between their car and the exit. "That light came out of nowhere!"

Human car accidents are not news because roughly 100 Americans die in car accidents every day, out of ~15300 car accidents per day.


Also somebody who's been blinded by a bright light, or is suffering from a medical event, or is looking at their cell phone. There are lots of things that cause people to hit objects that should be easily avoidable.


Neither of those people should be using autopilot. It wouldn't save them.

I can also avoid being in either of those situations.

If autopilot is comparable to an exhausted or drunk driver, why do I want to use it?


> I think your point is void here, because in both of the cases I mentioned, the car was clearly worse than a human.

But separate cases are not really interesting when comparing accident rates. Think of a more extreme made up scenario:

You can either drive yourself and be part of the "1.25 deaths per 100 million vehicle miles" statistic (US), or use an automated car which causes 0.25 deaths per 100 million miles - but in those cases it spontaneously explodes. No other car would just explode randomly while you're driving it. (It's clearly worse in that way.)

Do you want to drive yourself, or take the automated one?


Should we judge performance in every scenario distinctly, or in aggregate?

That is, if the risks of one edge case goes up while the car performs far better in the vast majority of situations, is the car worse than a human?


> Who drives into a concrete barrier or the middle of a tractor trailer truck?

It happens all the time, dude. Most famously with Jayne Mansfield, but commonly when people are paying attention to their damned phones, or trying to sort out a fight among the kids, or just distracted by their marriage, or work problems, or...

I used to know a woman who drove right into a city bus that was stopped at a bus stop, lights flashing and everything. Fortunately no one was injured in that accident.


Who drove into the rear end of my car two weeks ago on the interstate? A human.


The bar is not infallible in general, just infallible at driving directly into obstacles.

Is it realistic to worry about a car that can't get that right, but is simultaneously safer than a human driver overall?


What if Tesla released autopilot as a completely passive accident prevention driving assist?

Tesla seems to be allowing only two options - one with no driving assistance, and one where they can call it autonomous.


Emergency braking is still active even if you don't have autopilot turned on. This is also true for a lot of other cars with Level 2 autonomy.


The statistics being touted for reduced accidents include emergency braking.


Yes. I'd like to see AEB broken out of the accident number. Looks like this NTSB investigation did not release such a number.


It's a tough thing to weigh however when it's perhaps "better than humans" in 98% of situations, but "on par with" or perhaps even "worse than humans" for the remaining 2%. That's where it gets sketchy to me, at least.

That said, I'm casually in the market for a Tesla (ideally pre-owned) and one of the features I'd require is the latest auto-pilot hardware. I'd definitely make use of it right away too -- but I'd do so with the understanding that it is far from infallible, and that I still need to keep substantially all of my attention trained to the road just as I would in any other non-autopilot vehicle.


> Anyone who uses Tesla machine-assisted driving features today is putting themselves in grave danger

That's their right to decide or not. What bugs me is that they also put me (and everybody else on the road) in danger.


"In 2013, 32,719 people died in motor vehicle collisions." (fro: https://www.millerandzois.com/car-accident-statistics.html) And you think it's a big deal that 2 crashes have happened? Also that truck driver broke the laws of the road and caused that mans death. That man broke the laws of the road and caused his own death. Pretending that the computers are to blame is silly. Put the blame on the people responsible not a computer.

BTW same with the barrier incident. That was a driver failing to be vigilant. It was his fault for leaving autopilot on in a non-standard environment.


Obviously this truck may have broken down in that lane and could not be moved, which removes the fault from the truck driver. In general, cars have drivers exactly because such things can and will happen without anyone else to blame. Nor does the blame matter if you are dead as a result.

The 2 crashes (in fact there were more) are important because they show the state of the technology. Also keep in mind that 99.9% of drivers don't have an autopilot and 0.1% do. 2 crashes among the 0.1% are just as important as 32k crashes among the 99.9% of the rest.


> [...] where a Tesla crashed into a concrete barrier[1] [...]

Of course it should never have happened. Still, I'm impressed by how gracefully it handled the accident; emergency lights, keeping the lane and slowly braking the vehicle to a halt (again, such frontal collision was still unacceptable).


This crash, as well as the one where a Tesla crashed into a concrete barrier are evidence that their tech is not ready for release.

But then you're forced to compare the autopilot accident rates against the regular old human accident rates. When you do that, the autopilot wins every time.

Put another way, I'd be comfortable getting into one of these today. I trust the computer's bugs a lot more than I trust my own. I also trust its reaction time above my own, which is more likely to keep me safe if someone else does something stupid.


That's a major flaw in people's thinking about autopilot programs.

An autopilot doesn't have to be 100% safe, it just needs to be significantly safer than a human pilot and in the long run it will save lives.

I think the fallacy is thinking that it's okay for humans to be flawed but machines should be infallible.


Why are your options autopilot vs. unaided human?

Wouldn't you compare autopilot vs. autopilot-assisted human? That is, where the autopilot is a passive system that only comes into play to prevent accidents.


I feared he would rush the feature long ago, but thought they had raised the bar adequately since. He should stop spawning new projects and give more resources to this one instead.


I mostly agree and I wonder how long until incidents like these start eroding public trust. I fully believe that fully autonomous driving can and will bring a whole new paradigm to transportation in the US, but it won't happen if we try to rush something so critical.


Was it rushed though?

Can you compare the number of hours driven with the autopilot and compare it to the statistics of human drivers, and determine which was safer?

This was a fatal crash, but faulty airbags caused the death of 10 people.

What self-driving cars need, rather than more time to bake, is a PR or lobby group that can help spin (put in perspective) these types of events.


I can only speculate, but the fact that engineers have left Tesla over this very issue is telling.


No, these are inexcusable deaths. This is literally a fatal flaw in the system.


I'm sorry... I just don't buy that. Airbags have deployed and killed people by snapping their necks yet we include airbags in every single vehicle on the market without requiring that the drivers or passengers read and accept a warning about airbags.

Autopilot requires manual consent from the user to even enable it and has so many overrides for the user that I feel like you're really stretching it by calling it a "fatal flaw". All it would have taken to prevent this accident was the person driving to hit the brakes in time. Even if they were just using cruise control in their vehicle, they would have died. This was completely about inattention and irresponsibility on the part of the driver. I can't see how anyone is making the leap to this being a "fatal flaw" in the system.


The public will trust it as long as it is properly regulated.

Have you examined the history of air travel? Entire airliners have crashed and killed everyone on board and people still continued to take flights. Today airplanes are safer than even trains. The reason is simple: the industry is very heavily regulated and everyone has to adhere to strict processes with tight controls.


> The reason is simple: the industry is very heavily regulated and everyone has to adhere to strict processes with tight controls.

The reason is even simpler: private companies using private money to improve flight technology


You really honestly think that those same companies would achieve the same record without FAA?


Banks are highly regulated, yet that hasn't stopped them from enriching themselves and the wealthy elite at the expense of the average citizen. It's all sanctioned by the government, built into the system.

Governments after all are run by people, just like companies, and people are prone to greed, corruption, and abuse of power, regardless of their position.


So, basically, you want to outlaw most current implementations of Level 2 autonomy, which are being shipped by a large number of carmakers.

I was unaware that Waymo was planning on shipping a Level 2 system.


BMW's auto cruise control and distance keeping + emergency braking etc isn't nearly good enough either as I know from experience. But it still helps.


I use cruise control on my 2012 Honda all the time.

Am I putting myself in danger?


their tech is not ready for release.

Sounds like a typical software development effort. We always used to say that if we built bridges the way we write software, there wouldn't be many bridges. Well, now we build cars the way we build software, and guess what is happening...

I firmly believe that this is all safety-critical code, and we should look to NASA to ask how it's done:

https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Dev...

and

https://www.fastcompany.com/28121/they-write-right-stuff

But hey, all that boring "safety" stuff is expensive, and takes too much time to complete....
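The Power of 10 rules are written for C, but to illustrate the spirit of two of them (fixed loop bounds, liberal assertions) here is a quick sketch - an invented example, not anyone's real code:

    MAX_RETURNS = 64   # invented hard upper bound on sensor returns per frame

    def closest_range_m(ranges_m):
        assert ranges_m, "sensor frame must not be empty"
        assert len(ranges_m) <= MAX_RETURNS, "frame larger than designed for"
        closest = float("inf")
        for i in range(min(len(ranges_m), MAX_RETURNS)):  # statically bounded loop
            assert ranges_m[i] >= 0.0, "negative range from sensor"
            closest = min(closest, ranges_m[i])
        return closest

    print(closest_range_m([42.5, 17.0, 88.3]))   # 17.0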

EDIT: As has now become usual for HN, I am being downvoted instead of debated. This used to be a site for good debate. Too bad that's no longer the case.


I think you're being downvoted because you're making the assumption that Tesla and every other autonomous car manufacturer isn't putting any resources into "all that boring 'safety' stuff". That singular point really devalues the entirety of the rest of your post.


Interesting point, and not what I was assuming at all. I simply believe they don't put enough thought into it (and clearly the NTSB agrees).

Downvotes should be about bad form, not about "I don't agree with you". HN used to be about the former, but now is much more about the latter, and it is a poorer community because of it. HN isn't the first online tech-news community, and every single one of these communities that uses an up/downvote mechanism to self-regulate comment quality has gone the same way: as an increasing number of users gain voting privileges, voting replaces debate. When that happens, the community becomes a simple echo chamber for the "established" truth of that community. This increasingly keeps away quality commenters, strengthening the effect.

When voting, especially downvoting, is a frictionless, costless action, the quality of varied discourse will sooner or later die.


Again, I disagree. I think that downvoting is for comments that are bad form or don't contribute significantly to the discussion. As I mentioned before, your singular statement devalued the rest of the comment, which, to me, makes it lack any contribution. You've made a very large generalization that is easy to dismiss. Why should anyone put any weight into the content that came before that?


Move fast and break things turns out to be a bad way to build cars.


I think it's weird that so many people seem to think that Tesla have made such a significant technology leap ahead of everyone else that they're comfortable with putting their lives in the hands of their Autopilot system while other manufacturers have no such thing available in production. I wonder how much of that relaxed critical thinking is down to the hype surrounding Tesla and its founder.

I also think it's strange that they're allowed to sell cars equipped with Autopilot at the moment, there's too little known about its capabilities and weaknesses.


They shouldn't be allowed to call it 'autopilot', and it should automatically (and very visibly/audibly) disable itself the second a driver takes his or her hands off the steering wheel. Why? Because apparently there are people out there who believe this is true autopilot and will move their attention elsewhere when flying down the road at 74mph.


Yep, calling it autopilot and then having marketing videos showing Musk without his hands on the wheel comes off as disingenuous to me because whenever any incident happens, they point to the disclaimer stating that your hands are never supposed to be off the wheel.


Purchasing the car is an information-intensive process; there are many opportunities to educate the consumer besides a marketing video, for example: before the checkout process, in post-purchase communication, with a mandatory information session on the tablet before you first use the feature, with persistent reminders during day-to-day use, etc., etc.

Negligence is already something the courts are well suited to handle. Tesla already has a big incentive to do this well. I don't think they are necessarily negligent here for having marketing material which doesn't 100% cover real world usage. If that was ALL the information customers received I would...but I highly doubt that's the case.

I also take issue with the autopilot name as well, but I do think it's something that can be effectively communicated elsewhere with a product like this - where it really isn't that big of a deal. It is not a large barrier to communicate the immature state of the software and the limited current use-cases to each buyer beforehand regardless of branding.


but I do think it's something that can be effectively communicated elsewhere with a product like this

If Tesla is over-promising and under-delivering on their relative advances in autonomous driving as a sales tactic, then there is little defense of Tesla if drivers don't read enough fine print to understand the limitations of the autonomous driving systems.

In my assessment, Tesla is not being honest about capabilities. If they were, they'd have used the naming strategy that comes up with "adaptive cruise control," not "autopilot." (Sidebar: I don't care what "autopilot" actually means; the general public thinks it means computer magic flies the plane all by itself.)

Having misleading branding and marketing materials that show the CEO operating the product in an unsafe manner conveys a message to the viewer about how seriously to take the warning. They're giving the impression that the disclaimer is like "under-cooked meats can carry food-borne illness" on restaurant menus: the "don't try this at home" cover-your-ass warning while winking to try it at home.


Some people buy them used too...


Since an autopilot in an aircraft will not avoid a mid-air collision, this doesn't seem like an inappropriate term. Tesla's autopilot maintains a heading and speed, just like an aircraft autopilot, and usually behaves better in irregular circumstances.


Tesla has significantly reduced the nudge to put your hands on the wheel. I would hate it if they forced me to keep my hands on the wheel all the time. There are many situations (stop-and-go traffic, miles of freeway with no one around) where it doesn't make sense to continually hold the wheel.


Ok, maybe that suggestion was a bit drastic, since you do make a good point about being able to take your hands off while stopped, but there should be all sorts of bells/whistles/lights going off if you take your hands off the steering wheel while at speed.


The nudge is pretty prominent as the whole instrument cluster beeps. It also punishes you if you ignore the warnings by disabling autopilot (I think after 3 tries).


The real question is: does Autopilot save more lives than it claims, as most advocates like to say?

If the answer is no, then Tesla's Autopilot shouldn't be allowed as it is.

If the answer is yes, then it means that other manufacturers are too cautious.


Real life is not a trolley problem. Not least because in real life you never have perfect information about what the actual probabilities are, and where products are concerned there is a long history of companies concealing vital information about product risks.

(have we forgotten Feynman and the Challenger investigation? Doesn't have to be for-profit for people to mislead about risk)


That is a false alternative.

One very simple example from a complicated issue:

If Autopilot saves lives when used correctly, but Tesla's marketing material implicitly encourages people to use it incorrectly (e.g. video of Musk with no hands on the wheel), then the marketing is dangerous.


There should be no "when used correctly" - the question is whether it saves (net) lives or kills people when used normally as people tend to do for whatever reasons, no ifs or buts, just count the cases. Just as people violating laws by driving drunk still is a valid argument for more automation of driving, so people violating some guidelines by using automation incorrectly is a valid argument against it. If the automation can't handle the common misuse and still be safer on average, then it's not ready yet.


But Autopilot isn't an all-or-nothing thing. If Tesla could save lives by changing the name, requiring constant hands on the wheel, or doing a better job of monitoring that the person is actually in control, then they are being negligent by failing to do so.

If you invent a cure for cancer, but 1/10th of your pills contain rat poison that could easily have been filtered out, it doesn't matter that you saved lives overall. You still killed people through negligence.


In this situation, though, the parallel would be more closely stated as "If you invent a cure for cancer, but 1/10000000th of your pills have rat poison, did you kill people through negligence if you acknowledged that you're in a trial period and that there's a small chance they may have a reaction and die and made them sign a waiver accepting that risk?"

Every single person that uses Autopilot acknowledges the risk multiple times before they can use the system.


Thanks for saying it better than I could have.


No that is not the real or the key question. That is your assertion.

Debates about autopilot in cars have many more dimensions than simply the number of deaths.

There are many effects of this technology that go way beyond the number of deaths.


What do you think are the effects of this tech that are way beyond no. of deaths?


This is a false dichotomy, because the auto-brake system can work without autopilot, so a best-of-both-worlds option is possible.


Marketing is an amazing phenomenon.


The AP 1 sensor was in cars for years. No one else was foolhardy enough to try and use it for more than LKA, not because the sensor couldn't do it, but because no one wanted to market an "AP" feature that didn't work 100% of the time it was available


It displays how behind the times the NTSB (and most regulatory bodies) are. The fact that "the result was a collision that, frankly, should have never happened" is entirely their fault. Were they ever in contact/collaboration with Tesla about this kind of technology? If not, why weren't they? Are we going to continue down the same years-long path of reactionary regulations where several more people have to die before the government grows a backbone on this?

I'm all for the government keeping its nose out of places it doesn't belong, but as we've learned over the past century, companies don't give 2 shits about whether or not the products they release harm consumers; it's entirely up to government to set the boundaries. You won't find me in any car with any kind of self-driving "autopilot" feature for many, many years.


The NTSB does accident investigation, not driver assistance technology regulation.


> I think it's weird that so many people seem to think that Tesla have made such a significant technology leap ahead of everyone else that they're comfortable with putting their lives in the hands of their Autopilot system while other manufacturers have no such thing available in production.

Don't they, though?

From what I understand, Tesla's "Autopilot" doesn't do much more than lanekeeping, adaptive cruise control, and emergency braking, which you can buy on a wide range of vehicles today. For example, I drive a Honda CRV that does all of this - on the highway it really does drive itself. You don't even have to buy the top trim levels, just one up from the bottom.


Other manufacturers don't call their systems Autopilot. The name suggests that you push a button and arrive at your destination without having to do any driving yourself. I also don't see any YouTube videos showing their Hondas driving themselves automatically. It's mainly a marketing issue, but too many people seem to have bought it.


I think it's weird that so many people seem to think that Tesla have made such a significant technology leap ahead of everyone else

The marketing worked. What's so weird about that?

The most effective tool of marketing is that people think humanity is so smart that marketing isn't effective.


The fact that the NTSB is investigating is comforting. We're moving crashes from the domain of "shit happens" to more systematic "autopilot system X suffered Y deficiency", more analogous to the investigation of plane crashes. I'm hopeful to see the number of car fatalities sharply drop over the next 20 years as autopilots improve.


Yes. Here's the actual NTSB abstract.[1] The full report isn't out yet. Main recommendation:

To manufacturers of vehicles equipped with Level 2 vehicle automation systems (Audi of America, BMW of North America, Infiniti USA, Mercedes-Benz USA, Tesla Inc., and VolvoCar USA):

5. Incorporate system safeguards that limit the use of automated vehicle control systems to those conditions for which they were designed.

6. Develop applications to more effectively sense the driver’s level of engagement and alert the driver when engagement is lacking while automated vehicle control systems are in use.

The NTSB points out that the recorded data from automatic driving systems needs to be recoverable after a crash without manufacturer involvement. For plane crashes, the NTSB gets the "black boxes" (which are orange) and reads them out at an NTSB facility. They want that capability for self-driving cars.

[1] https://www.ntsb.gov/news/events/Documents/2017-HWY16FH018-B...


That's really reassuring. Constructive and thoughtful recommendations. Even though this government agency is in a country I do not live in, I feel better about autopilot systems from reading how this is being addressed over in the USA.


The NTSB and most other highly developed nations' equivalents really are quite amazing. For whatever reason, I read most of the full NTSB aviation incident reports. Aside from their thoroughness and the engineering geekery, they are truly master classes in how to do a proper postmortem.

The NTSB's ability to remain apolitical and engineering-focused is really remarkable. The way they were portrayed in "Sully" is shameful and uncalled for.


You might be thinking of the investigation of commercial air carrier crashes. In general aviation (loosely "light aircraft") crashes, there are still a great many of them closed with "pilot lost control of aircraft for undetermined reasons" and "NTSB staff did not travel as part of the investigation but relied on other agencies who had personnel on the ground".


I have huge trust in the NTSB, both from a public good perspective, and from a standpoint of their competency to examine incidents. They also seem to be pretty open with how even the smallest incidents they investigate have demonstration videos and findings made public.

Their involvement can only be a positive in the self-driving car industry


> The machine learning algorithms that underpin AEB systems have only been trained to recognize the rear of other vehicles, not profiles or other aspects.

Not something I have known about Tesla's autopilot before reading this.


Well, it's a glorified cruise control, so it makes some sense, even if it's not optimal. My cruise control doesn't recognize profiles or other aspects either, nor does it recognize the rear of other vehicles, because it's an old, dumb system. The last time I used a lane-assist system (a few months back), it had a hard time even finding the lane. These are not self-driving cars; these are driver-assisting cars.

You should probably feel safer making a long trip with a 16-year-old who just got their license driving than with one of these systems. Even then, I wouldn't feel too comfortable taking my attention from the road for too long.


Geeze, this is terrifying given that my rear probably doesn't look like that of a vehicle when I'm walking, or cycling, or motorcycling, etc.


I've seen Autopilot 2 fail to notice cyclists. But then, it's designed for use on large, fast roads, rather than city streets.

I cycle through the streets of London every day, BTW, so I'm not suggesting that its lack of recognition ability is a good thing.


That raises the question of why machine learning is underpinning AEB in the first place. It seems like a textbook problem for distance sensors.


AEB = Automatic Emergency Braking. It is one part of the Autopilot system and doesn't apply to the front radar which recognizes the whole car.


At the time, I understand that Tesla was effectively relying almost entirely on the computer-vision AEB system to prevent crashes like this one in auto mode. The radar was only used to provide more information about the distance of objects that were seen and recognised by the single monochrome camera.


It makes you wonder what else we don't know about how specific and non-general the Autopilot is in its object recognition capabilities, doesn't it?


If you use Autopilot for any length of time and believe that you don't need to be paying attention, then I'd suggest that you're responsible for whatever fate befalls you.

Autopilot 2 keeps you in lane, does traffic-aware cruise control, and can do lane changes. That's it. No more, no less.

It sometimes gets spooked by shadows. I've had it brake sharply when going under bridges on the motorway, because of the shadow on the road. I've had it mis-read lane markers (in understandable circumstances; the old lines had been removed but the marks they left still contrasted with the tarmac).

If anything, I find Autopilot 2 to require too much hand-holding - literally! I rest my right hand gently on the bottom-right of the wheel when Autopilot is engaged, resting my wrist on my knee, which is also how I drive regular cars on the motorway. This isn't enough for Autopilot to think I've got my hands on the wheel, so it's constantly bugging me, and I have to wiggle the wheel to tell it I'm there.

Apparently some Model 3s have been seen with driver-facing cameras. Eye-tracking seems like a much better solution to me.

I love Autopilot. I recently had to drive ~500 miles on motorways in a thing that explodes ancient liquidised organic matter, and it was an absolute chore. Autopilot is ace when used as intended, and great in traffic jams.


I don't understand why you're mixing autopilot with IC engines here. The two things are completely unrelated.


There is a ton of electric power available for features in a Tesla that might require an irresponsible amount of fuel in a regular car.


What?

IC cars have been shipping with all of the features currently available in Teslas for the better part of a decade. 1 horsepower = 745 watts, so we're talking about somewhere in the range of 20HP to generate as much power as the average Tesla consumption including powering the drivetrain.

I'd be surprised if all the electric features in a Tesla consume more than 5HP equivalent.
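
A quick sanity check of those figures (my own arithmetic; the 20HP and 5HP numbers are the rough guesses from above, not measurements):

    # Rough conversion of the horsepower figures above; 20HP and 5HP are guesses, not measurements.
    HP_TO_W = 745.7  # 1 mechanical horsepower is about 745.7 watts

    avg_drive_power_kw = 20 * HP_TO_W / 1000   # ~14.9 kW average driving draw
    accessory_budget_kw = 5 * HP_TO_W / 1000   # ~3.7 kW upper guess for electric features

    print(f"~{avg_drive_power_kw:.1f} kW to move the car, ~{accessory_budget_kw:.1f} kW for everything else")
    # A typical alternator delivers roughly 1.5-3 kW, so an IC car with the engine
    # running can cover most accessory loads too; the gap shows up with the engine off.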


Well for instance, a Tesla can run the AC as often as it wants to keep the car cool when you are away; there's no risk of "killing" its battery like in a regular car. Every device, all of it can turn on at will and not be too concerned about power.


The same is true for any IC car whose engine is on. It's the alternator that generates the power used; it's not taken from the battery.

IC cars even have the bonus that heating the cabin is effectively free, no extra battery drain from it.


Lots of folks are saying: too early. Let me make the opposite argument.

My wife has to do regular weekend interstate travel. She also has serious RSI issues with her hands and arms. The ability to keep focus but completely disengage from the steering wheel for a while, and from the accelerator for the length of the trip, is incredibly valuable to her. It's why we purchased the car.

Tesla is clear on the limitations. The feature is turned off by default. Turning it on presents the limitations to the user.


I understand Tesla's autopilot now disengages if you fail to return your hands to the wheel in time?


No matter how you cut it, the failure mode of trying to drive a car of known dimensions through a space smaller than those dimensions is pretty inexcusable.

>The machine learning algorithms that underpin AEB systems have only been trained to recognize the rear of other vehicles, not profiles or other aspects

Does Tesla's software make the assumption that the lane is safely navigable unless it detects otherwise?

IMO it's pushing the envelope of recklessness for the software of a non-critical convenience system to default to "good to go" (whereas something like a shipboard fire-suppression system should default to "good to go" in order to prevent corner cases from impacting functionality).

While I understand that you don't want the car braking aggressively when it encounters a bag blowing in the wind or poor lane markings mid-corner, not driving into something is a behavior you'd expect from an excessively conservative human driver, and it's preferable to running into the back of a van parked on the side of the road.
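
To make the "default good to go" distinction concrete, here's a minimal sketch in Python (my own framing with hypothetical names, not Tesla's actual logic):

    from dataclasses import dataclass

    @dataclass
    class Detection:
        """Hypothetical perception output for a region ahead of the car."""
        classified: bool          # did the model produce any confident label at all?
        is_known_obstacle: bool   # did it match something it was trained on (e.g. the rear of a car)?
        confidence: float

    def path_clear_permissive(detections):
        # "Good to go" by default: only a positively recognized obstacle triggers braking,
        # so anything the model was never trained on (a trailer in profile) sails through.
        return not any(d.is_known_obstacle and d.confidence > 0.5 for d in detections)

    def path_clear_conservative(detections, min_conf=0.9):
        # "Not good to go" by default: every region ahead must be confidently classified,
        # and classified as something other than an obstacle, before the lane counts as clear.
        return all(d.classified and d.confidence >= min_conf and not d.is_known_obstacle
                   for d in detections)

The conservative version is exactly the one that brakes for blowing bags and faded lane markings, which is the trade-off described above.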

edit: If you're gonna down-vote you might wanna say why.


Not sure why you're getting downvoted, because it's a good point. How in the world do you let a computer-vision decision override the other sensors that are saying "massive wall is moving in front of the car"? And who tf thought training the CV only on the backs of cars was good enough? The edge cases are what cause accidents, not the typical situations.


Because a lot of these are transient signals that are crap? One of our robots back in '06 would panic-stop every 30 seconds or so whenever there was a dust cloud (this was an army off-road formation project, so dust was a fact of life for all the followers), because occasionally there would be enough random scattered returns from the dust to convince the MMWR (millimetre-wave radar) that there was a wall in front of it. Panic-stopping from highway speeds every time your MMWR detected a flyer floating in the breeze near some dust seems like it would be pretty dangerous, especially for the cars behind you that are also traveling at highway speeds. And LADAR doesn't have the range to be useful at highway speeds (from memory, the ratio of range to safe braking distance means you can't get above 25-30km/h with LADAR being useful to you).
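
To put rough numbers on the range-versus-speed point (my own back-of-the-envelope, with illustrative reaction-time and deceleration values, not figures from any specific sensor): the sensor has to see at least reaction distance plus braking distance.

    # How fast can you safely go if your sensor only gives you `sensor_range_m` of trustworthy range?
    # reaction_time_s and decel_mps2 are illustrative assumptions.
    def max_safe_speed_kmh(sensor_range_m, reaction_time_s=1.0, decel_mps2=6.0):
        """Largest speed (km/h) where v*t + v^2/(2a) still fits inside the sensor range."""
        a, t, R = decel_mps2, reaction_time_s, sensor_range_m
        v = -a * t + (a * a * t * t + 2 * a * R) ** 0.5  # positive root of the quadratic, in m/s
        return v * 3.6

    for rng in (30, 60, 120):
        print(f"{rng:>3} m of usable range -> ~{max_safe_speed_kmh(rng):.0f} km/h")

    # The 25-30 km/h figure above falls out if usable range and grip are poor,
    # e.g. ~15 m of trustworthy returns and ~3 m/s^2 of off-road deceleration:
    print(f"~{max_safe_speed_kmh(15, decel_mps2=3.0):.0f} km/h with 15 m range and gentle braking")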

As for whether having shared autonomy at highway speeds is a good idea, that's a really good question that I hope Tesla is asking itself right now. But if you are trying to do that, cameras pretty much have to be your primary source of truth.


Seems that Elon Musk is taking quite an active role in showing dangers of AI.


The naive take is that he grossly overestimates AI capabilities ... tech today (1) cannot broadly drive a car and (2) is nowhere near an existential threat to humanity.


(1) he reads his SF; (2) he understands exponential growth.


Why does the title claim the problem was in the UX when the article claims it was faulty models?

>As NHTSA found, the Automatic Emergency Braking feature on the Tesla—in common with just about every other AEB fitted to other makes of cars—was not designed in such a way that it could have saved Brown's life. The machine learning algorithms that underpin AEB systems have only been trained to recognize the rear of other vehicles, not profiles or other aspects


Because the user is an integral part of Tesla's "Autopilot" system. The product name implies autonomy, but in fact, the vehicle requires a human in the loop for safe operation.

Unfortunately, a poorly designed user experience lulled the driver into behaving foolishly. That's why the headline mentions UX.


"lulled the driver"

Did it? Or did the driver know what they were doing and not care? I don't think it's responsible to make such assumptions about the driver's reasoning.


What's relevant is that this wouldn't have happened with any of the other semi-autonomous systems from other brands because they force the user to be keenly aware.

Had it been the same for all systems, then the argument could be made that the current concept in general is flawed in this iteration; but because the others demand a lot more driver involvement, the problem pivots to being a Tesla problem, not a semi-autonomous-driving problem.


"The machine learning algorithms that underpin AEB systems have only been trained to recognize the rear of other vehicles, not profiles or other aspects."

Holy crap. Who thought that was a good idea?


Level 3 autonomy is reckless. It provides next to no value when it's used as intended (i.e., a fully aware driver instantly ready to take over), and it is dangerous under normal usage conditions (i.e., people using it to text and drive).


This is just not true. It provides immense value in stop-and-go traffic and over long distances. I have driven 18,000 miles using Tesla Autopilot, and it is a life changer if you have a tough commute. One fatal crash doesn't change that.


While this is a good point, the obvious solution is to limit it to, say, under 20 mph and require constant hands on the wheel.

If you want to have your hands off the wheel and your nose in your phone, then you are driving recklessly.


That would not be very useful. Stop-and-go traffic usually moves between 0 and 40 mph. The initial AP2 rollout was limited to 35 mph, and I couldn't use it even in the Bay Area, as traffic would speed up to 40-45 and slow down to 0 continually. People have their nose in their phone even without Autopilot. You can't change that.


You could fix the speed to whatever would allow a person to jump back when warned. But I suspect that speed is pretty low.

The answer is tough shit: you don't get to not pay attention while driving. Get an Uber, then.

Having adaptive cruise control that manages stop and go will absolutely change the number of people on their phones.


I agree. Not useful for you at 20? Bad luck. For most folks in traffic, 20-30mph is plenty to help "auto"-navigate through the traffic jams.

My average driving speed on the car's display is currently something like 25mph, thanks to constant traffic. The car is rated for a 160mph top speed; how useful is that?


It is not useful, because it is not usable with a 20mph limit. This is from personal experience: stop-and-go traffic speed changes continuously. Say you are driving at 10mph and traffic then starts moving at 30mph, but your Autopilot limits you to 20mph. Then you have to disable AP and start driving yourself. When traffic slows down again, you have to re-enable it. At that point it is not very helpful.
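
A toy illustration of why a hard cap churns in stop-and-go (the speed trace below is made up, not measured traffic data):

    import math

    def traffic_speed(t_s):
        """Toy stop-and-go trace: traffic sloshing between 0 and 40 mph with a ~3 minute period."""
        return 20 + 20 * math.sin(2 * math.pi * t_s / 180)

    def takeovers_per_hour(cap_mph):
        count, engaged = 0, True
        for t in range(3600):
            if traffic_speed(t) > cap_mph:
                if engaged:        # traffic outruns the cap: the driver has to take over
                    count += 1
                    engaged = False
            else:
                engaged = True     # slow enough again: the driver can re-enable
        return count

    for cap in (20, 35, 45):
        print(f"cap {cap} mph -> {takeovers_per_hour(cap)} forced takeovers per hour")
    # Any cap below the peak traffic speed forces a takeover on every wave of traffic,
    # which matches the experience described above with the 35 mph AP2 limit.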


They have plenty of data from deployed cars to figure out what speed range covers 75% of vehicles, and they could set a slow-moving-traffic speed limit based on that.


We should limit humanpilot to those conditions as well. Or will you special-plead that lives lost because of humanpilot mistakes are morally less bad somehow?


The trade-off for the increased risk of speed is less time on the road.

The trade-off for Autopilot is that you don't have to move your foot 2-3 inches every 20 seconds. That's not worth the increased risk.


A humanpilot that is shown to be unable to spot a truck trailer in front of them would have its driving license revoked.


Just looked it up and cell phone use while driving leads to 1.6 million crashes each year. I'm guessing they don't all have their licenses revoked.


Fair enough, but they should :)


Could you explain more how it's a life changer? What exactly do you do in the car?

As in, are you not watching the traffic outside because of it, and concentrating on other things? If that is the case, are you not worried about something unexpected happening?


I have a 30-mile one-way commute, and a lot of it is stop and go. A major stress in stop and go is the constant braking and speeding up. With AP you can simply let the car do the slowing down and speeding up while you focus on the road ahead. Also, when you are on a road trip on rural highways with very few cars around, it is easier to let the car drive, as you are not continually focused on maintaining speed and keeping the car within the lanes.


"it is a life changer...one fatal crash doesn't change that"

Yeah it doesn't change that fact, it verifies it.


I would say that's an accurate "elevator pitch" of Level 3 or below autonomy. It's reckless of Tesla or anyone else to sell it as a working commercial feature just to try to get ahead of competitors.

I'm a big fan of Tesla and Musk, and you can see that from one of my recent comments on HN, too, but I've pretty much always criticized the Autopilot for exactly this reason.

Humans can't handle Autopilot in its current form, no matter how many asterisks Tesla puts in its Terms of Service and how many prompts and alerts it adds to warn the drivers. Either it's completely worry-free "let me take a nap until destination" self-driving - or it's not, and the driver should be expected to do manual driving at all times.


Audi has a new system in the A8 for traffic jams that works only up to 60km/h but is designed to be left alone completely. The Germans seem to be starting conservatively from the bottom and working up, while Tesla is much more aggressive. In this case I think I prefer the Germans' approach: they are not overpromising as much as Tesla.


I totally agree with you from the perspective of the driver, passengers and public safety.

I wonder, though, if manufacturers are going to greatly accelerate their progress to level 4 or above from the data they get by having level 3 in production vehicles. They may get to a higher-quality level 4 much more quickly by having what amounts in practice to a marketing-novelty feature in thousands of vehicles in real-world conditions.


I was about to make a very similar comment and hold very similar positions to yours. The "Autopilot" feature is in this weird no-man's land between two good places. As the article points out, the ambiguity of who's doing the driving is dangerous. The whole "you must be ready to override it" caveat really ignores what many people know about human behavior: if allowed to misbehave, someone will. The first X times a driver is allowed to ignore the caveat without consequences will embolden them to keep doing it until something bad happens. It seems almost reckless on Tesla's part to release the feature this way.


Tesla has L2, barely. L3 in fact allows you to not pay attention; L3 requires a warning period of several seconds (10-15) for you to take over the wheel.


Couldn't Level 3 autonomy still be valuable for drivers who are slightly impaired, like elderly people? That seems like a substantial market.


How would they be able to quickly take the wheel and brake if something happens? I think the opposite is true: if you need a longer reaction time, it's probably a bad idea not to be fully immersed in the driving until an accident is imminent.


I'm thinking of a scenario where the driver is holding the wheel, but the computer is actually doing the driving most of the time while giving just enough feedback to the driver to feel like she is in charge.


In our tests a decade ago, we found that shared autonomy was actually much more frustrating for users than either full or no autonomy. When you were telling the computer to go over there and the computer wanted to go over here, it left all of the users frustrated and complaining that the computer wouldn't let them go where they wanted. Maybe you can come up with a better adjudication system than we did, but it is very tricky to design the system so that it catches when the user does something stupid but not when they are doing the right thing- if the computer could figure out the right thing you wouldn't need the human!


Interesting, cool to hear your experience!


It's actually harder to drive in these scenarios because your reaction time needs to be faster and your takeover has to be more aggressive. Imagine an edge case where the car veers toward another car unexpectedly. This isn't guaranteed to be avoided by level 3.


The same elderly people who drive through the front of buildings because it takes them so long to recognize they hit the gas instead of brake?


I think this shows why level 3 is a slippery concept: it looks like it could be useful in this case if you think only of the times when it works properly, but the only reason for a system to be level 3 is that it cannot qualify for level 4, which means explicitly that it cannot be considered safe enough without an alert human to fall back on.

I took a look through the level descriptions for a level where the human is driving but is being monitored by the system, but they all seem to be versions of having the system drive while being monitored by a human. Maybe we can do more in the way of collision avoidance as the technology progresses towards true autonomy.


I think so, but only with substantial geofencing.


I think you need level 4 for them.


The first question should really be: "Is autopilot + human drivers safer than human drivers alone?" If it isn't, it would arguably be criminal to allow drivers access to it. As with most technological breakthroughs, there will be a learning curve (a euphemism, in this case, for deaths in car crashes). As long as it is an improvement over human drivers, it is a net good, I would argue. From this perspective, even if Tesla is releasing a system with flaws, it is BETTER than the alternative: more humans dying by driving their own cars. Imagine if they waited to release a more perfect autopilot system; wouldn't that be keeping a valuable safety system off the road?

This is only true, of course, if autopilot is actually safer than a human. From what I have read, there is little data to support either conclusion. Lacking proof that such a system is actually safer, Tesla should take the conservative engineering decision and reduce or eliminate drivers' access to such a system until enough data is available to make a conclusive statement on safety.
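
One way to see why "little data" is the honest summary (a sketch with placeholder numbers, not Tesla's actual mileage or fatality counts): with only a handful of fatalities, an exact Poisson confidence interval on the fatality rate is extremely wide.

    # Exact (Garwood) Poisson confidence interval for a fatality rate.
    # The event count and mileage below are illustrative placeholders.
    from scipy.stats import chi2

    def poisson_rate_ci(events, exposure_miles, alpha=0.05):
        """95% CI on fatalities per 100 million miles, given an observed count and mileage."""
        lo = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / 2
        hi = chi2.ppf(1 - alpha / 2, 2 * events + 2) / 2
        per_100m = 1e8 / exposure_miles
        return lo * per_100m, hi * per_100m

    lo, hi = poisson_rate_ci(events=1, exposure_miles=200e6)  # say 1 death in 200M engaged miles
    print(f"point estimate 0.5 per 100M miles, 95% CI ({lo:.2f}, {hi:.2f})")
    # The US human-driven baseline is roughly 1.2 deaths per 100M vehicle miles,
    # which sits comfortably inside that interval: the data can't yet separate
    # "safer than a human" from "not".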


These self-driving systems need to use machine learning models more in line with what humans see. Nobody recognizes just the rear of a car... we look at the rear of a car and are aware of its entire three-dimensional shape and of the three-dimensional structure of the world around it. Are there any machine learning models that can achieve this level of recognition, or is it all just associating names with pictures?


Did you see the recent article about recognizing stop signs? However the machine supposedly recognized stop signs, it had nothing to do with them being red, octagonal, or saying "STOP". It was possible to deface a sign such that it was clearly still red, octagonal, etc., but the machine didn't recognize it. So IMO the machine had no 'mental model' of what a stop sign is.


I agree that the name of the feature, "Autopilot", is misleading consumers. Honestly, Tesla should be sued for this and the "Autopilot" name recalled until it really is an autopilot!

"An autopilot - is a system used to control the trajectory of a vehicle without constant 'hands-on' control by a human operator being required."

What Tesla has is not an autopilot!


What Tesla has seems to fit that definition. In fact, what Tesla has seems pretty similar to the autopilot in airplanes — both keep the vehicle on the right course without constant control from a human, but both require a human to be ready to take control in unforeseen circumstances. If it's deceptive, it's because drivers have an overly hopeful idea of what autopilot is, not because it's a very inaccurate name.


It sounds to me like it is working as intended: intermittent human intervention in the trajectory of a vehicle. Autopilot has never meant 100% control by a robot; even airplanes still have pilots for most of the flight.

Rip Riley: Uncuff me, you idiot! Holy God, if we overshot our chance to refuel...

Sterling Archer: I thought you put it on autopilot!

Rip Riley: It just maintains course and altitude! It doesn't know how to find THE ONLY AIRSTRIP WITHIN A THOUSAND MILES SO IT CAN LAND ITSELF WHEN IT NEEDS GAS!

Sterling Archer: Then I, uh... misunderstood the concept.


Elephant in the room: why doesn't Tesla partner with Google for autopilot? Google has tons of data, and Tesla has an order of magnitude better software capabilities. Such a partnership could even accelerate global autonomous-vehicle adoption.


Because both companies believe that owning proprietary self-driving systems will be a big competitive advantage. If one company beats the other to mainstream adoption by 12-18 months they'll have a shot at dominating the industry.


Perhaps a stupid question, but what entity tests self-driving cars before they are admitted to the streets? Is this entity partially responsible?


"Minor Role" I read.


The report found human error was equally to blame and went out of its way to say that self driving cars are the future. This article is sensationalizing.


Sure, but the human error is having too much confidence in the autopilot.


The system was warning him more or less constantly, according to the report.


I'd count that as part human error, part design error. Constant warnings shouldn't be possible with this; it should disengage and refuse to function again until you start driving more responsibly.
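
A minimal sketch of the kind of escalation being described (my own design, not the behavior of any shipping system): repeated ignored warnings end in a lockout instead of an endless nag loop.

    class AutopilotNagPolicy:
        """Toy escalation policy: ignored hands-on warnings escalate, then disengage,
        then lock Autopilot out for the rest of the drive."""

        def __init__(self, strikes_to_disengage=3, strikes_to_lockout=5):
            self.strikes_to_disengage = strikes_to_disengage
            self.strikes_to_lockout = strikes_to_lockout
            self.strikes = 0
            self.locked_out = False

        def hands_detected(self):
            # Responding to a warning clears the escalation, but never a lockout.
            if not self.locked_out:
                self.strikes = 0

        def warning_ignored(self):
            """Call each time a hands-on warning times out without a response."""
            if self.locked_out:
                return "locked out: manual driving for the rest of this drive"
            self.strikes += 1
            if self.strikes >= self.strikes_to_lockout:
                self.locked_out = True
                return "locked out: manual driving for the rest of this drive"
            if self.strikes >= self.strikes_to_disengage:
                return "disengage and slow down safely"
            return "escalate warning (louder chime, seat vibration)"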



