Tesla crash in September showed similarities to fatal Mountain View accident (abc7news.com)
303 points by jijojv on Apr 5, 2018 | 562 comments



What is happening has been clear to many from the start: Tesla embodies the behavior of its founder, exaggerating what the technology can really do right now and selling it as a product without the preconditions for it to be safe. A product that kind of drives your car but sometimes fails, and requires you to pay attention, is simply a crash waiting to happen. And don't get trapped by the "data driven" analysis, like "yeah, but it's a lot safer than humans", because there are at least three problems with that statement:

1. Ethical. It's one thing to do something stupid and die. It's another for a technology to fail in trivial circumstances that are in theory avoidable. A doctor can also make errors, but a medical device is required to be very safe and reliable in what it does.

2. Wrong comparisons: you should compare the autopilot against a rested, focused driver who drives slowly and with a lot of care. Otherwise the statistics don't account for the fact that when you drive, you are in control and can choose your level of safety. A diligent driver who crashes because of immature software is a terrible outcome.

3. Lack of data: AFAIK there is not even enough publicly available data telling us the crash rate of Teslas with Autopilot enabled vs. Teslas without it, per kilometer, under the same road conditions. That is, you need to compare only on the same roads where Autopilot is able to drive (a sketch of this comparison follows below).

Autopilot will rule the world eventually and we will be free to use the time differently (even though this was already possible 100 years ago by investing in public transportation instead of everyone spending money on a car of their own... which is sad. Fortunately in central Europe this happened to a degree; check for instance northern Italy, France, ...). But until it is ready, shipping it as a feature in such an immature state just to have an advantage over competitors is terrible business practice.
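To make point 3 concrete, here is a minimal sketch of the like-for-like comparison it asks for, restricted to roads where Autopilot can operate. The segment names, mileages, and crash counts are entirely hypothetical; Tesla has not published data in this form.

```python
# Minimal sketch of the comparison in point 3: crash rates per km,
# restricted to the same road segments, split by whether Autopilot was engaged.
# All field names and numbers are hypothetical, purely to illustrate the idea.

records = [
    # (road_segment, autopilot_engaged, km_driven, crashes)
    ("highway_101_sb", True,  1_200_000, 1),
    ("highway_101_sb", False, 2_500_000, 3),
    ("i_280_nb",       True,    900_000, 0),
    ("i_280_nb",       False, 1_800_000, 2),
]

# Only keep segments where Autopilot is actually able to drive,
# so both groups are measured under the same conditions.
autopilot_capable = {"highway_101_sb", "i_280_nb"}

def rate_per_million_km(engaged: bool) -> float:
    km = sum(r[2] for r in records if r[0] in autopilot_capable and r[1] == engaged)
    crashes = sum(r[3] for r in records if r[0] in autopilot_capable and r[1] == engaged)
    return 1e6 * crashes / km

print(f"Autopilot on:  {rate_per_million_km(True):.2f} crashes per million km")
print(f"Autopilot off: {rate_per_million_km(False):.2f} crashes per million km")
```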


> 1. Ethical. It's one thing to do something stupid and die. It's another for a technology to fail in trivial circumstances that are in theory avoidable. A doctor can also make errors, but a medical device is required to be very safe and reliable in what it does.

This needs elaboration or citation. You're one of the only folks I've seen come straight out and make this point: that somehow a technology or practice that is objectively safer can still be "unethically safe" and should be shunned in favor of less safe stuff.

I don't buy it. This isn't about individual action here; no rejection of utilitarian philosophy is going to help you.

In fact, medical devices actually seem to run counter to your argument. No one says an automatic defibrillator's decisions as to when to fire are going to be as good as a human doctor's, and in fact these things have non-zero (often fatal) false positive rates. But they save lives, so we use them anyway.

Bottom line, that point just doesn't work.


I think the point he's making is that it's worse if a person dies because of something they have no control over (self-driving car malfunction) than if a person dies because of their own stupid choices (driving drunk, driving too fast for conditions, running red lights, etc).

This, of course, ignores the fact that the stupid choices drivers make tend to affect other people on the road who did nothing wrong, so the introduction of a self-driving car which makes fewer stupid decisions would reduce deaths in both categories of people here.


> I think the point he's making is that it's worse if a person dies because of something they have no control over

Perhaps. I reject this argument though. A death is a death.

And if you prevent lifesaving technology then you are responsible for a whole lot of deaths, regardless of how "deserved" those deaths are.


>Perhaps. I reject this argument though. A death is a death.

A death caused by shitty engineering (e.g. Tesla in this case) is not the same as a death caused by gross negligence on one's own part.

>And if you prevent lifesaving technology then you are responsible for a whole lot of deaths, regardless of how "deserved" those deaths are.

How many deaths has Tesla prevented, which would have happened otherwise?


> How many deaths has Tesla prevented, which would have happened otherwise?

Certainly a larger number than the number of deaths caused by Autopilot failure (which makes major news in every individual case). Have a look at YouTube for videos of Teslas automatically avoiding crashes.


I don't see a number.


> A death caused by shitty engineering (e.g. Tesla in this case) is not the same as a death caused by gross negligence on one's own part.

The result is the same, no? Isn't the idea to save lives? Why does one life have more value than another's simply because someone's death was caused by their own negligence?

What about the other people on the road? Do their lives matter less because of the gross negligence of the person not paying attention while they're driving?

This issue is a lot more complicated than you're making it seem.


It is unethical to ship non-essential/luxury features containing known defects which could lead to someone's death.


> because of something they have no control over

Except in this case, the driver has 100% control.


The driver's not the only one on the road.


No, that conclusion completely ignores the fact that the kind of people who buy Teslas are generally not the kind of people who drive unsafely anyway and die in terrible wrecks.

If you give every shitty driver a self driving Tesla maybe you would do something to make roads safer, but if you’re just giving it to higher net worth individuals who place greater value on their own life, you haven’t even made a dent in traffic safety.

In fact, in some cases all you're doing is making drivers less safe, because the autopilot encourages them not to pay attention to the road, no matter how much you think they are watching carefully. The men killed in Teslas could all have avoided their deaths if only they had been paying attention. If I see a Tesla on the road I stay the hell away, lest it do something irrational from an error and kill us both.


Is there evidence for your better driver claim? https://wheels.blogs.nytimes.com/2013/08/12/the-rich-drive-d...

I do see some sources that claim rich drivers get better insurance rates, but it is unclear to me if that is due to driving skill or a number of other factors that affect rates, like likelihood of being stolen or miles driven.


Seems like your first two paragraphs are amenable to analysis. Surely there is data out there that splits traffic accident statistics on income, or some proxy for income. Is a Tesla on autopilot more accident-prone than a BMW or Lexus? The numbers as they stand certainly seem to imply "no", but I'd be willing to read a paper.

The third paragraph though is just you flinging opinion around. You assert without evidence that a Tesla is likely to "do something irrational from an error and kill us both" (has any such crash even happened? Teslas seem to be great at avoiding other vehicles; where they fall down tends to be in recognizing static or slow things like medians, bikers and turning semis, and not turning to avoid them). I mean, sure. You be you and "stay the hell away" from Teslas. But that doesn't really have much bearing on public policy.


Drunk drivers, recklessly fast drivers, redlight runners, stop sign blowers, high speed tailgaters... those are the demographic you have to compare the Tesla drivers to. Do people who buy Teslas engage in those kinds of dangerous activities?


> Drunk drivers, recklessly fast drivers, redlight runners, stop sign blowers, high speed tailgaters.

Wait, so you are willing to share the road with all those nutjobs, yet you're "staying the hell away from" Teslas you see which you claim are NOT being driven by these people? I think you need to reevaluate your argument. Badly.

That even leaves aside the clear point that a Tesla on autopilot is significantly less likely to make any of those mistakes...


>That even leaves aside the clear point that a Tesla on autopilot is significantly less likely to make any of those mistakes...

What are you basing this on? Specifically, how is it 'clear' and what data has shown this to be 'significant'?


Elon Musk probably


I stay away from all these people. I have no illusions that a self driving Tesla is significantly safer.


I don't think a self-driving car is safe today, but the fundamentals of machine learning seem sound, so arguably they will become safer with every mile they drive (and every accident they cause).

My only concern is that there should be somewhat responsible people working on it (this means, for example, no Uber, Facebook, LinkedIn, or Ford self-driving cars on public roads).

But thinking about it a bit more, what if competitors shared data? Would that get us "there" (level 4+) at all? Would it be a distraction?


There was a recent news item about a Tesla driver who fell asleep at the wheel and who was allegedly over the legal limit:

https://duckduckgo.com/?q=tesla+driver+drunk+asleep&t=ffsb&a...


Yes, that sounds like a pretty good profile of people buying expensive high-performance cars.


yes


Well, this is pretty classist...


The reason people buy expensive cars is that it is one of the ways in which they can quickly and quietly signal wealth or status. Cars are classicist, like it or not.

After all, how much better can a $500K supercar be compared to a $50K car? Definitely not ten times better: the speed limits are the same, seating capacity is likely smaller, and there may be a marginal improvement in acceleration with a corresponding reduction in range (and increased fuel consumption).

Even having a car / not having a car is a status thing for many people (and it goes both ways, some see not having a car as being 'better' than those that have cars and vice versa, usually dependent on whether or not car ownership was decided from a position of prosperity or poverty).


>Cars are classicist, like it or not.

as a trained classicist, I take exception to the idea a car could do what I do


I meant it as an adjective, not as a noun (I'd have added 'a' in front). Is it incorrect?


You'd want "classist" if you mean prejudice against a class


Noted! Thank you.


Some of us just really like nice cars without signaling wealth. I recently bought a Hyundai Genesis even though I liked and could afford the better Mercedes Benz because I preferred to have a non-luxury brand. It's a New England thing.


I went from a two door Hyundai Getz to a Tesla because I wanted an electric car. Not to signal anything. I also couldn't wait for the Model 3 to come to Australia due to a growing family and a 4 door requirement.


Oh I don't disagree. Rather, I was responding to the assertion that wealthy people are better drivers.


>You're one of the only folks I've seen come straight out and make this point: that somehow a technology or practice that is objectively safer can still be "unethically safe"

There are plenty of people making that case. See for example this piece in The Atlantic the other day (https://www.theatlantic.com/technology/archive/2018/03/got-9...) talking about the concept of moral luck in the context of self driving cars. It puts the point eloquently.


Ethics works by consensus. If the majority of the public become convinced that they would rather die as a result of their own choices than as a result of the choices of a machine they have no control over, even if the machine kills less often, then the machine is not an ethical choice anymore. Basically, forcing people to die in one specific way stinks to high heaven.

As to why we use certain technologies, that is not so clear cut either. For instance, if I have a heart attack and someone uses a defibrillator on me, at that moment it is not necessarily my choice. I'm incapacitated and can't communicate, and if I die at the end there's no way to know what I would have chosen.

Not to mention, most people are not anywhere near informed enough to decide what technology should be used to save or protect their lives (that's why we have vaccine deniers, for instance).


> Basically, forcing people to die in one specific way stinks to high heaven

Then these people should stop trying to force me to accept a less safe method of transportation, by preventing me from using new technology!

Yeah, those people shouldn't be forced to use a self-driving car. Which they are ALREADY not being forced to do.

Nobody is being forced to use the technology. Just don't use it if you don't like it.

It is literally the opposite. These other people are forcing ME into a more dangerous situation.


Ok, you take the red tape away from silencers and short barreled rifles and I'll take it away from your self driving cars.

I'm all for less red tape.


Might help to give the context that suppressors can save target shooters' hearing.


The technology is not yet safe, as should be evident by now. It's being promoted as safer than humans, but it's not anywhere near that, yet, mainly because to drive safely you need at least human-level intelligence. Even though humans also drive unsafely, they have the ability to drive very, very safely indeed under diverse conditions that no self-driving car can tackle with anything approaching the same degree of situational awareness and therefore, again, safety.

For the record, that's my objection to the technology: that it's not yet where the industry says it is and it risks causing a lot more harm than good.

Another point. You say nobody is being forced to use the technology. Not exactly; once people start riding in self-driving cars then everyone is at risk: anyone can be run over by a self-driving car, anyone can be in an accident caused by someone else's self-driving car, etc.

So it's a bit like a smoker saying nobody is forced to smoke: yeah, but if you smoke next to me I'm forced to breathe in your smoke.


>In fact, medical devices actually seem to run counter to your argument. No one says an automatic defibrillator's decisions as to when to fire are going to be as good as a human doctor's, and in fact these things have non-zero (often fatal) false positive rates. But they save lives, so we use them anyway.

Yes, we would also tolerate Teslas if they were critical life-support technology. How many lives have they saved, BTW?

>and in fact these things have non-zero (often fatal) false positive rates

What is non-zero? Ten to the power of -20?


The key here is that in the larger system, we can do more to ensure humans perform better. If we encouraged and enforced behaviors that improve driving statistics, perhaps with other technologies, we would end up with a much harder baseline to beat than our current driving stats. I agree we should spend our time doing that, rather than accepting an inferior level of performance.


It is definitely much safer. However, it's unethical to gloss over/cover up/play down the fact that people have died because it failed to drive the car. Both can be true.


Everyone talking about statistical evidence should take a look at this NHTSA report [1]. For example, "Figure 11. Crash Rates in MY 2014-16 Tesla Model S and 2016 Model X vehicles Before and After Autosteer Installation", where they are 1.3 and 0.8 per million miles respectively.

This report seems to have shot itself in the foot by apparently using 'Autopilot' and 'Autosteer' interchangeably, so it leaves open the possibility that the Autopilot software improves or adds features to fairly basic mitigation methods, such as emergency braking, while having unacceptable flaws in the area of steering and obstacle detection at speed. In addition, no information is given on the distribution of accident severity. It is unfortunate that the report leaves these questions open.

Even if these claims are sustained, there are two specific matters in which I think the NHTSA has been too passive. First, as this is beta software, it should not be on the public roads in what amounts to an unsupervised test that potentially puts other road users at risk. Second, as this system requires constant supervision by an alert driver, hands-on-the-wheel is not an acceptable test that this constraint is being satisfied.

[1] https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF
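For reference, the relative change implied by those two figures is easy to compute; the caveats above about what the report does and does not separate still apply.

```python
# Quick arithmetic on the two figures quoted from the NHTSA report (Figure 11):
# crash rates per million miles before and after Autosteer installation.
before = 1.3
after = 0.8

relative_change = (before - after) / before
print(f"Reduction: {relative_change:.0%}")   # ~38%

# Note: as discussed above, the report does not separate Autopilot from
# Autosteer, nor does it give accident-severity or same-road breakdowns,
# so this ratio alone says little about Autopilot itself.
```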


I mentioned this in a reply to another comment, but the NHTSA findings have apparently raised suspicions in other researchers. Currently, the DOT/NHTSA are facing a FOIA lawsuit for not releasing data (they assert that the data reveals Tesla trade secrets) that can be used to independently verify the study's conclusion:

http://www.safetyresearch.net/blog/articles/quality-control-...

https://jalopnik.com/feds-cant-say-why-they-claim-teslas-aut...


You can add automated safety features without requiring a completely insane and dangerous mode of operation (like pretending the car can drive itself, except that it cannot, and pretending that you did not pretend in the first place, while reminding the driver to keep their hands on the wheel and stay alert... and what is even the point then?)

They could be more useful with -- for now -- fewer features. They probably won't do it because they want some sacrificial beta testers to collect more data for their marginally less crappy next version. But given that the car does not even have the necessary hardware to become a real self-driving car (and that some analysts even think Tesla is going to fold soon), the people taking the risk of being sacrificed will probably not even reap the benefits of the risks they have taken, having paid enormous amounts of money to effectively work for that company (among other things).


That could be because the car is usually right when it thinks it's in danger; this crash happened because it incorrectly thought it was safe. Having an assistive device that emergency-brakes or steers around obstacles is great as long as there are very few false positives, as long as it's assistive and not autonomous. Once it's autonomous, you need to have extremely, extremely low false negative rates as well.
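A rough sketch of that asymmetry: an assistive feature mostly has to keep false positives low, but an autonomous one also has to keep per-encounter false negatives low enough that they don't compound over thousands of real-obstacle encounters. The encounter count and miss rates below are hypothetical, for illustration only.

```python
# Sketch: probability of missing at least one real obstacle over N encounters,
# for a few hypothetical per-encounter false-negative rates.

def p_at_least_one_miss(p_miss_per_encounter: float, encounters: int) -> float:
    """Probability of at least one miss over N independent encounters."""
    return 1 - (1 - p_miss_per_encounter) ** encounters

encounters_per_year = 10_000  # hypothetical number of "real obstacle" events per year
for p_miss in (1e-3, 1e-5, 1e-7):
    print(f"miss rate {p_miss:.0e}: "
          f"P(>=1 miss in a year) = {p_at_least_one_miss(p_miss, encounters_per_year):.4f}")
```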


4. There is another completely unknown variable - how many times would the autopilot have crashed if the human hadn't taken over. So Tesla's statistics are actually stating how safe the autopilot and human are when combined, not how safe the autopilot is by itself.


I thought the autopilot was always supposed to be combined with the human.

It’s not a self-driving car. It’s really an advanced cruise control.


From https://www.tesla.com/autopilot: "All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver"

Why do we keep referring to something that we understand should require human supervision as "auto"? Stop the marketing buzz and let's be real.


I'm not sure whether it's intentionally been worded that way, but that sentence makes only a statement about hardware, not software. So technically, it's correct that the hardware is more capable than that of a human ("sees" and "hears" more), but it's the software that's not up to par.


That's like "Made with 100% local organic chicken" is only pointing out that the organic chicken is local, unlike the non-organic chicken whose provenance is not guaranteed.

It would be great to live in the world of marketing people, where everyone is able to parse weasel words like that. That would solve the fake news problem overnight.


People still use waterproof and water resistant interchangeably. People don't get the difference. Same with HW/SW, they won't know the difference. They read this web page and they think they are buying a self driving car.


bizarre statement

is there any point in saying that a hypothetical brain-dead, comatose bodybuilder is stronger than a starving man?


Yes, if the bodybuilder can be software-upgraded remotely out of the coma.


And if the coma was remote software upgrade induced? Stretching the analogy a bit, but a lot of people hype Tesla’s remote updates without considering how many remote updates tend to brick our devices.


Then there's still a point in making the claim.


That sentence is just saying that the _hardware_ (not the software) is sufficient for "full self-driving capability". The current _software_ doesn't support that.

The point being that in the future the car _could_ get "full self-driving capability" via a software update. In contrast, a car that doesn't have the necessary hardware will never be fully self-driving even if we do develop the necessary software to accomplish that in the future.


And yet it is quasi-criminal (from an ethical POV) that they have worded it that way, for two reasons:

a. When you buy a car, why should you even care about that HW/SW distinction? More importantly, do you keep the distinction in mind at all times, and are advertisements usually worded that way, stating that maybe the car could become self-driving one day (but without even stating the "maybe" explicitly, just using tricks)?

b. It is extremely dubious that the car even has the necessary hardware to become a fully autonomous driving car. We will see, but I don't believe it much, and more importantly competitors and people familiar with the field also don't believe it much...

People clearly are misunderstanding what Tesla Autopilot is, but this is not, ultimately, their fault. This is Tesla's fault. The people operating the car can NOT be considered perfect, flawless robots. Yet Tesla's model treats them as exactly that, and rejects all responsibility, not even acknowledging that treating them that way was a terrible mistake. We need to act the same way we do when a pilot error happens in an airplane: change the system so that the human makes fewer mistakes (especially if the human is required for safe operation, which is the case here). But Tesla is doing the complete opposite, by misleading buyers and drivers in the first place.

Tesla should be forced by the regulators to stop their shit: stop the misleading and dangerous advertising; stop their uncontrolled Autosteer experiment.


A.) Pretty sure that statement was made to assuage fears that people would be purchasing an expensive asset that rapidly depreciates in value, only to witness it becoming obsolete in a matter of years because its hardware doesn't meet the requirements necessary for full self-driving functionality. Companies like Tesla tout over-the-air patching as a bonus to their product offering. Such a thing is useless if the hardware can't support the new software.

I think I actually sort of disagree with your reasoning in precisely the opposite direction. Specifically, you state the following: "The people operating the car can NOT be considered perfect, flawless robots."

I agree with that statement 100%. People are not perfect robots with perfect driving skills. Far from it. Automotive accidents are a major cause of death in the United States.

What I disagree with is your takeaway. Your takeaway is that Tesla knows that people aren't perfect drivers, so it is irresponsible to sell people a device with a feature (autopilot) that people will use incorrectly. Well, if that isn't the definition of a car, I don't know what is. Cars in and of themselves are dangerous, and it takes perhaps 5 minutes of city driving to see someone doing something irresponsible with their vehicle. This is why driving and the automotive industry are so heavily (and rightly) regulated.

The knowledge that people are not safe drivers, to me, is a strong argument in favor of autopilot and similar features. I suspect, as many people do, that autopilot doesn't compare favorably to a professional driver who is actively engaged in the activity of driving. But this isn't how people drive. To me, the best argument in favor of autopilot is - and I realize this sounds sort of bad - that as imperfect as it may be, its use need only result in fewer accidents, injuries, and deaths than the human drivers who are otherwise driving unassisted.


Wow! I'm glad you pointed that out. It was subtle enough I didn't catch it. But perhaps we should consider this type of wording a fallacy, because with that level of weasel-wording, almost anything is possible! The catch is that it presupposes a non-existent piece of information, the software. And we don't know if that software will ever - or can ever - exist.

Misleading examples of the genre:

My cell phone has the right hardware to cure cancer! I just don't have the right app.

The dumbest student in my class has a good enough brain to verify the Higgs-Boson particle. He just doesn't know how.

This mill and pile of steel can make the safest bridge in the world. It just hasn't been designed yet.

Your shopping cart full of food could be used to make a healthy, delicious meal. All you need is a recipe that no one knows.

Baby, I can satisfy your needs up and down as well as any other person. I just have to... well... learn how to satisfy your needs!


All depends on how likely you think it is that self-driving car tech will become good enough for consumer use within the next several years.

If we were well on the way to completing a cure for cancer that uses a certain type of cell phone hardware, maybe that first statement wouldn't sound so ridiculous.


Yes, but of course the only thing that matters is whether or not the car can do it. That it requires hardware and software is important to techies but a non-issue to regular drivers. They buy cars, not 'hardware and software'.

And if by some chance it turns out that more hardware was required after all, they'll try to shoehorn the functionality into the available package, if only to save some money but also to avoid looking bad from a PR point of view. That this compromises safety is a given: you can't know today exactly what it will take to produce this feature until you've done so, and there is a fair chance that it will in fact require more sensors, a faster CPU, more memory, or a special-purpose co-processor.


I agree that putting that statement at the top of the /autopilot page may insinuate that that's what Autopilot is, but the statement is describing the hardware on the cars rather than the software. I think it's intended to be read as "if you buy a new Tesla, you'll be able to add the Full Self-Driving Capability with a future software update; no hardware updates required." It could be made more clear, though.


People will differ about whether the statement is worded clearly enough, but it is a bizarre thing to put at the very top of the page. It is completely aspirational, and there is no factual basis for it either. No company has yet achieved full self-driving capability, so how can Tesla claim their current vehicles have all the hardware necessary? Even if it's true that future software updates will eventually get good enough, what if the computational hardware isn't adequate for running that software (e.g. older iPhones becoming increasingly untenable to use with each iOS update)?


The autopilot page needs to start with a description of what autopilot is, and then farther down the page, the point about not having to buy new hardware for "full" self driving could be made. This probably still needs a disclaimer that that is the belief of the company, not a proven concept.

But that's not going to happen, because Tesla wants to deceive some people into believing that autopilot is "full self driving" so they will buy the car.


That's what Tesla says, but that's not how people are using it - and the more comfortable people grow with the autopilot, the less vigilant they'll become. I have this picture in my head of people trying to recoup their commute time as though they're using public transport. I suspect we'll get there some day, but today is not that day.

While the Tesla spokespeople are good at saying it's driver assist, their marketing people haven't heard - https://www.tesla.com/autopilot/. That page states "All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver." but as I noted above, we don't know how much of that safety should be attributed to the human. Tesla apparently knows when the driver's hands are on the steering wheel and I presume they can also tell when the car brakes, so they may have the data to separate those statistics. At a minimum, their engineers should be looking at every case where the autopilot is engaged but the human intervenes. They should probably also slam on the brakes (okay ... more gently) if the driver is alerted to take over but doesn't.

As an aside, just the name "Autopilot" implies autonomy.


"All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver."

This is perhaps a case of purposely confusing marketing. All vehicles have the hardware for full self-driving capability but not yet the software. The full self-driving is to be enabled later on through an over-the-air software update.


They can't even honestly claim they've got the hardware, when "full self-driving capability at a safety level substantially greater than that of a human driver" is at this stage something in the realm of hypothesis, and the software which gets closest to achieving it relies on far more hardware than a Tesla's.

It's not merely purposely confusing. It is, at best, not an outright lie only because they hope it's true.


They don't even have the hardware equivalent of a human driver.

People's eyes can detect individual photons, at much higher resolution and dynamic range than a typical camera.


It's amazing how many "basics" Tesla is missing with the statement that their cars have all the tech to be fully self-driving. I have an Infiniti QX80 with the surround camera system, lane stay, and collision warning system. Unlike Tesla, they've not gone as far as to implement an autopilot feature, and from what I can tell, for good reason. In the less than a year I've owned it, the side collision warning sensor / camera combination has falsely identified phantom threats due to road dust and debris and entirely missed real threats. The sensors simply don't have a self-cleaning mechanism like our eyes do, which is one fairly obvious problem that's led to a few issues with my QX80. In looking through Tesla's marketing white papers and whatnot, I see no mention of how they keep their sensor system clean. It seems like that should be a pretty basic concern.


I'm not even sure it's fair to describe it as purposefully confusing. If I'm in the market for a Tesla I want to make sure the expensive car I'm buying will be getting all the updates I'm excited about. Otherwise I might hold off buying for a few years.


Even HN readers are misinterpreting it, and there is nothing from Tesla's side on these marketing pages to prevent confusion.

The question of purposefulness is mostly irrelevant. It is their responsibility to avoid ambiguity in this domain, because it could be dangerous. They are not doing it => they are putting people in danger. Now, a posteriori, if somebody manages to sue Tesla after a death or a bad injury, maybe the purposefulness will be examined (though it will be hard to decide); but a priori it does not matter much for the consequences of their in-any-case misleading claims (even if, in the minds of the people who wrote it that way in the first place, it was only harmless advertising).

To finish: that they are carefully choosing their words to be technically true over and over, yet understood another way by most people, is at least distasteful. That they are doing it over and over, through every existing channel, makes it more and more probable that it is even on purpose. Of course there is no proof, but past a certain point we can be sufficiently convinced without a formal proof (hell, even math proofs are rarely checked formally...).


The statistics also include all the cases where drivers are not paying attention as they should, and it's still safer than the average car (at least according to Tesla).


> it's still safer than the average car (at least according to Tesla).

This is Tesla's big lie.

In all their marketing, Tesla is comparing crash rates of their passenger cars to ALL vehicles, including trucks and motorcycles, which have higher fatality rates. Motorcycles are about 10x-50x more dangerous than cars.

Not only that, they aren't controlling for variance in driver demographics - younger and older people have higher accident rates than middle-aged Tesla drivers - as well as environmental factors - rural roads have higher fatalities than highways. Never mind the obvious "accidents in cars equipped with Autopilot" vs "accidents in cars with Autopilot on".

If you do a proper comparison, Tesla's Autopilot is probably 100x more fatal than other cars. It's a god-damned death trap.

And remember, there are several extremely normal cars with ZERO fatalities: https://www.nbcnews.com/business/autos/record-9-models-have-...

This is not a problem that will be solved without fundamental infrastructure changes in the roads themselves. Anyone that believes in self-driving cars should never be employed, since they don't know what they're talking about.


I agree that the comparison with all motor vehicle deaths is misleading, and that we ought to be looking at accident rates for Tesla cars with Autopilot on versus off. That Tesla hasn't answered the latter despite having the data to do so is concerning.

However, I don't see the evidence for the claim that "Tesla's Autopilot is probably 100x more fatal than other cars". The flip side of the complaint that Tesla hasn't released information to know how safe Autopilot really is, is that we don't know how unsafe it really is either. If this is merely to say "I think Autopilot is likely very unsafe" then just say so, rather than giving a faux numerical value.

As for the claim that self-driving cars can't work without "fundamental infrastructure changes" and everybody working on self-driving cars should be fired, I think you're talking way beyond your domain of expertise.


You're complaining about one side using wildly misleading and baseless stats, but then turn around and throw out a completely baseless and fairly absurd claim with no attempt to even back or source it, and then claim that because some cars have 0 fatalities that means something.

The truth is somewhere in between Tesla's marketing and your wildly absurd 100x-more-fatal claim, but it's much closer to Tesla than to you.

Tesla's statistics (i.e. real numbers, but context means everything) do involve a whole whack of unrelated comparisons (buses, 18-wheelers, motorcycles) that all serve to skew the stats in various ways, so we can ignore their claim of being slightly safer than regular cars.

However, comparing more like to like, IIHS numbers for passenger-car driver deaths on highways put Tesla at 3.2x more likely than all other cars to be involved in a fatal crash (1 death per 428,000,000 miles driven vs. Tesla's 1 death per 133,000,000 miles driven).

Of course this too is an unfair comparison. A 133 hp econobox/Prius and a sports car are treated as equal in terms of performance in that comparison. If one were really interested in accuracy, a comparison of high-power AWD cars in similar price ranges, driven on the same roads by the same demographics, would be needed.

So even by standards clearly biased against Tesla, they are nowhere near 100x more fatal than other cars. Tesla's own numbers claim autopilot reduces accidents, and supposedly NHTSA numbers back them up.

It's important and critical not to believe marketing hype and lazy statistics. If you want people to take you seriously, countering hype and bad stats with equal or worse hype and worse counter-stats is not the way to do it.
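As a quick sanity check of the per-mile figures quoted above (both approximate, and, as noted, the comparison itself is imperfect):

```python
# Ratio implied by the IIHS-style per-mile figures quoted in this comment.
miles_per_death_all_cars = 428_000_000
miles_per_death_tesla    = 133_000_000

ratio = miles_per_death_all_cars / miles_per_death_tesla
print(f"Tesla fatal-crash rate is ~{ratio:.1f}x the all-passenger-car rate")  # ~3.2x
```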


Technically, Tesla cars have an infinitely higher death rate than the cars with zero fatalities.

Since Tesla is comparing their crash rates against motorcycles, the 100x number isn't so absurd.


I bet Teslas are still safer without Autosteer, because the average car includes a lot of old cars and more young drivers.


What is the catchment range for new (inexperienced) drivers? 18-25? There are a lot of people who have been driving for a long time who are not good at driving at all and lack any kind of self-awareness. For example, following the car in front too closely.


Young people are also less risk averse.


I would also think that the average Tesla owner is less likely to experience constant stress, long commute hours, tiredness and possible mental health issues that can contribute to car accidents.


Drivers not paying attention in non-autopilot vehicles is an increasing problem with the prevalence of smart phones. In places where it's illegal to text and drive I don't think you're going to get out of a ticket by telling the police officer "it's okay because Tesla was driving".

I do believe "it's still safer than the average car" because there's not a big tank of explosive - I'm most curious to hear what caused such a massive fire in this crash. But you're talking about the autopilot and it's statistically incorrect to say it's safer than the average car. It's merely safer than a driver alone - this should be no surprise as you'll find that cars with backup cameras and alarms don't hit have as many accidents while in reverse as cars without them.


I do believe "it's still safer than the average car" because there's not a big tank of explosive - I'm most curious to hear what caused such a massive fire in this crash.

How does a Tesla get such good range? There's still a lot of energy in those batteries, and a damaged battery starts a fire far more easily than leaking fuel does: the former can self-ignite just from dissipating its own energy into an internal short, while the latter needs an ignition source. In addition, the batteries are under the entire vehicle and thus more likely to be damaged; a fuel tank covers a smaller area and is concentrated in one place.

It is extremely rare for fuel tanks to explode in a crash.


I see how that is intuitively true, but it isn't really true in practice. Post-crash fires with ICEs are unusual, but not extremely rare. Similarly, post-repair car fires (leaky fuel lines) are not as unusual as most people think.

So far, experience with Teslas seems to show a lower-than-average risk of fires, though the breadth and nature of the battery leads to challenges in managing the fire itself.

All cases that I'm aware of proceeded at a slow enough pace to allow evacuation of the vehicle.


It is the opposite: Li-ion/LiPo batteries are inherently dangerous and can cause a chemical fire for various reasons (overcharging, undercharging, puncture, high temperature, etc.). These things are monitored/controlled in any modern application in normal usage, but in a crash you have to remember that you are literally a few inches away from a massive store of chemical energy. The fire burns very hot, the smoke is toxic, and, assuming somebody gets to you in time, it can only be extinguished reliably using special dry powder fire extinguishers (Class D)...


Fire needs oxygen, fuel and an ignition source. A battery provides the latter two in very close proximity to each other.

How often does a damaged and leaking fuel tank start a fire?

How often does a lithium battery that has been structurally compromised start a fire?

Fire safety is a major negative for lithium batteries. That much electrical energy in that form factor can only be so safe.


That's true, but it goes against all good sense: cars don't allow you to play a video while driving, yet they allow you the false sense of security that somebody else is driving, when in fact you have to pay constant attention?


This morning I saw a minivan drifting all over the road. As we passed him, my passenger noticed that he had CNN playing on a phone attached by suction cup to the middle of the windshield.


That doesn't sound like something that has been extensively studied -- it's strange to me how Tesla keeps citing [0] miles driven by "vehicles equipped with Autopilot hardware", as if it couldn't estimate the subset of miles in which Autopilot was activated -- and it seems like something very hard to measure anyway. How can Tesla or the driver know whether an accident was bound to happen if the accident was prevented?

However, I would think that testing Autopilot alone seems impractical. It's been asserted that AP has no ability to react to, or even detect, stationary objects in front of it. Can't we assume that in all those cases, non-intervention by the driver would result in a crash?

[0] https://www.tesla.com/blog/update-last-week%E2%80%99s-accide...


A lot of human-driven car accident victims have done nothing wrong at all.

Almost every driver thinks they're better than average.

Even when it's a stupid person dying from their stupidity, it's still a tragedy.

I really think data-driven analysis is the way to go. If we can get US car fatalities from 30000 a year to 29000 a year by adopting self-driving cars, that's 1000 lives saved a year.

Agree with your point #3. If Tesla autopilot only works in some conditions, its numbers are only comparable to human drivers in these same conditions.


>I really think data-driven analysis is the way to go. If we can get US car fatalities from 30000 a year to 29000 a year by adopting self-driving cars, that's 1000 lives saved a year.

What this ignores is that self-driving cars will by and large massively reduce the number of 'stupid' drivers dying (the ones who are texting and driving, drinking and driving, or just simply bad drivers) but may cause the death of more 'good' drivers/innocent pedestrians.

So the total number could go down, but the people who are dying instead didn't necessarily do anything deserving of an accident or death.

I say this as someone who believes self-driving cars will eventually take over and that we need to pass laws allowing a certain percentage of deaths (so that one case of the software being at fault doesn't cause a company to go under), but undeserved deaths are something people will likely have to deal with somewhere down the line with self-driving cars. At the very least, since they're run by software, they should never make the same mistake twice, while with humans you see the same deadly mistakes being made every day.


OK, but if saving 1000 lives a year required as a side effect that you personally be among the fatalities, would that be OK with you? I hope not. Think of this as a technical corner case; the question is the soundness of the analysis (for example, the distribution of deaths and what that means for safety) and not letting facile logic get in the way of that work.


Let’s imagine that auto-driving tech saves 2001 gross lives per year and kills 1001 people who wouldn’t have died in an all human driving world, for a net of 1000 lives saved.

I think that’s a win, even if I now have an even statistical chance to be in the 1001 and no chance to be in the 2001.

Requiring that I be in the 1001 is not OK, no more than requiring that I donate all my organs tomorrow. Allowing that I might be in the 1001 is OK, just as registering for organ donation is.


>> Let’s imagine that auto-driving tech saves 2001 gross lives per year and kills 1001 people who wouldn’t have died in an all human driving world, for a net of 1000 lives saved.

You're saying that auto-driving would save the lives of 1000 people who would have died without it, by causing the death of another 1001 that wouldn't have died if it wasn't for auto-driving?

So you're basically exchanging the lives of the 1001 for those of the 1000? That looks a lot less like a no-brainer than your comment makes it sound.

Not to mention, the 1001 people who wouldn't have died if it wasn't for auto-driving would most probably prefer to not have to die. How is it that their opinion doesn't matter?


It saves 2001 (not 1000 as you said, or perhaps said differently, I'm exchanging the lives of 1001 to preserve the lives of the 2001).

It kills 1001.

Net lives saved = 1000.

> How is it that their opinion doesn't matter?

The 2001 who are saved by auto-driving were also most probably not interested in dying. How is it that their opinion doesn't matter?

It's a trolley problem [1]. Individual people have been killed by seatbelts, yet you probably think it's OK that we have seatbelts because many more people have been saved and/or had their injuries reduced. Individual people have been killed by airbags, yet you probably think it's OK that we have them. Many people have died from obesity-related causes because cars shifted walkers and bikers into car seats, yet you probably think it's OK that we have cars.

[1] - https://en.wikipedia.org/wiki/Trolley_problem


>> It kills 1001.
>> Net lives saved = 1000.

Right. And net lives lost = 1001. So, we've killed 1001 people to let another 1000 live. We exchanged their lives.

>> The 2001 who are saved by auto-driving were also most probably not interested in dying. How is it that their opinion doesn't matter?

Of course it matters, but they were dying already, until we intervened and killed another 1001 people with our self-driving technology.

Besides, some of the people who would be dying without self-driving technology had control of their destiny, much unlike in the (btw, very theoretical) trolley problem. Some of them probably made mistakes that cost their lives. Some of them were obviously the victims of others' mistakes. But the people killed because of self-driving cars were all victims of self-driving cars' mistakes (they were never the driver).

>> Individual people have been killed by airbags, yet you probably think it's OK that we have them.

An airbag or a seatbelt can't drive out onto the road and run someone over. The class of accident that airbags cause is the same kind of accident you get when you fall off a ladder, etc. But the kind of accident that auto-cars cause is one where some intelligent agent takes an action and that action causes someone else harm. An airbag is not an intelligent agent, and neither is a seatbelt, but an AI car is.


> And net lives lost = 1001. So, we've killed 1001 people to let another 1000 live. We exchanged their lives.

No. Net lives lost = -1000. Gross lives lost = 1001.

We killed 1001 people to let 2001 live.
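Spelling out the hypothetical ledger (these are the made-up numbers from upthread, not real statistics):

```python
# Gross vs. net, using the hypothetical figures from this sub-thread.
saved_by_autodriving  = 2001   # would have died in an all-human-driving world
killed_by_autodriving = 1001   # would not have died in an all-human-driving world

gross_lives_lost = killed_by_autodriving                          # 1001
net_lives_saved  = saved_by_autodriving - killed_by_autodriving   # 1000

print(f"gross lives lost: {gross_lives_lost}, net lives saved: {net_lives_saved}")
```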


I guess we're not really communicating very well.

Let's change tack slightly. Say that we had a vaccine for a deadly disease and 1 million people were vaccinated with it. And let's say that out of that 1 million people, 1000 died as a side effect of the vaccine, while 2001 people avoided certain death (and let's say that we are in a position to know that with absolute certainty).

Do you think such a vaccine would be considered successful?

I guess I should clarify that when I say "considered successful" I mean: a) by the general population and b) by the medical profession.


That's not really a very good argument. If you change parameters in a complex system then the odds are that you are going to find pathologies in new places.

People claim seatbelts have caused lots of deaths, and I'm sure at least a percentage of these claims are fair ([0]). I still think it's safer to drive a car with a seatbelt rather than without.

[0] http://www.againstseatbeltcompulsion.org/victims.htm


The downside of a laissez-faire policy towards self-driving cars could fall on anyone, but so does the upside. Again, a lot of human driven car crash victims have done nothing wrong and were victimized pretty much at random. Run over, rear-ended, T-crashed at an intersection, and could not reasonably have done anything to prevent it.


>> Again, a lot of human driven car crash victims have done nothing wrong and were victimized pretty much at random.

But all self-driving car victims (will) have done nothing wrong. Whether they were riding in the car that killed them or not, they were not in control of it, so they're not responsible for the decision that led to their deaths.

Unless the decision to go for a walk or a bike ride, or to ride in a car, makes you responsible for dying in a car accident?


So you'd rather have 1000 other people die than die yourself? What kind of logical exercise is this?


There is a simple solution for this. If you think you are an above average driver, don't use autopilot.


Until self-driving cars start crashing into you.


As long as your car doesn't look like a broken road barrier you should be good.


US National Safety Council estimated that there were 40,100 fatal car crashes in 2017 https://www.usatoday.com/story/money/cars/2018/02/15/nationa...


Designing for safety means that you take into account human behavior at every level and engineer the product to avoid those mistakes.

We already know that there is a zone between driver assistance and automatic driving where people just can't keep their attention level where it needs to be. Driving is an activity that maintains attention. People can't watch the road, hands on the wheel, while nothing happens and keep their level of attention up when the car is driving.

The way I see it, the biggest safety sin from Tesla is shipping an experimental beta feature that sits exactly in that known weak spot for humans. Adding a potentially dangerous experimental feature, warning about it, then washing your hands of it is not good safety engineering.

The news story points out how other car manufacturers have cameras that watch the driver for signs of inattention. A driving assistant should watch both the driver and the road. You can't have a driving assistant that can be used as an autopilot.


I agree with most of your points; however, I'm not convinced by your problem number 2:

>you should compare the autopilot against a rested, focused driver who drives slowly and with a lot of care. Otherwise the statistics don't account for the fact that when you drive, you are in control and can choose your level of safety. A diligent driver who crashes because of immature software is a terrible outcome.

I think you overestimate the rationality of human beings. I commute to work by motorcycle every day and I've noticed that I tend to ride more dangerously (faster, "dirtier" passes etc...) when I'm tired, which rationally is stupid. I know that but I still instinctively do it, probably because I'm in a rush to get home or because I feel like the adrenaline keeps me awake.

This is an advantage of autonomous vehicles, they can be reasonable when I'm not. I expect that few drivers (and especially everyday commuters like me) constantly drive "slowly and with a lot of care". A good enough autonomous vehicle would.


How can anyone take a company seriously that sells features such as "Bio-Weapon Defense Mode"? It almost sounds like snake oil, not far from something like the ADE 651 [1].

[1] https://en.wikipedia.org/wiki/ADE_651


"Not only did the vehicle system completely scrub the cabin air, but in the ensuing minutes, it began to vacuum the air outside the car as well, reducing PM2.5 levels by 40%. In other words, Bioweapon Defense Mode is not a marketing statement, it is real. You can literally survive a military grade bio attack by sitting in your car."

https://www.tesla.com/blog/putting-tesla-hepa-filter-and-bio...


Here is my reply from the discussion 2 years ago.

https://news.ycombinator.com/item?id=11617945

I think they are not testing with small enough particles. In the article, they test with PM2.5 particles, which would be around 2.5 micrometers. However, if you look at the table on page 11 of

http://multimedia.3m.com/mws/media/409903O/respiratory-prote...

Potential bio weapons such as smallpox, anthrax, influenza and the hemorrhagic viruses are far smaller than 2.5 microns.

Also, there are probably issues with the sensitivity of the detection equipment. If you look at the table on page 8 of

https://www.ll.mit.edu/publications/journal/pdf/vol12_no1/12...

And at the table at

http://www.siumed.edu/medicine/id/current_issues/bioTable2.p...

You will see that some of the biological agents can cause infection with as few as 10 particles. I doubt that the Tesla equipment could detect a concentration of 10 particles of these sizes.

This article is basically the biological equivalent of the "I can't break my own crypto" article.

>Bioweapon Defense Mode is not a marketing statement, it is real.

is false. Extraordinary claims require extraordinary evidence, and the evidence of Bioweapon Defense Mode working is entirely lacking.


This claim is still wrong.

HEPA filters capture particulates. PM2.5 means particles 2.5 micrometers or smaller in diameter. A good 0.2-0.3 micrometer HEPA filter is fine enough to catch bacteria like anthrax. Smallpox and influenza viruses are smaller. You need a carbon absorber to be safe.


> a military grade bio attack

This is like thinking you will survive a storm in the middle of the ocean because you have a life jacket on you.


"extreme levels of pollution" is not the same as a bio-chemical weapon.


Can you expand on this and provide evidence to support your claim? I would imagine Tesla would have thoroughly vetted this statement; however, I'm curious to hear how bio-chemical weapons differ from extreme pollution (they do also mention viruses).


Had to look it up [1]

>"We’re trying to be a leader in apocalyptic defense scenarios," Musk continued.

Is this guy serious?

[1] https://www.theverge.com/2015/9/30/9421719/tesla-model-x-bio...


Scenarios, indeed. Hollywood-grade threat models - riveting yet improbable. As opposed to such mundane concerns as "not driving into massive stationary objects".


While the marketing fluff might have the tone of a self-driving car, I doubt the legal material for a Tesla Model S, as it relates to 'Autopilot', has such language. It's advanced steering assist and cruise control; it's not an entirely autonomous car, but it's marketed as one.

Shipping functionality that can _most_ of the time drive itself without human intervention, and occasionally drives itself into a highway divider, seems like a callous disregard for human life and for how such functionality will actually be used.

Sure, everything is avoidable, if there's some expectation it needs to be avoided.

The whole point of an autopilot is to avoid focusing on the road all of the time. So if you set up circumstances under which humans perceive the functionality (Autopilot) as behaving as expected most of the time, it's highly likely they'll treat it as such and succumb to a false sense of security.

My point: when a feature is life-threatening, your marketing fluff shouldn't deviate significantly from your legal language.


> you should compare the autopilot against a rested, focused driver who drives slowly and with a lot of care

You raise an interesting point: accidents are not evenly distributed across a driver's day, or even across the population. Your likelihood of having a crash with injuries is highly correlated with whether or not you've had one before. A substantial number involve alcohol, consumed by drivers previously cited for DUIs.

We keep using average crash statistics for humans as a baseline, but that may be misleading. Some drivers may actually be increasing their risk by moving to first-gen self-driving technology, even while others reduce theirs.
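A toy illustration of that concern, with entirely hypothetical rates: if risk is concentrated in a small subgroup, beating the fleet average does not mean beating a careful driver's own baseline.

```python
# Why a fleet-average baseline can mislead when crash risk is concentrated
# in a small group of drivers. All rates below are hypothetical.

careful_share, careful_rate = 0.8, 0.5   # crashes per million miles (hypothetical)
risky_share,   risky_rate   = 0.2, 5.0   # DUI / repeat-crash group (hypothetical)

fleet_average  = careful_share * careful_rate + risky_share * risky_rate   # 1.4
autopilot_rate = 1.0                                                       # hypothetical

print(f"fleet average:    {fleet_average:.1f} crashes per million miles")
print(f"autopilot (hyp.): {autopilot_rate:.1f}  -> better than the average")
print(f"careful drivers:  {careful_rate:.1f}  -> such a driver would be worse off")
```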

On the other hand, we do face a real Hindenburg threat here. Zeppelins flew thousands of miles safely, and even on that disaster, many survived. But all people could think of when boarding an airship after that was Herbert Morrison's voice and flames.

I have already heard people who don't work in technology mumbling about how they think computers are far more dangerous than humans (not because of your nuanced distinction, but simply ignoring, or unaware of, any statistics).

I worry we're only a few high profile accidents away from total bans on driverless cars, at least in some states. Especially if every one of these is going to make great press, while almost no pure human accidents will. The availability heuristic alone will confuse people.


> 1. Ethical. One thing is that you do a stupid thing and die. Another is that a technology fails in trivial circumstances that in theory are avoidable. A doctor can make also errors, but a medical device is required to be very safe and reliable in what it does.

I'm not sure I follow you here. Are you saying that because humans are known to be fallible, but technology can be made nigh-infallible, that technology should be held to a higher ethical standard?

Those two statements don't connect for me.

I suppose that's partly because I am an automation engineer, and I deal a lot with inspecting operator-assembled and machine-assembled parts. If the machine builds parts that also pass the requirements, it's good.

It's nice if it's faster or more consistent, and sure we can build machines that produce parts with unnecessarily tight tolerances, but not meeting those possibilities doesn't feel like an ethical failing to me.


> Are you saying that because humans are known to be fallible, but technology can be made nigh-infallible, that technology should be held to a higher ethical standard?

Yes, not infallible, but I believe that to put our lives in the hands of machines, the technology must be at least on par with the best drivers. Being better than average, but still more fallible than a good driver, is IMHO not a good enough standard to make it ethically OK to sell self-driving cars, even if you get 5% fewer deaths per year compared to everybody driving while, like, writing text messages on their phones. If instead the machine is even a bit better than a high-standard driver who drives with care, at that point it makes sense, because then you no longer care about the distribution of deaths based on behavior.


If a survey were conducted in almost any part of the world, I'd guess that most people (including me) would prefer to be hit by a human rather than an autonomous car. I'm not sure why this is, but having someone to blame, and the fact that being the victim of an emotionless, lifeless machine is subjectively way more horrifying than being the victim of a person, are some reasons I can think of.


I'm in central Europe, switched from a train to a car. Reduced commute time, increased comfort and reliability. Costs several times more though.


What about the fact that you cannot do anything useful other than listen to podcasts / music while driving? Btw, your point sounds to me like public transport just needs to get better, not that moving towards this model is a bad idea in general.


> What about the fact that you cannot do anything useful other than listen to podcasts / music while driving?

To be fair, I was rarely doing anything useful on my 40 min train commute either :) Mostly reading Hacked News on my phone. Now I'm at least looking into the distance, taking some strain off my eyes.

I totally agree that public transport has to get better, it's just that there always has to be a mixture of transportation options.


> Reduced commute time, increased comfort and reliability.

Could the reason for these be that many other people use public transport instead, and thus the roads are much less busy?


I'm sure that's a big contributor. I myself relied solely on public transport for the years we lived close to the subway. Now that we've moved outside of the city, driving makes more sense.


What makes the "self driving" functionality that all the other brands are marketing different, though? Is it only that Tesla was first? Volvo literally calls their technology Autopilot too.


Saying “When you drive you are in control” is fine, but you’re not always the only person impacted when you crash.

As we have no control over how others drive, statistics are more relevant.

As a pedestrian, I care about cars being, on average, less likely to kill me. If it means I'm safer, I would rather the driver wasn't in control of their own safety.

The ethical solution is the one with the least overall harm.

Of course being safer overall while taking control from the driver is unlikely to drive sales.


> Of wrong comparisons: you should check the autopilot against a rested and concentrated driver that drives slowly and with a lot of care. Since in the statistics otherwise you do not account for the fact that when you drive, you are in control, and you can decide your level of safety. Such a diligent driver that goes to crash because of an immature software is terrible.

Do these car companies test their software by just leaving it on all the time in a passive mode and measuring the deviance from the human driver?

I'd think that, at least in the Mountain View case, you'd see a higher incidence of lane-following deviations at this spot, and it would warrant an investigation.

Something like this isn't easy but for a company as ambitious as Tesla it doesn't sound unreasonable. Such a crash with such an easy repro should have been known about a long long time ago.


4. These statistics must be conditioned. Not all drivers are the same. Some are careful, some are not. So in fact, the careful drivers would make things worse for themselves by using autopilot.


> A product that kinda drives your car but sometimes fails, and requires you to pay attention

So, cruise control? If people got confused and thought that cruise control was more than it really was in the 80s, or whenever it came out, what would we have done?


That is a long description of a problem that is much deeper than just marketing, IMO. The biggest issue that I see is that the AutoPilot system does not have particularly strong alertness monitoring.


Also, it's my belief that the "miles driven" statistic they always point at (wrt safety) should be reset with each software release.


I have now talked with two people who have autopilot in their Model S's, and both said the problem with autopilot is that it is "too good". Specifically, it works completely correctly like 80 - 90% of the time and, as a result, if you use it a lot, you start to expect it to be working and not fail. If it then beeps to tell you to take over, you have to mentally get back into situational awareness of the road and then decide what to do about it. If that lag time is longer than the time the car has given you to respond, you would likely crash.

Makes me wonder how this gets resolved without jumping through the gap to 'perfect' autopilot.
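
A rough back-of-the-envelope (assuming 70 mph highway speed, my own assumption) of how far the car travels during that re-engagement lag:

    # Rough arithmetic only: distance covered while the driver regains situational awareness.
    speed_mph = 70                                # assumed highway speed
    speed_m_per_s = speed_mph * 1609.34 / 3600    # ~31.3 m/s

    for lag_s in (1, 2, 3, 5):
        print(f"{lag_s} s of takeover lag at {speed_mph} mph = {speed_m_per_s * lag_s:.0f} m travelled")

Three seconds of lag is roughly a football field of travel, which is easily enough to reach a divider the system only just flagged.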


Relevant quote from an article about the Air France 447 crash:

> To put it briefly, automation has made it more and more unlikely that ordinary airline pilots will ever have to face a raw crisis in flight—but also more and more unlikely that they will be able to cope with such a crisis if one arises.

https://www.vanityfair.com/news/business/2014/10/air-france-... (linked to from another HN post today: https://news.ycombinator.com/item?id=16757343 )


Airline pilot training is designed to handle this, and they also have a co-pilot who keeps them from being distracted and is able to take over if they are.

Tesla is just handing it out to anyone who can afford to buy a Model S/X (and now Model 3) with the absolute minimum of warnings that they can get away with.


I would not be surprised if Musk coldly responded that this is a known problem with no solution, but autopilot still lowers the death rate on average. In other words, for the proponents it's a trade-off.

And there's also the idea that an automatic driver is easier to improve over time (skewing the trade-off better and better) than human drivers. Which of course will snowball as reliance on automation keeps making drivers worse rather than better, and at some point you're past the point of no return.

Whether that's right, the data must show.


A high rate of road deaths isn't a fait accompli. Musk would have us believe that technology is the only answer.

The UK has the 2nd-lowest rate of road deaths in the world (after Sweden).

The roads in the UK are not intrinsically safe: they are very narrow in both urban and rural areas, which means there are more hazards and less time to avoid them.

However, the UK has a strict driver education programme. It is not easy to pass the driving test, with some people failing multiple times. It means that people only get a license when they are ready for it. Drink-driving will also get you a prison sentence and a driving ban.


Just a note. Switzerland ranks better than the UK. By inhabitants: Switzerland (2.6), Sweden (2.8) and UK (2.9). By motor vehicles: Switzerland (3.6), Finland (4.4), Sweden (4.7) and UK (5.1). By number of kilometres driven: Sweden (3.5), Switzerland (3.6) and UK (3.6).

I'd also note that most European countries are hot on the heels of the UK, Sweden and Switzerland by the above measures. By comparison, the US numbers are 10.6, 12.9 and 7.1, respectively. Most European countries are well below those numbers.

Particularly in Western European and Nordic countries, the driving tests are very strict. Even for all the stereotypes, France's numbers of 5.1, 7.6 and 5.8 are quite good, and they are moving in the right direction.

Sources:

* http://www.who.int/violence_injury_prevention/road_safety_st...

* https://www.oecd-ilibrary.org/transport/road-safety-annual-r...

Notes:

I use death rate, not incidents/accidents rate.

I ignored "smaller" countries for the above listing, such as San Marino and Kiribati.

All numbers are from 2015, and they are also presented in the Wikipedia article:

https://en.wikipedia.org/wiki/List_of_countries_by_traffic-r...


As someone that has been caught speeding, it's also worth mentioning that one of the big reasons why the UK has improved its road safety statistics is a reasonably new initiative where you get an option on your first offence to either take the points on your license or to attend a safety workshop.

IIRC, the workshop was about three hours, but it was surprisingly useful. The instructors treated you like adults and not children or criminals, and they gave fairly useful tips on driving and looking out for things like lights suddenly changing, ensuring you are in the right gear, how you're supposed to react if an emergency vehicle wants you to go forward when you're by a set of traffic lights with a camera, etc.

However, on the drink driving front, given the news with Ant from Ant and Dec I think it's safe to assume that not everyone gets a prison sentence for drink driving.


Out of curiosity, how are you supposed to react if an emergency vehicle wants you to go forward when you're by a set of traffic lights with a camera?

I would think to look carefully in all directions and, if visibility allows, pass the red light, then contest the fine with an "emergency vehicle passing through" defence. But what is the official position?


I am not sure the UK has traffic light cameras, but they do in some places in Germany. And the official position in Germany, and I think in most of Europe, is that emergency vehicle decisions trump everything else. If a police officer directs you to do something that would break the law, then you should do it, as a police officer's decision trumps regular traffic laws.

At least, that's how it works in Germany and Denmark. But I don't think Denmark has traffic light cameras. I've never seen them anyway. But I've seen them in Germany.

Of course, this is assuming you don't actually cross the entire junction, but rather just move out into the junction so the emergency vehicle can get through.


It's illegal to cross through a red light, even if there is an emergency vehicle behind you that wants to get through.


Yep, that's what we were told. It doesn't matter if you're doing the right thing by getting out of someone's way, you'll get a fine/points if you cross the line.

Although, if you are at a set of traffic lights and an emergency vehicle tries to get you to cross the line, what you should do is write down the registration plate and contact the relevant service to report the driver. The instructor on this course was ex-police, and according to him police, paramedics, and firefighters in the UK are taught to not do this under any circumstance, and if they are caught trying to persuade someone to cross a red traffic light then they can get in a lot of trouble.


AFAIK:

The only case that trumps a red traffic light is when given a signal by an authorised person (e.g., police officers, traffic officers, etc).

I think under a literal interpretation of the law you are obliged to commit an offence if you are beckoned on across a stop line at a red traffic light; you can either refuse the instruction to be beckoned on (an offence) or you can cross the stop line (an offence). That said, in practice the beckoning tends to take precedence over the lights.

Basically the only time you see any police officer instructing traffic from a vehicle is when on a motorbike, typically when they're part of an escort.


That's the way I think it works - I had to do that at a set of lights I thought had a camera (turns out it mustn't have been on, as nothing came of it), but I quickly weighed it up in my head: "several hours of BS arguing it for me" vs "someone might die".

Police cars will have dash cams, not sure on ambulances or fire engines.

That being said, scariest thing I did on the road was going through a red light to let an ambulance through at a motorway off-ramp. You better hope everyone else has heard those sirens.


Drink driving rarely attracts a prison sentence. In the vast majority of cases it attracts a driving ban along with a significant fine. The sentencing guidelines have imprisonment as an option for blowing over 120 where the limit is 35 (in England and Wales, it is lower in Scotland now).

The UK went through a major cultural change relating to drink driving several decades ago, it isn't viewed as acceptable, the police get tip offs on a regular basis.

https://www.sentencingcouncil.org.uk/offences/item/excess-al...


It's not too common to head to prison for a single DD incident. It's also worth noting that England&Wales and Scotland have different drink driving laws.

In Scotland, the BAC limit is lower than in England and the punishment is a 12 month driving ban and fine for being over the limit - no grey areas or points or getting away with it.

In England a fine and penalty points are common, repeat offenders can be suspended and jailed. The severity of the punishment can often depend on how far over the limit you are and other factors.


> However, on the drink driving front, given the news with Ant from Ant and Dec I think it's safe to assume that not everyone gets a prison sentence for drink driving.

Has he been sentenced?


Nope, I think his court case has been moved back. The court wouldn't say why, but it's believed to be because they want him to ensure he gets the most out of his time back in rehab.


Other innovations include an off-road "hazard perception test" that I'd be pleasantly surprised if derivatives of self-driving software could reliably pass.


> The roads in the UK are not intrinsically safe, they are very narrow both in urban and rural areas which means there are more hazards and less time to avoid them.

Paradoxically, that actually means they are safer. People drive slower on narrower roads, which means that accidents stay within the safe energy envelope that modern cars can absorb.

Very, very few people will ever die as a passenger or driver in a car accident at 25 mph / 40 kph. At 65mph / 100kph, the story is completely different.


You say that but people will happily drive at 50+ down a narrow country road. I think the "narrow = slower" only works for a limited period of time before people get normalised to it.


Had to thread a van through a temporary concrete width restriction the other day - when it's that narrow, even the Uber behind me wasn't giving me grief for going that slowly!

The country roads one has always dumbfounded me though - why some of those have national speed limits I will never know.


As far as I'm aware, they're national speed limits because they don't have the resources to work out a limit, or to police them. I learnt on country roads and my instructor was very clear that although I could go at 60mph, I should drive to the conditions of the road.


Growing up driving on country roads in the UK you learn some tricks (dumb tricks you shouldn't do). One example is that at night time you can take corners more quickly by driving on the wrong side of the road. If you can't see another car's headlights, then there are none coming.

The thought of doing this now scares me; I don't do it anymore, and I suggest that no one else does either. But I know many people still drive like this.


> The country roads one has always dumbfounded me though - why some of those have national speed limits I will never know.

Why not? Even on roads with lower speed limits you're required to drive at a speed appropriate for the road, the conditions, and your vehicle; the speed limit merely sets an upper bound, and it's not really relevant whether it's achievable. Just look at the Isle of Man, where there is no national speed limit: most roads outside of towns have no speed limit, regardless of whether they're a narrow single-lane road or one of the largest roads on the island.


If you set a limit, some people will drive it regardless. Even if you're supposed to drive to the road and conditions, there are enough utter morons out there who'll take a blind narrow corner at 60.


The UK also has a lot of roundabouts for road junctions. It means fewer 'run the red light' collisions.


> It is not easy to pass the driving test, with some people failing multiple times. It means that people only get a license when they are ready for it.

And that's the way it should be. The driving test may not be easy, but it's not any more difficult than driving is. People should be held to a high standard when controlling high speed hunks of metal.


At the moment Tesla haven't shown that auto-pilot does lower the death rate. They've only released statistics about auto-pilot enabled cars rather than statistics for when auto-pilot was in control.

Any statistics released by Tesla should be compared against similar statistics from say modern Audis with lane-assist and collision detection.


Also, the cohort that purchases Tesla vehicles may be a lower-risk group of drivers than average.

This could happen because Tesla vehicles are more expensive than comparable conventional vehicles, less attractive to those with risky lifestyles, inconvenient for people who don't have regular driving patterns that allow charging to be planned, or more attractive to older consumers who wish to signal different markers of status than the young go-fast crowd.

You'd possibly want to compare versus non-auto-pilot Tesla drivers on the same roads in similar conditions, but the problem remains that the situations where auto-pilot is engaged may be different from those when the driver maintains control.

In sum, it's hard to mitigate the potential confounding variables.


> Also, the cohort that purchases Tesla vehicles may be a lower-risk group of drivers than average.

Yes - that's why I suggest comparing it to modern Audi drivers. Basically any of the BMW/Mercedes/Audi drivers are where Tesla is getting most of its customers. Those companies all have similar albeit less extensive safety systems.


That's because all autopilot-enabled cars have the safety features enabled by default: automatic emergency braking, side collision avoidance, lane detection, etc.


Those systems are the great ones at the moment. But almost all modern cars have those.

They're brilliant because they augment humans by leaving humans to do all the driving and staying focused on that but then taking over when the driver gets distracted and is about to hit something.

Auto-pilot does it the opposite way round. It does the driving, but not as well as the human, and the human can't stay as alert as the car, so isn't ready to take over.


Actually, despite your skepticism, Tesla's PR spin has already beaten you.

"but autopilot still lowers the death rate on average"

That's not what they said; they said the death rate was lower than the average. And yet you can't help hearing that it lowered the death rate. I think it's very likely that turning on autopilot massively increases the rate of death for Tesla drivers, but they've managed to deflect from that so skilfully.


I don't get it. These two are the same thing.


The comment above you supposes that people who drive Teslas have fewer accidents on average, even without autopilot. Saying that autopilot "lowers the average" would mean that autopilot lowers the number of accidents for Tesla drivers, while "lower than average" could mean that while a Tesla with autopilot is safer than the average car, it is less safe than a Tesla without autopilot. Pretty complicated.
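
With made-up numbers, both readings can be true at once, which is what makes the wording slippery:

    # Made-up rates (deaths per 100M miles), purely to illustrate the wording difference.
    general_fleet        = 1.2   # all cars, all drivers
    tesla_no_autopilot   = 0.6   # hypothetical: a safer-than-average cohort in a new car
    tesla_with_autopilot = 0.9   # hypothetical: worse than the same cohort without it

    print(tesla_with_autopilot < general_fleet)       # True  -> "lower than the average"
    print(tesla_with_autopilot < tesla_no_autopilot)  # False -> it did not "lower the rate" for Tesla drivers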


I don't think that's the correct answer. Flight autopilots have lowered accident rates tenfold, but every time there is a crash while on autopilot Boeing/Airbus will ground every single plane of that type and won't allow them to fly until the problem is found and fixed. If I know that my car's autopilot is statistically less likely to kill me than I am to kill myself, I would still much much rather drive myself.


If Tesla were to respond in a similar vein - that is turn off auto-pilot each time there was a fatal crash until the cause was fully investigated and fixed then I'd feel a lot more comfortable.

From the videos in the article it's clear that auto-pilot should be disabled when there is bright, low-angle sunshine. Tesla should be prepared to tell customers there are certain times of the day when the reliability of the software is not high enough, and turn it off.

These are all 'beta' testers after all so they shouldn't complain too much.


> Tesla should be prepared to tell customers there's certain times of the day when the reliability of the software is not high enough and turn it off.

Imho, they should detect that situation (by using cameras and/or the current time + GPS) and not allow you to switch it on. You should not give drivers the choice between safety and laziness (this assumes that auto-pilot driving, when currently feasible, IS safer than manual -- which I assumed, but read elsewhere in the thread is not yet properly demonstrated).
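
A minimal sketch of that gating logic, assuming a solar-position helper supplied by an ephemeris library, plus made-up 15-degree / 30-degree thresholds:

    def solar_elevation_azimuth(lat_deg, lon_deg, utc_time):
        """Placeholder: return the sun's (elevation_deg, azimuth_deg).
        In practice this would come from an ephemeris / astronomy library."""
        raise NotImplementedError

    def low_sun_glare_risk(lat_deg, lon_deg, utc_time, heading_deg,
                           max_elevation=15.0, max_bearing_diff=30.0):
        """True if the sun is low in the sky and roughly ahead of the car."""
        elevation, azimuth = solar_elevation_azimuth(lat_deg, lon_deg, utc_time)
        if elevation <= 0 or elevation > max_elevation:
            return False                 # night, or sun high enough not to blind a forward camera
        # smallest angular difference between the sun's bearing and the direction of travel
        diff = abs((azimuth - heading_deg + 180) % 360 - 180)
        return diff <= max_bearing_diff

    # The gate would then be: if low_sun_glare_risk(...) returns True, or the camera
    # itself reports glare, refuse to engage autopilot rather than warn after the fact.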


That's really not how it works in aviation. Aircraft manufacturers and the FAA don't automatically ground every plane when a crash occurs when autopilot was in use. They conduct an investigation first. And even if the investigation finds an autopilot fault they're likely to just issue an airworthiness directive warning pilots about the failure mode rather than immediately grounding everything.


> but autopilot still lowers the death rate on average.

I think the question of using self-driving vehicles comes down to this: do you want to be part of a statistic that you can control, or part of one that you cannot?

When we travel by train or plane, we are already part of statistics that we cannot control. But those things are also extremely reliable.

So there seems to be a threshold at which it makes sense to opt into a statistic you cannot control.

The people who are pushing SDVs seem, by some sleight of hand, to hide this aspect, and have successfully showcased raw (projected) statistics that implicitly assume a certain rate of progress with SDVs, and that SDVs will continue to progress until they reach that capability.

>And also the idea that an automatic driver is easier to improve over time (skewing the trade-off better and better) than human drivers.

But there is always a risk of catastrophic regressions [1] with each update, right?

[1] https://en.wikipedia.org/wiki/Catastrophic_interference


> be part of a statistics that you can control

well, even if you are in control with your hands on the wheel, you can still get hurt (and even die) in a car accident where you have no responsibility at all. Sure you can control your car, but you cannot control others'...

(this is even more obvious from a cyclist point of view)


Also people vastly overestimate their driving competence.


That is certainly not the norm. People do not "vastly" overestimate their driving competence.


Discussions of driving among lay people are generally useless and interminable, due largely to the supercharged Dunning-Kruger effect that driving for whatever reason produces. No matter where you are or who's talking, the speaker is always a good, careful driver, other drivers are reckless morons, and the city in which they all live has the worst drivers in the country.

Everyone everywhere says the same thing. It's information-free discourse.


I am sorry, but not having control over a lot of the things that happen in the world does not mean that you don't have some influence on whether you will be in an accident or not, if you are driving the car yourself.

With half-baked self-driving tech, you have absolutely no control.


> And also the idea that an automatic driver is easier to improve over time (skewing the trade-off better and better) than human drivers. Which of course will snowball when reliance on automation keeps making drivers worse than better, and at some point you're past the point of no return.

I really hope it does get to the point where all drivers are automatic and interconnected. Cars could cooperate to a much higher extent and traffic could potentially be much more efficient and safe.


"Hello, fellow robot car, I am also a robot car, not a wifi pineapple notatall nooosir, and all lanes ahead are blocked. But hey, there's a shortcut, just turn sharp to the right. Yes, your map says you're on a bridge, but don't worry, I have just driven through there, trust me." Nothing could possibly go wrong and the idea of evil actors in such a network is absurd - right? Your idea reminds me of the 1990s Internet - cf today's, where every other node is potentially malicious.


There's no reason such a system would have to be engineered to be totally trusting. The car, in that situation, would likely be set up to say "nope, my sensors say that's not safe" and hand off to the driver or reject the input.

Use it just for additional input to err on the side of safety. If the car ahead says "hey, I'm braking at maximum" and my car's sensors show it's starting to slow down, I can apply more immediate braking power than if I'd just detected the initial slight slowdown.

Or, "I'm a car, don't hit me!" pings might help where my car thinks it's a radar artifact, like the Tesla crash last year where it hit the semi crossing the highway.


>The car, in that situation, would likely be set up to say "nope, my sensors say that's not safe" and hand off to the driver or reject the input.

You realize you say that in a thread about a car's sensors failing to understand it was in an unsafe condition (again), right?


You realize that the comments in a thread may head off in a related but distinct direction, right?

We're discussing car-to-car mesh networks being used to provide self-driving cars with more decision-making input.


That last part is actually the scariest: blame the victim, yay. Unless they can prove they had a functional, powered, up-to-date, compatible I'm-a-car responder, it's their own damn fault for being invisible to the SDV. How about "I'm a human, don't hit me"? Does that also sound like a good idea (RIP Elaine Herzberg)?

In other words, "all other road users should accommodate my needs just to make life a bit easier for me" is a terrible idea.


I think that's a needlessly uncharitable interpretation.

Having cars communicate information to each other has the potential to be an additional safety measure. It's like adding reflectors to bikes - adding them wasn't victim blaming, it was just an additional thing that could be added to reduce accidents.


Sure, I understand that you're proposing it as an improvement, and it would even be an improvement - but using this for scapegoating will happen, as long as there are multiple parties to any accident; we have seen this in the last Uber crash ("find anything pointing anywhere but at Uber"), or in any bike crash ("yeah, the truck smashed into him at 60 MPH and spread him over two city blocks, but he's at fault for not wearing a helmet - it would have saved him!!!").


Also, what's to stop evil actors right now? There is a road near me with the bridge out. Anybody could just go remove the warning signs. Why don't they?


Obvious to casual observers, non-targetable, hard to set up/tear down quickly.


>I really hope it does get to the point were all drivers are automatic and interconnected. Cars could cooperate to a much higher extent and traffic could potentially be much more efficient and safe.

I think you can draw some conclusions from the current software industry by seeing how many defects are deployed daily. I see Tesla and Uber having the same kinds of defects as any SV startup, not as NASA; having these startups controlling all the cars on the road sounds like a terrible idea.


> all drivers are automatic and interconnected. Cars could cooperate

Say, like computers on the Internet (information highway)?

Rings a bell.


> Airline pilot training is designed to handle this

It is certainly designed to do that, but even for airline pilots there are limits to what is possible.

You cannot train a human to react within 1ms; that's just physiologically impossible. Nor can you train a human to fully comprehend a situation within x ms, where x depends on the situation.

So the autopilot would have to warn a human say 2x ms before an event that requires attention, where it can of course not yet know of the event, so that amounts to 'any time there could possibly be an event 2x ms in the projected future'. Which is probably: most of the time. Making the autopilot useless.


The other big difference is that in a plane high up in the sky and relatively far away from any others, even if the autopilot demands the human take over, there is still a lot of time to react. Many seconds to even minutes, depending on the situation.

In a car, the requirement is fractions of a second.


I don't remember the source, but I have read somewhere that it is very lightly enforced in a plane and many pilots sleep in the plane for a long time.


I think you misunderstand the GP - he is saying that even pilots, with their rigorous training, can make such disastrous mistakes, so with car autopilots it will be way worse.


AF447 really was not about automation. The plane’s controls have a crazy user interface. Most of the time you can pull on the stick as hard as you want and the plane won’t stall, but that feature was disabled due to some weird circumstances. This would be like if sometimes a car’s accelerator would cut out before you rear-ended a car in front of you but 1% of the time that feature was off. As an added bit of UI failure, once they were stalling badly enough, the stall alarm actually turned itself off, which may have made the first officer think he was doing the right thing. This was a UI failure.


And in the current climate of "all new things must be super extra safe to be used, even if it displaces much worse technology", I think machine-driven cars will have a hard time being accepted. Imagine the autopilot is 100 times safer than human drivers. Full implementation would mean about a fatality a day in the US. It does not seem this would be acceptable to the public, even though 30,000 annual human car fatalities could be avoided. Maybe we should use our education system to help heal us from the safety-first (do nothing) culture we seem to have gotten ourselves into in the US.

This is a similar situation to nuclear power. With nuclear, all your waste can be kept and stored, and nuclear has almost zero deaths per year. Contrast that with the crazy abomination that is coal burning.


Nuclear power plants, plane crashes, terrorism, school shootings, even some rare diseases (mad cow disease led to the slaughter of millions of cattle, having killed fewer than 200 people cumulatively in the UK, whereas the flu can easily kill 10,000 people in a single year just in the UK).

I don't think it has anything to do with technology and automation. People are just very bad at reasoning with probabilities. This is why lotteries are so popular. And it is fuelled by unscrupulous (and even less apt at grasping probabilities) media trying to make money out of sensationalist headlines. How many times a week am I told that x will cause the end of the world, where x is flu pandemics, Ebola, global warming, cancer as a result of eating meat or using smartphones, etc.


Funny, Nassim Taleb, a worldwide (if controversial) risk probability expert, would say that you are the one who is being very bad at reasoning with probabilities, because you are putting Ebola and terrorism in the same class of risk as falling off a ladder:

https://www.youtube.com/watch?v=9dKiLclupUM


I would agree with flu (which is a fat tail risk, eg the spanish flu), but not terrorism (these are small local events) and ebola (ebola has a very low risk of transmission, it is just that it is very deadly once transmitted).

But classifying risks is one thing. Worrying in your daily life to the point of opposing a public policy is another. An asteroid wiping out life on earth is an even fatter-tail risk, but the probability is so low that it is not worth worrying about. People have no rational reason to worry about ebola, terrorism, or plane crashes.


I think he/she meant something different: it's not about how big or small the chances are, but about how much we know about the probability distribution.

You assume you know the probability distributions, you compare them and determine that nuclear will kill less than coal burning.

Except that you might have never observed a black swan event before (eg. massive sabotage of nuclear plants).

So the probability distributions you inferred from past events might be wrong.

The case of a lottery is different as the probability distribution of outcomes can be determined with certainty.


Does it currently really reduce fatalities if you control for car cost, age group and area driven? I'm unconvinced by the current statistics.


Biggest factor to control for is drunk driving, which is ~30% of all deaths IIRC.


It's 28% in 2016 according to NHTSA [0], 10,497 deaths.

Certainly auto-pilot is worth it if the driver is drunk. I wish that all cars were fitted with alcohol measuring devices so that the cars won't start if you're over the limit.

Drink driving is still a massive problem in Belgium where I live. Although you get severely fined (thousands of euros) it's up to a judge to decide if you should have your licence taken away [1]. Typically you have to sit by the side of the road for a few hours then you're allowed to continue. In the UK it is a minimum 1 year ban.

[0]: https://www.nhtsa.gov/risky-driving/drunk-driving

[1]: http://archive.etsc.eu/documents/C.pdf
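
As a rough back-of-the-envelope using the 28% / 10,497 figures above (the ~3.2 trillion annual US vehicle-miles for 2016 is my own assumption), controlling for alcohol alone already moves the baseline a lot:

    # Back-of-the-envelope using the 28% / 10,497 figures above.
    # The ~3.2 trillion annual US vehicle-miles (2016) is my own rough assumption.
    alcohol_deaths = 10_497
    alcohol_share = 0.28
    total_deaths = alcohol_deaths / alcohol_share        # ~37,500
    vmt_miles = 3.2e12

    def per_100m_miles(deaths):
        return deaths / (vmt_miles / 1e8)

    print(f"baseline:          {per_100m_miles(total_deaths):.2f} deaths per 100M miles")  # ~1.17
    print(f"excluding alcohol: {per_100m_miles(total_deaths - alcohol_deaths):.2f}")       # ~0.84

A sober, attentive driver starts well below the headline average before you control for anything else.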


Good point. I wonder whether the ratio of drunks behind autopilot wheels is different than average. Could be lower (because richer) or higher (because richer).


There could easily be more than just a wealth differential, e.g. people who are more aware of their emissions impact might also be more aware of intoxication impact.

In any case, "statistically more safe" is a weak argument, e.g. we would be terrified of boarding a plane if they were merely statistically more safe than driving (by a small margin).


>e.g. we would be terrified of boarding a plane if they were merely statistically more safe than driving

Don't know why you're treating that as a hypothetical. Many people are terrified of flying and it is safer than driving.


What that commenter is getting at is that the term "statistically more safe" is meaningless because it can be such a wide net of meaning, e.g. a very small margin (1%) or a large margin (50%).


Currently it is probably worse than the average driver. But here is a trolley problem for you. What is the ratio of people killed who volunteer to test out an auto-drive to the number of lives saved by that early testing? 1:1, 1:10, 1:100, 1:1000, 1:1,000,000? A million people die in vehicle wrecks each year.


The problem is that it's not just the drivers who are the volunteers getting killed. When you kill pedestrians as Uber did, they did not volunteer to be killed to test your software.

Yes automation is good, but Tesla is being needlessly reckless. They can easily be much more strict with the settings for auto-pilot but they're not.


If you are talking about the whole world, a large part of that million are killed on roads where no self-driving car would be able to drive anyway. Also, "normal" drivers do not usually get killed in crashes, unless reckless driving is involved. So self-driving is probably safer than drunk driving, reckless driving, or driving on roads with extremely poor infrastructure, but compared to responsible driving? Also, why is this so black-and-white? The safest current option is technology-assisted driving, but there is not much talk about it.


Those are clearly not the only two options. Waymo appears to be far and away the leader in this space, and they've killed zero people.


Yea, most self driving miles are simulated, not done in meat space.


5 million are in meat space, too few to derive anything about safety since on average a fatal crash happens every 100M miles.

Can't find how many of these miles were driven on roads with a posted limit above 35mph
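
A quick sanity check of how little 5M fatality-free miles proves, treating fatal crashes as a Poisson process at the one-per-100M-miles rate quoted above:

    import math

    rate_per_mile = 1 / 100e6                   # roughly one fatal crash per 100M miles
    miles = 5e6
    expected = rate_per_mile * miles            # 0.05 expected fatal crashes
    p_zero = math.exp(-expected)                # Poisson probability of seeing none
    print(f"expected fatal crashes in {miles:.0e} miles: {expected:.2f}")
    print(f"P(zero fatal crashes even at human-level risk): {p_zero:.2f}")   # ~0.95

Seeing zero fatal crashes is the expected outcome about 95% of the time even if the system were no safer than a human, so the sample tells us almost nothing either way.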


I'm just throwing something at the wall here: Just start with a purely assistive package like most big manufacturers have. Let buyers opt-in to a test program (giving them, say, a 3% reduction on the list price), where all their inputs and outputs are recorded and sent to the manufacturer. Manufacturer can now assimilate that data into a model of car and road and test their fully autonomous software on that model and search for situations where the software gave much different input compared to the human driver. Check these situations in an automated way and/or manually to see whether it's a software error or improvement. IMO this approach would have had a high chance to find the bugs that caused the two recent fatalities, as I'm convinced that most human drivers would have done better there.

After years of doing this (which seems to be close to what Waymo is doing, if I understand correctly), the autonomous software should be way better than what is being pushed out now by Tesla and Uber (and probably a bunch of others).
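
The core of that comparison loop could be a sketch as simple as this (the data format, model interface, and 15-degree threshold are all made-up placeholders):

    # Shadow-mode sketch: log what the human did, ask the autonomy stack what it
    # would have done, and flag large disagreements for review.
    def find_disagreements(logged_frames, model, steering_threshold_deg=15.0):
        """logged_frames: iterable of dicts with 'sensors' and 'human_steering_deg'.
        model: anything with a predict_steering_deg(sensors) method (assumed interface)."""
        flagged = []
        for frame in logged_frames:
            predicted = model.predict_steering_deg(frame["sensors"])
            if abs(predicted - frame["human_steering_deg"]) > steering_threshold_deg:
                flagged.append((frame, predicted))   # candidates for simulation / human triage
        return flagged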


Sort of assuming Tesla is doing this already.


If they do, I'm surprised by the latest accident, tbh.


I think you could have just stopped that sentence after "does it currently really reduce fatalities?"


why? got better data than me? please share.


Considering that tesla w/ autopilot enabled is less safe than tesla w/ autopilot disabled, I'm going to go with "no, if you control for those factors it doesn't reduce fatalities".


Oh, is that the case? How long has this been known? In that case I think it should be taken off the market by the government.


Source?


Anyone driving with it for an hour will probably encounter a situation where they have to take over. Seems pretty obvious. It's a cool tech demo with bugs that can kill you.


That proves that a Tesla is dangerous if you enable the autopilot and then take a nap. It doesn't demonstrate that it's more dangerous if used as intended.


I haven't ever seen it save anyone, only introduce more problems for the inattentive driver.


and how did you reach that conclusion?


I'm not reading a position in what you've said, but there is an observation here to draw out for the crowd. It doesn't matter all that much if after controlling for those variables it is relatively less safe; in absolute terms it just needs to be as safe as the worst human drivers.

That is, we have an accepted safety standard - by definition, the worst human driver with a license. If a Tesla is safer than that, the rest is theoretically preferences and economics.

I'm not saying that the regulators or consumers will accept that logic - if airlines are any example to go by, they'll take the opportunity to push standards higher - but I think the point is interesting and important. It is easy to smother progress if we don't acknowledge that the world is a dangerous place and everything has elements of risk.


I don't agree with this position. Autopilot is a new thing, and governments need to decide whether it's allowed or not (since it involves operating dangerous equipment, the state has an interest in doing the right thing). If improved safety is not a demonstrably valid argument for allowing it, it IMO doesn't pass the bar for what should be allowed. At that point it's purely a money-grab by corporations that want the cookie (first to market with fully autonomous vehicles) without paying for it (heavy investment in testing).


What about if it could pass a driving test?


Since it would be one piece of software being used many times over, I'd be fine if it could pass a large number of driving tests with a very low failure rate, considering driving tests are quite random in how they are applied. The question, however, is which driving test. My experience in Switzerland was that the practical driving test was quite hard to pass: I had to train a lot to satisfy them completely, and they route you for one hour quite randomly around a European medieval town and its outskirts, i.e. a place that's quite hard to navigate correctly as it's not built for cars.
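
For a sense of what "a large number of driving tests" has to mean, the standard rule-of-three bound gives a rough count of failure-free, independent test routes needed to cap the failure rate at a given level (a sketch; real test routes are of course not fully independent):

    import math

    # "Rule of three": after n independent, failure-free trials you can be ~95%
    # confident the true per-trial failure rate is below 3/n.
    for target_rate in (0.01, 0.001, 0.0001):
        n = math.ceil(3 / target_rate)
        print(f"to bound the per-test failure rate below {target_rate:.2%}: ~{n:,} clean test routes")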


> Similar situation to nuclear power.

I think the issue with nuclear power is more the size of the blast radius. For example, the U.S. population within 50 miles of the Indian Point Energy Center is >17,000,000, including New York City [1]. The blast radius of a self-driving car is a few dozen people at worst. It's not evident that nuclear technology is uniformly better than solar or wind, but it's probably expected-value positive.

[1] https://en.wikipedia.org/wiki/Limerick_Generating_Station


> And in the current climate of "all new things must be super extra safe to be used, even if it displaces much worse technology", I think machine driving cars will have a hard time being accepted.

Exactly this. People will not accept being hurt in accidents involving AI, but will accept being in accidents caused by human error. There is no way this will change, and car manufacturers should also have realized this by now.

As far as I can see, the only viable solution to the "not quite an autopilot" problem that Tesla (and maybe others) have created for themselves is this: just make the car sound an alarm and slow down as soon as the driver takes his eyes off the road or his hands off the wheel. The first car that doesn't do this should be one that is so good it doesn't have a driver fallback requirement at all - which I think is two decades out.


> It does not seem this would be acceptable by the public even though 30,000 annual human car fatalities could be avoided.

The flawed assumption behind this thinking is that all accidents have "accidental" factors. Alcohol and drugs and the poor decisions of young men are _huge_ components of that statistic. You also have to consider that motorcycles make up a not insignificant portion (12%) of those numbers.

It also completely ignores the pedestrian component (16%), which is due to a huge number of factors, of which driver attentiveness is only one. So many pedestrians die, at night, on the sides of class B highways that it's silly.

EDIT: To add, Texas has more fatalities on the roadway than California. Not per capita, _total_. This is clearly a multi-component problem.


And you also have older cars with fewer safety features; not sure how it is in the US, but in other countries you can still see old cars that are missing airbags.

So in fatality statistics, the fair thing is to compare Tesla with similar cars.


That's a good point. Not just airbags. It is starting to be common to have cars that apply the brakes automatically when they anticipate a collision. I think some research has been done on designing cars to do less damage to a pedestrian in a collision; I don't know if it has led to design changes. Self-driving cars may end up being only marginally safer than other new cars.

But safety is not in my opinion the main rationale for self driving cars. Convenience and maximisation of traffic are.


> I think some research has been done on cars being designed to do less damage to a pedestrian in a collision

European safety tests rank pedestrian protection in addition to passenger protection https://www.euroncap.com/en/vehicle-safety/the-ratings-expla...

Outfitted Volvos will actually lift the hood in an impact to protect pedestrians https://support.volvocars.com/uk/cars/Pages/owners-manual.as...


>>Convenience and maximisation of traffic are

I think the convenience will mean more traffic since many who now use public transport will use their self driving car instead. So in total autopilots could be worse for the environment and require even more roads


I agree with the traffic increasing, though as we are moving toward electric cars, I'm not sure this is an environmental problem. At least not an environmental problem in big cities.


Electric cars still impose a huge environmental cost since they need to be manufactured in the first place. And the batteries need rare earths which are, well, rare.

While there is ongoing research into novel types of batteries made from more common materials, I wouldn't be surprised if the next war is about lithium instead of oil.


Agree, I am not convinced it is a net benefit for the overall environment. Plus they will consume more energy because of inefficiencies in transport and storage.

But there is still an immense environmental benefit: they will pollute in places (industrial areas, mines) that are not places where people live (big cities). So the population will benefit a lot from moving where the pollution takes place. And it's not just air. Noise pollution, dirty buildings, etc.


> Plus they will consume more energy because of inefficiencies in transport and storage.

Is that so? The larger engines in fossil power plants are more efficient than a fleet of small ICEs, but I don't know how this added efficiency compares against those inefficiencies along the way that you mentioned (and of course, a complete picture also needs to take into account the energy cost of distributing gas to cars).


If we do not push for safety, then the manufacturers will invent some bad statistics and cheap out on sensors and engineering once they pass that bad-statistics threshold.

The traffic issue won't actually be solved with this kind of self-driving car; you would need new infrastructure, like modern metros/trains.

The dream of fast-moving self-driving cars, like a swarm, I don't think is possible without a huge infrastructure change, and I think the swarm of cars would need a central point of control.


You can increase traffic capacity without material changes to infrastructure if you have no cars parked in the street (particularly in Europe, where streets are typically narrow), better traffic flow management (coordinating self-driving cars), variable speed limits, etc.


What is the minimum time you guess it would take to have a city with only self driven cars that could just drive by following lines and other specific signals?

I do not think this will happen in an existing city for at least 20 years.

I can imagine scenarios where a glitch or something else could cause tons of traffic issues in a city with only self driving cars.

I am not against self-driving cars; I don't like the fact that these startups arrived and are pushing alpha-quality stuff onto public roads. It will create a bad image for the entire field.


If all newly produced cars are self-driving cars in 10 years, I think it is reasonable to ban non self-driving cars 10 years after that. So 20 years sounds about right. It's not that long for such a fundamental change.

And yes, it will create its own problems. Like if these cars get hacked, they will cause serious damage.


So if you think the timeline is 20 years at best, then we need present self-driving cars to be able to drive on present roads with human-driven cars, pedestrians, bikes, holes in the road, badly marked roads, roads without markings, roads with a bit of snow, heavy rain, fog.

Adding some extra markings, or special electronic markers for these cars, is not a solution, and in these 20 years, since the traffic is mixed, you can't consider lifting the speed limits or changing the intersection rules, so I don't see how it helps traffic (except maybe on small streets, if you consider that people won't want to own their car and will use any random car; these random cars would have to be super clean and cheap to make people not own their own).


Or maybe it's having a population with no hard challenges in their youth, like learning to drive and surviving, that leads to people incapable of handling adulthood or developing judgement about risk.


> current climate of "all new things must be super extra safe to be used, even if it displaces much worse technology"

The "current climate" is a backlash to the cavalier "move fast and break things" externality-disregarding culture of the last 10 years or so. We should be extremely conservative when it comes to tools that can potentially maim or kill other people. Not valuing human life should be an aberration.


> Imagine the autopilot is 100 times safer than human drivers.

Is it? Can it be proved controlling for all variables?


No it is not. But might be someday. Will that be good enough? My dad would spend 100k on a car that was 10 times less safe, if it let him keep using a car. I'm sure others would as well.


The proper way to prove "it's 100 times more safe" isn't to let it cause some number of deaths and then go "welp, we tried our best but we were wrong, turns out it's less safe. Shucks". But that's exactly what Tesla, Uber, etc. all seem to be doing. "We'll compare our statistics once the death tallies are in and we'll see which is safer".

The most we have to go on for a rough approximation of safety is the nebulous and ill-defined "disengagements" in the public CA reports. From what I can tell, there's no strong algorithmic or safety analysis of these self-driving systems at all.

The climate about these things is sour because the self-driving car technology companies seem to want to spin the narrative and blame anybody but themselves for the deaths they were causing, and just praying they'll cause less of them once this tech goes global.


> the nebulous and ill-defined "disengagements" in the public CA reports.

The usual description of disengagements: "Disengage for a perception discrepancy" or "Precautionary takeover to address perception".

Except when referring to others, then its plain English: "Disengage for a recklessly behaving road user" or "Other road user behaving poorly".


For clarity on this point, "disengagement" has a specific meaning to the California DMV[0]:

> For the purposes of this section, “disengagement” means a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.

However, some self-driving car manufacturers have been testing the rules quite a lot by choosing which disengagements to report[1]. Waymo reportedly "runs a simulation" to figure out whether to include a disengagement in its report, but there's no mention of what the simulation is or how it might fail in similar ways to the technology inside the car! Thus, the numbers in the reports are likely deflated relative to the actual count of every single disengagement.

[0] https://www.dmv.ca.gov/portal/wcm/connect/d48f347b-8815-458e... [1] https://spectrum.ieee.org/cars-that-think/transportation/sel...


And even this pathetic and toothless regulation was enough to drive Uber from California for a while.

With all the games they play with the numbers, Waymo still reports 63 safety-related disengagements for a mere 352,000 miles. This doesn't sound like an acceptable level of safety.

The surprising part is that it appears that Waymo is planning to start deploying their system in 2018. How can they even consider it with this amount of disengagements?
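
For scale, turning the quoted numbers into a rate (rough arithmetic only):

    # The quoted Waymo numbers, turned into a rate.
    disengagements = 63
    miles = 352_000
    print(f"about one safety-related disengagement every {miles / disengagements:,.0f} miles")  # ~5,600

One safety-related intervention every ~5,600 miles is a very different order of magnitude from the one-fatal-crash-per-100M-miles human baseline discussed elsewhere in the thread; the events are not comparable, but it shows how much human supervision is still assumed.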


> No it is not. But might be someday. Will that be good enough?

I'm cynical enough to want to wait for it to actually be that good before bestowing the marketing goodwill on the company.

"Might be someday" doesn't really cut it for me any more.

Let's wait until we get there, then evaluate the situation.


Your dad would be happy to get a car that was 10X less safe, but the rest of us on the road will be less so.

These are public roads; driving a dangerous car puts other people at risk.


What fraction of current drivers are 10x less safe? Probably all 16-20 year olds.


Probably around 3-4x[1], but point taken.

Inexperienced drivers cannot be avoided, but many states try to mitigate the risk by limiting teenage drivers.

But my point is that whether to allow a vehicle that is not safer than a reasonable human driver should not be left to the car owner alone - there are other stakeholders whose interests must be taken into account.

[1] http://www.iihs.org/iihs/topics/t/teenagers/fatalityfacts/te...


You're arguing from a horrible stance. Autopilot isn't 100 times safer than human drivers; that is why people are concerned about safety.

It’s like if you said “I don’t get why people are concerned with every kid bringing a gun to school, if that would make the schools 100 times safer then shouldn’t we do it? Even if there’s an interim “learning period” where they are much less safe, doesn’t the end justify the means?”

I really can't fathom the mindset of people who honestly believe that the only way forward with self-driving tech is to put it on the market prematurely and kill people. In 10-15 years we'll have safe, robust systems in either case; let's not be lenient with companies who kill customers by trying to move too fast and "win the race". Arguing from statistics about deaths without taking responsibility into account is absurd. Would it be OK for a cigarette company to sell a product that eliminated cancer risks but randomly killed customers, as long as they do the math and show that the total death count decreases? Hell no.


> Imagine the autopilot is 100 times safer than human drivers. Full implementation would mean about a fatality a day in the US. It does not seem this would be acceptable by the public even though 30,000 annual human car fatalities could be avoided.

Your opinion will be irrelevant--the insurance companies get the final say. When that safety point arrives, your insurance rates will change to make manual driving unaffordable.


>your insurance rates will change to make manual driving unaffordable

I see people saying that. Why would insurance be more expensive than today--implying that manual driving becomes more dangerous once self-driving cars become common? It's already a common pattern that assistive safety (and anti-theft) systems tend to carry an insurance discount versus not having them.


Because the "manual" driver will almost always be at fault and the automated drivers will have the telemetry to prove it. For example, I recently got hit while backing out of my parking space--I'm legally at fault because of how the collision occurred. The fact that the other driver was travelling at tremendously excessive speed (a center-mass hit on a Prius that literally lifted it onto the SUV sitting next to it) wasn't provable by me--a self-driving car will not have that problem.

This means that the insurance companies with more "manual" drivers will be paying out more often and will adjust the insurance rates to compensate.


My hope is that a driver's license becomes much harder to obtain and easier to lose, like an airplane pilot's license.


Brad Templeton has some very relevant discussion of this in https://www.templetons.com/brad/robocars/levels.html

including

> There is additional controversy, it should be noted, about the proposed level 2. In level 2, the system drives, but it is not trusted, and so a human must always be watching and may need to intervene on very short notice -- sub-second -- to correct things if it makes a mistake. [...] As such, while some believe this level is a natural first step on the path to robocars, others believe it should not be attempted, and that only a car able to operate unsupervised without needing urgent attention should be sold.


I don’t see why it’s controversial. Just look at drivers now. Many are already on human autopilot: checking their phone, eating, etc. If you give them any amount of automation that still requires the human to check in, humans will almost universally check out and crashes will be prevalent. Autonomous driving simply will not work without full, unsupervised driving. I don’t understand why this is even a discussion amongst people.


Indeed. Since the SDV controversy started, I do catch myself not really driving while behind the wheel: "was I watching the traffic at all, or did I divert all my attention to the radio for several seconds? Was I too busy with the kids' fighting to watch the road just now?" And that's with me actively trying to drive the car safely, to be aware of its operating envelope, other traffic, hazards, navigation, etc.


One solution would be to make the autopilot worse within the realm of safe driving. Make it so that it continuously makes safe errors that the driver has to correct. Like steering off into an empty lane. Slowing down. Etc.

Defining safe errors might be a challenge though.


That's what most "driving assistance" does. It's limited in what it offers and it refuses to keep going unattended for long periods.

Tesla has been criticised all along for both its marketing and allowing long periods of unattended driving assistance.


Make the steering wheel vibrate and beep every few seconds and constantly switch with the driver, like Mercedes is doing with theirs.


That behavior merely guarantees that I will never buy a Mercedes.


They market their assistant as one that is not intended to drive on its own. So if you let go of the steering wheel, it magically continues to drive and will alert you if you check out for too long.


Or the car could regularly ask a question about e.g. the colour of the car behind you.


Or "Are we there yet?" ;)


As far as I can tell there is no way to resolve this that will be effective. Google spoke about this problem publicly and why they directly targeted level 4.

Train drivers are the closest thing and they have a lot of equipment that forces them to stay alert.

Personally I think that autopilot should only take over when the driver is unwell or when it thinks there will be a crash.

Even when driving full time, drivers get distracted. So when the car is doing the driving, humans will get bored and won't be able to respond in time.

Autopilot however is always on and never distracted but doesn't work in some cases.


> they have a lot of equipment that forces them to stay alert.

As does Cadillac's Super Cruise system which is not getting the respect it should -- including by GM itself.


Maybe force users to take back control at speeds above 30, in crowded places, or in complex road topologies.


According to these videos, auto-pilot should be turning itself off if it detects bright sunlight obscuring its vision.


That's good, but it's based only on the device's limitations, not on user attention or broader probabilities.


> Train drivers are the closest thing and they have a lot of equipment that forces them to stay alert.

Except they sometimes fall asleep: http://www.bbc.co.uk/news/uk-england-london-39457148


or play with their smartphone:

> http://www.spiegel.de/panorama/justiz/zugunglueck-von-bad-ai... (german only)


Yeah - that's the best you can hope for. So regular drivers will be much worse.


Uncanny valley of autonomy I guess. Google has noticed this early on, and their answer was to remove the steering wheel completely from their campus-only autonomous vehicles. Either it’s very close to 100% level 5 (you can pay no attention whatsoever and trust the car 100%), or it’s explicitly level 2 (advanced ADAS essentially), there’s nothing in between that’s not inherently dangerous.


You have to learn how to use it properly and pay attention. I use it a lot and it can drive from San Francisco to LA pretty much without stopping. But every once in a while it does mess up, and you just need to make sure you're watching and ready to take over quickly. I agree that it's good enough that people might stop paying attention, but they just need to realize that they have to hold the software's hand in these initial stages. As a matter of fact, being in the driver's seat, able to take control, makes me much more comfortable in a self-driving Tesla than in the back seat of a much more advanced Waymo self-driving car.


If you have to pay attention as much as if you were driving, then what's the point of autopilot?


Trust me, it's still a huge relief. A day and night difference. Try it on a long drive, like LA to San Francisco, and you'll see what I mean. It kind of feels like you're in the passenger seat rather than the driver's seat, but you can still grab the controls if you need to.


If it allows the driver to e.g. reply to a text message on his phone, then it's dangerous.

Either it requires the driver to occasionally take over and handle a situation on short notice, in which case the driver should have hands on the wheel and eyes forward (and the car should ensure this by alerting and slowing down immediately if the driver isn't paying attention).

Or the driver isn't required to pay attention and take over occasionally, in which case it's fine to reply to that email on the phone while driving. I see no future in which there is a middle ground between these two levels of autonomy.


Regular manual automotive controls, in practice, "allow[] the driver to...reply to a text message". Illegal or not, it's done by millions of people every day, and for the most part, they don't crash. You'd have to actually add control instability to the car to force the kind of attention you want out of drivers, and nobody would buy a car with this deliberately annoying handling.


There is a double standard here: cars with autonomous features are held to the new standard, meat drivers are held to the old standard.

Just like we will probably never accept (neither socially nor legally) autonomous drivers that aren't at least an order of magnitude safer than human drivers, we will likely continue to accept drivers not paying attention to their manual controls - but we do not accept drivers not paying enough attention to take over after their AI driver.

> nobody would buy a car with this deliberately annoying handling.

Exactly. And since this is the only way of making a reasonably safe level 3 car, this is also why many car manufacturers have actively chosen to not develop level 3 autonomous cars (because they are either not safe OR annoying - and either way it's a tough sale)


This isn't a black and white scenario. I use AP for probably 50 miles of my 60 mile round trip commute. Here's how it breaks down for me:

In the morning, I leave at 5:30am. There is constant moving traffic at 60-80mph on the first leg of my trip (6 miles). I drive manually, or with AP, and I pay full attention, 100% of this time. If I'm using AP, it's because it's at least as reliable as I am staying in lanes.

I then change interstates. I do this in manual mode almost every time. AP isn't great at dealing with changing multiple lanes quickly and tightly like traffic often requires. Once I'm on the new interstate, I get into my intended lane (second from the left), put the car in AP, and we have what you'd consider stop and go traffic for a few miles. At this point, I open the can of soda I brought. I keep a hand on the wheel, and I pay attention, but I'm more relaxed than I was earlier, when I was doing 60-80mph. At this point, the only thing that I need to do is to respond to someone jumping into my lane and cutting me off (which the AP deals with, but I have more faith in my ability to slam the brakes), or road debris, which at this speed, is not a problem that needs less than a second of response time.

There's a slow steady 40mph drive that I'm in full AP for, drinking my soda and paying attention, and then the traffic thins out, and I notice that I'm starting to lag behind cars, because my AP has a set max of 70MPH from back where the speed limit was 65mph (even though I was only going 20-40). At this point, both hands on the wheel, full attention, and I increase the AP max to 75 or 80, depending on how much traffic I'm in. I switch it back over to manual to make the lane changes necessary to hit my exit, and I'm manual until I park my car at work.

On the way home, I'm in stop and go traffic for an hour. When I'm 'going', I'm going 5-10mph. I am in full AP mode 95% of this time, and I could take a nap at this point, and it wouldn't actually be unsafe. I'm safer with AP than manual at this point, because my attention fades if not and I could drift lanes, or bump the car in front of me. Which I've seen happen to other people countless times, and which just increases the amount of time everyone else is in traffic, too.

Even the most egregious lane jumper can't get into my lane too fast for AP at this stage of my commute. I just set the follow distance to 2, and listen to audio books while I browse twitter or facebook. I look at what's going on out the windshield, but it's virtually unchanging. Like the thousands of people surrounding me, I'm slowly creeping forward, waiting on the 20-30 miles to pass. For an hour and a half.


Newsflash: people are replying to text messages with and without AutoPilot. AutoPilot just makes doing so safer.


This is the same non-argument - people don’t just want “safer” (if they did, a crappy autopilot that’s only slightly better than a human driver would be a viable product) - they just don’t accept any notion of unsafety in new tech.

The bottom line is that people don’t accept any risk at all involving autonomous driving - regardless of whether the alternative/old tech was worse. So, to put it very bluntly, people accept being hit by a texting person not paying attention for 1 second. People don’t accept being hit by a person in a level 3 autonomous vehicle not paying attention for 10 seconds - and that’s regardless of the relative safety of the two systems.

Obviously if you use Autopilot in bumper to bumper traffic this is an improvement, and a huge improvement over texting while manually driving. But texting at highway speed is thankfully rare when manually driving and should be just as outlawed with AP.


You could achieve the same level of comfort today on many common cars with adaptive cruise control. Autopilot gives you a false sense of security when it works great 90% of the time.


Again, you're making the same mistake the OP talked about - that the Autopilot works 80-90% of the time, so you'll keep letting it drive you while you relax, which means it's just a matter of time until you crash.


Actually... I notice my right hip and leg hurts in a pretty unique way after long drives, particularly if there's a lot of speed variance.

That might not be worth the full cost of autopilot (particularly compared with lane keepers), but it's a thing.


So, physical fatigue. I didn't consider that; it's not really what I'm waiting for autopilot for, but it's indeed something useful :)


I don't trust you to be able to snap out of a distracted state and take control of a car in a situation you may or may not have been paying attention to. I don't trust you to do that, I don't trust the driver behind me to do that, and I don't trust the drivers to the left and to the right of me to do that.

I should not need to trust you -- if I trusted you, why would we need autopilot at all? Humans are either qualified to drive cars or they aren't. If they are qualified, we don't need autopilot. If they aren't qualified, autopilot should never require human intervention. What we have now is a half-measure that assumes that neither humans nor autopilots can be trusted, but that some combination of those two untrustworthy parties can somehow be trusted. It doesn't make any sense.


Please. You should see me when the car isn't on AutoPilot. I speed excessively, weave through LA traffic switching lanes frequently and generally exhibit unsafe driving behavior. I can't help it, it's like I get bored or something. I'm a decent driver and have never gotten in a crash but have gotten a lot of tickets. When I use AutoPilot, the car automatically maintains a reasonable distance at a fixed speed. You need to trust people driving other cars today, just as you always have -- that hasn't changed yet. AutoPilot related accidents are new, but accidents aren't. So yes people will die using AutoPilot but the solution is to know the systems limits and also maybe enhanced attention detection systems in the car, not banning AutoPilot or anything like that. I do believe AutoPilot can already reduce the number of crashes today.


The 3 Autopilot deaths so far involved relatively young men, two of them in relatively elite professions: a former Navy SEAL and an Apple engineer -- the third was the son of a Chinese business owner [0]. They fit the profile of men fairly confident about driving and tech, perhaps too confident. 3 fatalities over 320 million miles for Teslas equipped with Autopilot hardware is not much better than the 1.16/100 million miles fatality rate of all American drivers and vehicles.
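
A rough back-of-the-envelope check of those figures, taking the 3 fatalities and ~320 million miles above at face value (and ignoring that hardware-equipped miles are not the same as Autopilot-engaged miles):

    # Sanity check of the rates quoted above (figures assumed, not independently verified)
    autopilot_fatalities = 3
    autopilot_hw_miles = 320e6      # miles driven by Autopilot-hardware Teslas, per the comment
    us_rate = 1.16                  # fatalities per 100 million miles, all US drivers/vehicles

    tesla_rate = autopilot_fatalities / (autopilot_hw_miles / 100e6)
    print(tesla_rate)               # ~0.94 fatalities per 100 million miles
    print(us_rate / tesla_rate)     # ~1.2x -- "not much better", as stated above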

[0] https://jalopnik.com/two-years-on-a-father-is-still-fighting...

edit: corrected a typo


I know, it definitely gives me pause to see people dying and is definitely a reminder to be safe. I remember when I was using AutoPilot on the 101 in the bay area this weekend, I noticed I was in the far left lane and decided to switch to a middle lane. So it's not that I don't think AutoPilot could kill me, as a matter of fact there have been several times when AutoPilot was definitely about to kill me but I took control in time to course correct. I think I've used the system enough to get a sense of what it can and can't do and feel that I'm less likely to get in a collision due to not paying attention while AutoPilot is on than I am to get into a collision because I was driving.

Also, in response to the statistics you cite: I wouldn't expect the statistics to be that far off the average because AutoPilot is limited in its possible use cases at the moment. Even in cars equipped with AutoPilot people are always still driving the car for at least some part of the trip. Therefore I wouldn't expect the impact of AutoPilot to lead to a significant deviation from the average. Plus, I'd speculate that the crash rate per mile is probably higher on a Tesla than on an average car -- I'm thinking of something like a Honda Civic. Faster cars probably get in more crashes, right? Maybe not, who knows. Regardless, it should be possible to control for this and assess the effect of AutoPilot on per-mile crash rates by comparing rates for Teslas with and without AutoPilot. This is somewhat complicated by the fact that even Teslas without AutoPilot have automatic collision avoidance, but it should shed some light on whether AutoPilot is making people crash more or less.


What about non-fatal accidents though? Where Teslas (and other assisted driving vehicles) prevent stuff like hitting pedestrians, random bikers, animals, parked cars etc.?


Tesla hasn't provided those numbers.


1.3 million people a year die in car crashes. Humans are woefully unqualified to pilot heavy machinery on a daily basis. Tesla’s Autopilot reduces crashes by 40% according to the NHTSA, so I can’t agree with your binary argument.

Whether you trust others is immaterial; statistics will be the final arbiter. If Autopilot still causes fatal accidents, but fewer fatal accidents than humans alone, how could you argue against such a safety system? What of the lives saved that wouldn't have been if we demand an entirely fault-proof system prior to implementation? Who are you (not you specifically, but the aggregate) to take those lives away because of irrationality?


> Tesla’s Autopilot reduces crashes by 40% according to the NHTSA

This claim was immediately called into suspicion when it was first published, and the NHTSA is currently facing a FOIA lawsuit for refusing to release data to independent researchers.

http://www.safetyresearch.net/blog/articles/quality-control-...

https://jalopnik.com/feds-cant-say-why-they-claim-teslas-aut...


As noted in your citation, Tesla requested the data they provided to be confidential (which is not an uncommon request), and the NHTSA granted the request. Whether the statement from the regulatory agency can be independently verified is immaterial.


> Whether the statement from the regulatory agency can be independently verified is immaterial

You believe independent verification of study results is "immaterial"? OK.


That I do not have access to the raw data does not make the resulting facts any less true (see: proprietary hurricane models).

The NHTSA made a determination and I’m using it as a data point.


OK, and all I said in my comment was that the NHTSA was asked for elaboration and proof -- because its findings seemed curious with respect to other study results -- and so far they have declined further explanation. This may be relevant information for anyone who sees you using the NHTSA's claim as a premise.


Fair point. I upvoted your posts; thinking about it further, it’s a legitimate line of inquiry.


I think I normally would have given Tesla the benefit of the doubt. But after the misleading, weasely-worded data they discussed to defend AutoPilot in light of the recent fatal accident [0], I think the onus is now on them to provide more concrete proof.

[0] https://news.ycombinator.com/item?id=16722500


>> Tesla’s Autopilot reduces crashes by 40% according to the NHTSA

So does any car with automatic emergency braking and/or forward collision warning (links below).

The problem here is, this Tesla drove head-on into the gore point. And previously, it drove right into the side of a huge truck. So... can it be trusted? Your call.

https://www.consumerreports.org/car-safety/automatic-emergen...

" IIHS data show rear-end collisions are cut by 50 percent on vehicles with AEB and FCW. "


"drove head-on into the gore point" "drove right into the side of a huge truck"

A lot of human-led accidents sound just as silly when stated this simply, so it's not relevant to the comparison.

More to the point, I think each of the deaths would be exactly as tragic, no more no less, if the sequence of events was complicated.


I am human, and any product built for me will have to take into account my idiosyncrasies. This includes my unwillingness to drive on the same road with a car that might at any moment swerve into me because of weird software.

It may be irrational, but I can forgive a human, I cannot forgive an AI.


Wasn't that with the old, better, auto pilot?


> Makes me wonder how this gets resolved without jumping through the gap to 'perfect' autopilot.

I suspect the unfortunate reality is that people die on the journey to improvement. Once we decide this technology has passed a reasonable threshold, it becomes a case of continual improvement. And as horrible as this sounds, this is not new. Consider how much safer cars are than 20 years ago and how many lives modern safety features save. Or go for a drive in a 50-year-old sports car and see how safe you feel. In 50 years we'll look back at today's systems the same way we look at the days of no seat belts.


I honestly don't understand why, if the driver does not take control when the car has sensed an issue, it does not just throw on hazards and roll slow or stop. Seems like the safest way to keep people from dying.


Can you think of any reasons or situations where that logic fails? I can think of tons.

The very challenging, and somewhat unusual given the life/death stakes, aspect of this domain in general is that general 'easy' logic works for 99% of the scenarios. But in the remaining 1% -- and I mean every single possible last scenario, whether probable or not -- the "what to do" is so much more difficult. Enough time to do it? Can be done safely? Etc.

The reason the car turns control back over to the human at all is because the vehicle presumably does NOT have any maneuvers left which are high-confidence -- whether that's because the sensing equipment failed, the software failed or failed to interpret the data satisfactorily, or there's just no "good move" left, or any other number of reasons, etc...


Slowing down should be the first thing a car does in these scenarios. The confidence factor matters less as speed approaches zero.

Seriously, I slow down when I am unsure of the conditions; I surely have never evaluated every last scenario, and I have never run into a concrete barrier.


I'm guessing that you don't drive much on e.g., 6-lane freeways during rush hour. Slowing down in confusion is pretty dangerous behavior on a freeway.


OK, so you have a speeding loaded truck behind you that would not be able to brake for whatever reason -- do you still want to slow down?


> OK, so you have a speeding loaded truck behind you that would not be able to brake for whatever reason -- do you still want to slow down?

Mental experiment: which is preferable - colliding with a static barrier at an unchecked 70MPH or getting rammed from behind by a truck at a relative speed of 30MPH (assuming that's how much you've braked by)?

That said - slowing down =/= emergency stop - that truck behind you should leave enough room in front of it to come to a complete stop - unless you've cut in front of it. I'm probably ranting, but I'll repeat this as I've witnessed it far too many times: do not cut in front of loaded semi-trucks. The gap in front of them is intentional and it wasn't meant for you - it's their braking distance - they have ridiculous momentum and they can squash you!
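
For what it's worth, a rough kinetic-energy comparison of the two options in that mental experiment (a sketch only: it assumes equal masses and ignores crumple zones, impact angles, and the truck's far greater mass):

    # Kinetic energy scales with the square of speed: KE = 0.5 * m * v**2
    barrier_speed = 70.0    # mph, unchecked into the static barrier
    rear_hit_speed = 30.0   # mph, relative speed of the truck after you've braked

    ratio = (barrier_speed / rear_hit_speed) ** 2
    print(ratio)            # ~5.4x the energy per unit mass hitting the barrier head-on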


My Subaru has a "lane assist" feature which helps keep you in your lane while driving. It only applies a little bit of pressure in the direction you should be moving, and if you don't do anything the car will still drift out of the current lane. This certainly helps me stay more aware, since I can't ever rely on the car to fully steer; if my concentration lapses when I come upon a curve on the freeway, or I drift to one side a bit, I get a reminder that I need to steer. So it definitely helps me remember that I'm always in control.

OTOH, the car has probably saved me from at least one accident where traffic ahead of me suddenly slowed down right as my attention relaxed. The _only_ correct way of using these systems is to treat them as an extra level of safety on top of your responsibilities as a driver. An "extra set of eyes" to help you avoid an accident while leaving you as the primary driver of the vehicle.


I use the Subaru adaptive cruise control quite a bit. It turns off when it is unable to see well enough, which usually only happens in scenarios where I shouldn't be driving and can't see (like during a blizzard).

Human+automation is very powerful! However, I don't think we are far from self-driving beating human alone.


The best solution is to keep the driver engaged, i.e. still holding the wheel and showing they are paying attention. It's how all the other cars with lane assist do it. You can't go more than 30 seconds without touching the wheel before it complains.

It's not perfect but it sure beats the driver being so used to it they start doing other things like watching movies on their laptop.


No that's not the best solution. The best solution is to not have it at all until it's actually good enough to self-drive you.

30 seconds is a HUGE amount of time to react on a highway. You'll be long dead by the time another 25 seconds pass if the accident happens just 5 seconds after you took your hands off the wheel. And this is exactly what happened in the recent crash.


Not going to ever be able to prevent someone from taking their eyes/hands off the road. The point is to keep reminding them to pay attention.


This. I don't know; I feel ambivalent about autopilot right now. Like most on HN I always crave new technologies, but the fact that something very dangerous can be very good most of the time and very bad occasionally makes me wary of using it altogether. I know that's irrational, because more often than not it probably saves you from your own mistakes.

Also, if Tesla intends to shield itself behind a beta status for its autopilot system until it is perfect, I think this beta status will last even longer than the time GMail was in beta. At the least, in this case they should own this problem and hardcode a meaningful warning at such intersections, or do something.


It's not irrational at all. I drive our family 2018 Honda Odyssey, and it's already "saving me from my own mistakes" by beeping angrily when I signal for a lane change and there's someone in my blind spot. Works really well. It also has a notification light on each side of the car to indicate blind spot object presence, so I have to miss that first.

The question seems to be which is better:

1) A car very intelligently helping a human drive better.

2) Or a car mostly driving well itself but needing humans to very rarely override fatal crashes.

I prefer 1)

I'm not sure there are any studies comparing these things, since "human driving but lots of extra tech help" is harder to quantify and doesn't have all the data going back to the mothership like Tesla.


I think that 1 and 2 still can entail the same problems though. If it helps you check your blind spot you might occasionally stop checking yourself and rely on it to check for you. Then you implicitly start relying on that feature to work. It's different degrees of handing over control though.


For me, you nailed it. Just more 1 above please.


Why are they allowed to put beta software on public roads, where third parties that never consented to their EULA are put at risk?

If they want beta software on a private track, that's their (and their insurance company's) problem.


Because no one is stopping them. Even here on HN, which is supposed to be filled with tech people, this is rarely asked. Then it is not surprising that ordinary people and non-tech-savvy authorities are complacent...


Oh, it is asked all right. But then the business side interrupts with time-to-market and diminishing returns and the cost of settling out of court vs. probability thereof - and voila, "move fast and kill people".


You just described every modern car on the road.


> Specifically it works completely correctly like 80 - 90% of the time and as a result, if you use it a lot, you start to expect it to be working and not fail.

Different environment (and stakes), but I observed the same thing happening a couple of times during IT incident response. The automation crashed/failed/got into a situation it could not automatically fix, and the occurrence was rare enough that people actually panicked and it took them quite a while to fix what in the end was a simple issue. They just didn't have the "manual skill" (in one case, the tools to actually go and solve the issue without manually manipulating databases) anymore.


The problem is that Autopilot is simultaneously too good and not good enough.


The problem is that Tesla is releasing enhanced driver assistance and calling it Autopilot.

I don't trust anything from Elon anymore; big talker and lots of hype.


On my first drive with Autopilot, it tried to follow right into the back of a stopped car in a turning lane and randomly had phantom braking issues, like when going under overpasses.

Not to mention there are things most people do defensively while driving that Autopilot doesn't: anticipating a vehicle coming into my lane by looking at both its wheel position and its lateral movement. Autopilot ignores those pieces of information until the vehicle is basically in your lane.

Personally, I feel I have to be more on guard and attentive with it on, because I know there are fatal bugs.


Yep, it degrades badly, like digital cellular calls which drop out suddenly.


As someone with a car that has a much weaker system (ProPilot), I can see where that would be a problem. It's not really tempting to let ProPilot do it on its own, as it regularly wants to take exit ramps and occasionally pushes too far towards the middle of the road.

It seems like AutoPilot users take their hands off the wheel regularly, and I just don't see how that is safe with this type of system.


If people are using it, not paying attention, with the expectation that it will beep to tell you to take over that's a big problem. In situations like this divider issue it won't beep, it thinks everything is fine right up until it rams you into a stationary object. I think people may not be fully aware of all the potential failure modes of this tech?


> said the problem with autopilot is that it is "too good".

Isn't it true that the thing cannot detect a stationary object in its path if the vehicle is traveling at above 50kph? If that is true, then I think this is an extremely dangerous situation that the owners of these vehicles are in...


Owners are in their armored cocoon. I'd be worried about whatever is beside/behind/in/at that stationary object.


"too good" -> "completely correctly like 80 - 90%". I'm unable to make the two sentences make sense in the same statement, since 85% looks extremely low to be considered too good, given that the outcome is to crash if you do not pay attention in the 15%.


I expect it is the nature of the problem. Autopilot fails when the road conditions are such that it cannot function correctly. Imagine that you drive your commute from point A to point B and use autopilot all the time and it has never failed you. Then you use it when driving from point A to point C and because there are road conditions on that route that it cannot handle it tries to drive you into a ditch. Your experience may be giving you more confidence in its ability than it deserves. This is made worse when it has worked fine from point A to point B until one day a painter had a bad day and one of the cans of paint they forgot to secure fell off the back of their truck and put a big diagonal splash on the lane. And on that day your autopilot drives you into the car next to you.


Some people do it. Think of it as a 0-100 scale, where 0 is useless and 100 makes no mistakes ever, rather than as a percentage of correct actions.


IMHO this is kind of a mental trick. How good is a hammer that lets you place a nail 90% of the time? A failure percentage is not a measure of usefulness or a scale of perfection, like "I'm 80% handsome, that's a lot of handsome." Something with a 20% failure rate is terrible.
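
To put illustrative numbers on why "works 90% of the time" compounds so badly (the per-use reliability here is purely hypothetical):

    # Chance of getting through N independent uses without a single failure
    per_use_success = 0.90
    for n in (1, 10, 50, 100):
        print(n, round(per_use_success ** n, 5))
    # 1 -> 0.9, 10 -> ~0.35, 50 -> ~0.005, 100 -> ~0.00003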


Indeed. If I have to be sober and paying attention when riding in a robot car, I might as well just drive it myself just to keep me from falling asleep. At which point, I'll just not bother with the robot car at all.


The lag time for it to know you're distracted is roughly one minute -- that's how often you have to jiggle the wheel.


"Jiggle the wheel" and "have situational awareness of the road around you" are two very different things.


Are you kidding? 80-90% seems awfully low to trust it.


GM's well reviewed SuperCruise focuses on monitoring the driver so that the driver is paying attention to the road.

That's what 'too good' is supposed to be.


> Makes me wonder how this gets resolved without jumping through the gap to 'perfect' autopilot.

The solution is easy, but probably not what many would like to hear: ban all "self-driving" or "near-self-driving" solutions from being activated by the driver, and only allow the ones that have gone through very rigorous and extensive government testing (in a standardized test).

When the government certifies the car for Level 4 or Level 5, and assuming the standardized test has strict requirements like 1 intervention per 100,000 miles, or whatever works out to around 10-100x better than our best drivers, then the car can be allowed on the road.

Any lesser technology can still be used in other cars, but instead of being a "self-driving mode" it should just assist you, in the sense of auto-braking when an imminent accident is about to happen, or maybe just warning you that an imminent accident is about to happen; that could still significantly improve driver safety on the road until we figure out Level 5 cars.
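
As a rough illustration of what a "10-100x better" bar could mean in terms of miles (using the oft-cited ~1.16 fatalities per 100 million miles as the human baseline; intervention rates and fatality rates are different metrics, so this is only a sketch):

    # Hypothetical certification bar expressed as miles per fatality
    human_rate = 1.16                               # fatalities per 100 million miles
    human_miles_per_fatality = 100e6 / human_rate   # ~86 million miles

    for factor in (10, 100):
        print(factor, human_miles_per_fatality * factor)
    # 10x  -> ~0.9 billion miles per fatality
    # 100x -> ~8.6 billion miles per fatality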

