1. Ethical. It is one thing to do something stupid and die; it is another for a technology to fail in trivial circumstances that are, in theory, avoidable. A doctor can also make errors, but a medical device is required to be very safe and reliable in what it does.
2. Wrong comparisons: you should compare the autopilot against a rested, attentive driver who drives slowly and with a lot of care. Otherwise the statistics do not account for the fact that when you drive, you are in control and can decide your own level of safety. A diligent driver like that crashing because of immature software is terrible.
3. Lack of data: AFAIK there is not even enough publicly available data to tell us the crash rate of Teslas with Autopilot enabled vs. Teslas without it, per kilometer under the same road conditions. That is, you should compare only on the same roads where the Autopilot is able to drive.
Autopilot will rule the world eventually and we will be free to use our time differently (even though this was already possible 100 years ago by investing in public transportation instead of everyone spending the money on a car of their own... which is sad. Fortunately, in central Europe this has happened to a degree; see for instance northern Italy, France, ...). But until it is ready, shipping it as a feature in such an immature state just to gain an advantage over competitors is terrible business practice.
This needs elaboration or citation. You're one of the only folks I've seen come straight out and make this point: that somehow a technology or practice that is objectively safer can still be "unethically safe" and should be shunned in favor of less safe stuff.
I don't buy it. This isn't about individual action here; no rejection of utilitarian philosophy is going to help you.
In fact, medical devices seem actually counter to your argument. No one says an automatic defibrillator's decisions as to when to fire are going to be as good as a human doctor, and in fact these things have non-zero (often fatal) false positive rates. But they save lives, so we use them anyway.
Bottom line, that point just doesn't work.
This, of course, ignores the fact that stupid choices drivers make tend to affect other people on the road who did nothing wrong, so the introduction of a self-driving car which makes fewer stupid decisions would reduce deaths in both categories of people here.
Perhaps. I reject this argument though. A death is a death.
And if you prevent lifesaving technology then you are responsible for a whole lot of deaths, regardless of how "deserved" those deaths are.
A death caused by shitty engineering (e.g. Tesla in this case) is not the same as a death caused by gross negligence on one's own part.
>And if you prevent lifesaving technology then you are responsible for a whole lot of deaths, regardless of how "deserved" those deaths are.
How many deaths has Tesla prevented, which would have happened otherwise?
Certainly a larger number than the number of deaths caused by Autopilot failure (which makes major news in every individual case). Have a look at YouTube for videos of Teslas automatically avoiding crashes.
The result is the same, no? Isn't the idea to save lives? Why does one life have more value than another's simply because someone's death was caused by their own negligence?
What about the other people on the road? Do their lives matter less because of the gross negligence of the person not paying attention while they're driving?
This issue is a lot more complicated than you're making it seem.
Except in this case, the driver has 100% control.
If you give every shitty driver a self driving Tesla maybe you would do something to make roads safer, but if you’re just giving it to higher net worth individuals who place greater value on their own life, you haven’t even made a dent in traffic safety.
In fact, in some cases all you're doing is making drivers less safe, because the autopilot encourages them not to pay attention to the road no matter how carefully you think they are watching. The men killed in Teslas could all have avoided their deaths if only they had been paying attention. If I see a Tesla on the road I stay the hell away, lest it do something irrational from an error and kill us both.
I do see some sources that claim rich drivers get better insurance rates, but it is unclear to me if that is due to driving skill or a number of other factors that increase rates, like likelihood of being stolen or miles driven.
The third paragraph though is just you flinging opinion around. You assert without evidence that a Tesla is likely to "do something irrational from an error and kill us both" (has any such crash even happened? It seems like Teslas are great at avoiding other vehicles, and where they fall down it tends to be in recognizing static/slow things like medians, bikers and turning semis and not turning to avoid them). I mean, sure. You be you and "stay the hell away" from Teslas. But that doesn't really have much bearing on public policy.
Wait, so you are willing to share the road with all those nutjobs, yet you're "staying the hell away from" Teslas you see which you claim are NOT being driven by these people? I think you need to reevaluate your argument. Badly.
That even leaves aside the clear point that a Tesla on autopilot is significantly less likely to make any of those mistakes...
What are you basing this on? Specifically how is it 'clear' and what data has shown this to be 'significant' ?
My only concern is that there should be somewhat responsible people working on it (which means, for example, no Uber, Facebook, LinkedIn, or Ford self-driving cars on public roads).
But thinking about it a bit more, what if competitors shared data? Would that get us "there" (level 4+) at all? Would it be a distraction?
After all, how much better can a $500K supercar be compared to a $50K car? Definitely not ten times better, the speed limits are the same, seating capacity is likely smaller, there may be a marginal improvement in acceleration and a corresponding reduction in range (and increased fuel consumption).
Even having a car / not having a car is a status thing for many people (and it goes both ways, some see not having a car as being 'better' than those that have cars and vice versa, usually dependent on whether or not car ownership was decided from a position of prosperity or poverty).
as a trained classicist, I take exception to the idea a car could do what I do
There are plenty of people making that case. See for example this piece in The Atlantic the other day (https://www.theatlantic.com/technology/archive/2018/03/got-9...) talking about the concept of moral luck in the context of self driving cars. It puts the point eloquently.
As to why we use certain technologies, that is not so clear cut either. For instance, if I have a heart attack and someone uses a defibrillator on me - at that time, that is not necessarily my choice. I'm incapacitated and can't communicate, and if I die at the end there's no way to know what I would have chosen.
Not to mention- most people are not anywhere nearly informed enough to decide what technology should be used to save or protect their lives (for instance, that's why we have vaccine deniers etc).
Then these people should stop trying to force me to accept a less safe method of transportation, by preventing me from using new technology!
Yeah, those people shouldn't be forced to use a self-driving car. Which they ALREADY are not being forced to do.
Nobody is being forced to use the technology. Just don't use it if you don't like it.
It is literally the opposite. These other people are forcing ME into a more dangerous situation.
I'm all for less red tape.
For the record, that's my objection to the technology: that it's not yet where the industry says it is and it risks causing a lot more harm than good.
Another point. You say nobody is being forced to use the technology. Not exactly; once people start riding self-driving cars then everyone is at risk- anyone can be run over by a self-driving car, anyone can be in an accident caused by someone else's self-driving car, etc.
So it's a bit like a smoker saying nobody is forced to smoke- yeah, but if you smoke next to me I'm forced to breathe in your smoke.
Yes, we would also tolerate Teslas if they were critical life support technology. How many lives has it saved BTW?
>and in fact these things have non-zero (often fatal) false positive rate
What is non zero? Ten to the power -20?
Unfortunately, this report seems to have shot itself in the foot by apparently using 'Autopilot' and 'Autosteer' interchangeably, so it leaves open the possibility that the Autopilot software improves or adds features to the performance of fairly basic mitigation methods, such as in emergency braking, while having unacceptable flaws in the area of steering and obstacle detection at speed. In addition, no information is given on the distribution of accident severity. It is unfortunate that this report leaves these questions open.
Even if these claims are sustained, there are two specific matters in which I think the NHTSA has been too passive: As this is beta software, it should not be on the public roads in what amounts to an unsupervised test that potentially puts other road users at risk. Secondly, as this system requires constant supervision by an alert driver, hands-on-the-wheel is not an acceptable test that this constraint is being satisfied.
They could be more useful with -- for now -- fewer features. They probably won't do it because they want some sacrificial beta testers to collect more data for their marginally less crappy next version. But given that the car does not even have the necessary hardware to become a real self-driving car (and that some analysts even think Tesla is gonna close soon), the people taking the risk of being sacrificed will probably not even reap the benefits of the risks they have taken, having paid enormous amounts of money to effectively work for that company (among other things).
It’s not a self-driving car. It’s really an advanced cruise control.
Why do we keep referring to something that we understand should require human supervision as "auto"? Stop the marketing buzzfeed and let's be real.
It would be great to live in the marketing people's world where everyone is so able to parse weasel words. That would solve the fake news problem overnight.
is there any point in saying that a hypothetical brain-dead, comatose bodybuilder is stronger than a starving man?
The point being that in the future the car _could_ get "full self-driving capability" via a software update. In contrast, a car that doesn't have the necessary hardware will never be fully self-driving even if we do develop the necessary software to accomplish that in the future.
a. When you buy a car, why should you even care about that hw/sw distinction? More importantly, do you have the distinction in mind at all times, and are advertisements usually worded that way, stating that maybe the car could become self-driving one day (but without even stating the "maybe" explicitly, just using tricks)?
b. It is extremely dubious that the car even has the necessary hardware to become a fully autonomous car. We will see, but I don't believe it much, and more importantly competitors and people familiar with the field don't believe it much either...
People clearly are misunderstanding what Tesla Autopilot is, but this is not, ultimately, their fault. This is Tesla's fault. The people operating the car can NOT be considered perfect, flawless robots. Yet Tesla's model treats them as if they were, and rejects all responsibility, without even acknowledging that treating them that way was a terrible mistake. We should act the same way we do when a similar case happens with a pilot mistake in an airplane: change the system so that the human makes fewer mistakes (especially if the human is required for safe operation, which is the case here). But Tesla is doing the complete opposite, by misleading buyers and drivers in the first place.
Tesla should be forced by the regulators to stop their shit: stop the misleading and dangerous advertising; stop their uncontrolled Autosteer experiment.
I think I actually sort of disagree with your reasoning in precisely the opposite direction. Specifically, you state the following: "The people operating the car can NOT be considered perfect, flawless robots."
I agree with that statement 100%. People are not perfect robots with perfect driving skills. Far from it. Automotive accidents are a major cause of death in the United States.
What I disagree with is your takeaway. Your takeaway is that Tesla knows that people aren't perfect drivers, so it is irresponsible to sell people a device with a feature (autopilot) that people will use incorrectly. Well, if that isn't the definition of a car, I don't know what is. Cars in and of themselves are dangerous, and it takes perhaps 5 minutes of city driving to see someone doing something irresponsible with their vehicle. This is why driving and the automotive industry are so heavily (and rightly) regulated.
The knowledge that people are not safe drivers is, to me, a strong argument in favor of autopilot and similar features. I suspect, as many people do, that autopilot doesn't compare favorably to a professional driver who is actively engaged in the activity of driving. But this isn't how people drive. To me, the best argument in favor of autopilot is - and I realize this sounds sort of bad - that as imperfect as it may be, its use need only result in fewer accidents, injuries, and deaths than the human drivers who are otherwise driving unassisted.
Misleading examples of the genre:
My cell phone has the right hardware to cure cancer! I just don't have the right app.
The dumbest student in my class has a good enough brain to verify the Higgs boson. He just doesn't know how.
This mill and pile of steel can make the safest bridge in the world. It just hasn't been designed yet.
Your shopping cart full of food could be used to make a healthy, delicious meal. All you need is a recipe that no one knows.
Baby, I can satisfy your needs up and down as well as any other person. I just have to... well... learn how to satisfy your needs!
If we were well on the way to completing a cure for cancer that uses a certain type of cell phone hardware, maybe that first statement wouldn't sound so ridiculous.
And if by some chance it turns out that more hardware is required after all, they'll try to shoehorn the functionality into the available package, if only to save some $ but also to not look bad from a PR point of view. That this compromises safety is a given; you can't know today what exactly it will take to produce this feature until you've done so, and there is a fair chance that it will in fact require more sensors, a faster CPU, more memory or a special-purpose co-processor.
But that's not going to happen, because Tesla wants to deceive some people into believing that autopilot is "full self driving" so they will buy the car.
While the Tesla spokespeople are good at saying it's driver assist, their marketing people haven't heard - https://www.tesla.com/autopilot/. That page states "All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver." but as I noted above, we don't know how much of that safety should be attributed to the human. Tesla apparently knows when the driver's hands are on the steering wheel and I presume they can also tell when the car brakes, so they may have the data to separate those statistics. At a minimum, their engineers should be looking at every case where the autopilot is engaged but the human intervenes. They should probably also slam on the brakes (okay ... more gently) if the driver is alerted to take over but doesn't.
As an aside, just the name "Autopilot" implies autonomy.
This is perhaps a case of purposely confusing marketing. All vehicles have the hardware for full self-driving capability but not yet the software. The full self-driving is to be enabled later on through an over-the-air software update.
It's not merely purposely confusing. It's at best, not an outright lie only because they hope it's true.
People's eyes can detect individual photons, at much higher resolution and dynamic range than a typical camera.
The question of purposefulness is mostly irrelevant. It is their responsibility to avoid ambiguity in this domain because it can be dangerous. They are not doing it => they are putting people in danger. A posteriori, if somebody manages to sue Tesla after a death or a bad injury, maybe the purposefulness will be examined (though it will be hard to decide), but a priori it does not matter much for the consequences of their in-any-case misleading claims (even if, in the minds of the people who wrote it that way in the first place, it was only harmless advertising).
To finish, that they carefully choose their words to be technically true over and over, yet understood another way by most people, is at the very least distasteful. That they do it again and again through every existing channel makes it more and more probable that this is on purpose. Of course there is no proof, but past a certain point we can be sufficiently convinced without a formal proof (hell, even math proofs are rarely checked formally...).
This is Tesla's big lie.
In all their marketing, Tesla is comparing crash rates of their passenger cars to ALL vehicles, including trucks and motorcycles, which have higher fatality rates. Motorcycles are about 10x-50x more dangerous than cars.
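To illustrate the fleet-mix problem with purely made-up numbers (illustrative assumptions, not real data): even a small share of much riskier motorcycle miles inflates the "all vehicles" baseline that Tesla compares itself against.

    # illustrative assumptions only, not real data
    car_rate   = 7.0             # assumed car deaths per billion miles
    moto_rate  = car_rate * 25   # "10x-50x more dangerous", take ~25x
    moto_share = 0.006           # assumed motorcycle share of total miles
    fleet_rate = (1 - moto_share) * car_rate + moto_share * moto_rate
    print(fleet_rate)            # ~8.0 -- beating this mixed baseline is a lower bar than beating cars alone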
Not only that, they aren't controlling for variances in driver demographics - younger and older people have higher accident rates than middle-aged Tesla drivers - or for environmental factors - rural roads have higher fatalities than highways. Never mind the obvious "accidents in cars with Autopilot" vs "accidents in cars with Autopilot on".
If you do a proper comparison, Tesla's Autopilot is probably 100x more fatal than other cars. It's a god-damned death trap.
And remember, there are several extremely normal cars with ZERO fatalities: https://www.nbcnews.com/business/autos/record-9-models-have-...
This is not a problem that will be solved without fundamental infrastructure changes in the roads themselves. Anyone that believes in self-driving cars should never be employed, since they don't know what they're talking about.
However, I don't see the evidence for the claim that "Tesla's Autopilot is probably 100x more fatal than other cars". The flip side of the complaint that Tesla hasn't released information to know how safe Autopilot really is, is that we don't know how unsafe it really is either. If this is merely to say "I think Autopilot is likely very unsafe" then just say so, rather than giving a faux numerical value.
As for the claim that self-driving cars can't work without "fundamental infrastructure changes" and everybody working on self-driving cars should be fired, I think you're talking way beyond your domain of expertise.
The truth is somewhere in between Tesla's marketing and your wildly absurd 100x-more-fatal claim, but it's much closer to Tesla's end than yours.
Tesla's statistics (i.e. real numbers, but context means everything) do involve a whole whack of unrelated comparisons (buses, 18-wheelers, motorcycles) that all serve to skew the stats in various ways, so we can ignore their claim of being slightly safer than regular cars.
However, comparing more like to like, IIHS numbers for passenger-car driver deaths on highways put Tesla at 3.2x more likely than all other cars to be involved in a fatal crash (1 death/428,000,000 miles driven vs Tesla's 1 death/133,000,000 miles driven).
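As a quick sanity check of that 3.2x figure, using only the per-mile numbers quoted above:

    all_cars_miles_per_death = 428_000_000   # IIHS passenger-car figure
    tesla_miles_per_death    = 133_000_000   # figure attributed to Tesla above
    print(all_cars_miles_per_death / tesla_miles_per_death)  # ~3.2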
Of course this too is an unfair comparison. A 133hp econobox/Prius and a sports car are treated as equal in performance terms in that comparison. If one were really interested in accuracy, a comparison of high-power AWD cars in similar price ranges driven on the same roads by the same demographics would be needed.
So even by standards clearly biased against Tesla, they are nowhere near 100x more fatal than other cars. Tesla's own numbers claim autopilot reduces accidents, and supposedly NHTSA numbers back them up.
It is important and critical not to believe marketing hype and lazy statistics. If you want people to take you seriously, countering hype and bad stats with equal or worse hype and worse counter-stats is not the way to do it.
Since Tesla is comparing their crash rates against motorcycles, the 100x number isn't so absurd.
I do believe "it's still safer than the average car" because there's not a big tank of explosive fuel - I'm most curious to hear what caused such a massive fire in this crash. But you're talking about the autopilot, and it's statistically incorrect to say it's safer than the average car. It's merely safer than a driver alone - this should be no surprise, just as you'll find that cars with backup cameras and alarms don't have as many accidents while in reverse as cars without them.
How does a Tesla get such good range? There's still a lot of energy in those batteries, and a damaged battery starts a fire far more easily than leaking fuel --- the former can self-ignite just from dissipating its own energy into an internal short, while the latter needs an ignition source. In addition, the batteries are under the entire vehicle and thus more likely to be damaged; a fuel tank has a smaller area and is concentrated in one place.
It is extremely rare for fuel tanks to explode in a crash.
So far, real-world experience with Tesla seems to show a lower-than-average risk of fires, though the breadth and nature of the battery lead to challenges in managing the fire itself.
All cases that I'm aware of proceeded at a slow enough pace to allow evacuation of the vehicle.
How often does a damaged and leaking fuel tank start a fire?
How often does a lithium battery that has been structurally compromised start a fire?
Fire safety is a major negative for lithium batteries. That much electrical energy in that form factor can only be so safe.
However, I would think that testing Autopilot alone seems impractical. It's been asserted that AP has no ability to react to or even detect stationary objects in front of it. Can't we assume that in all those cases, the absence of driver intervention would result in a crash?
Almost every driver thinks they're better than average.
Even when it's a stupid person dying from their stupidity, it's still a tragedy.
I really think data-driven analysis is the way to go. If we can get US car fatalities from 30000 a year to 29000 a year by adopting self-driving cars, that's 1000 lives saved a year.
Agree with your point #3. If Tesla autopilot only works in some conditions, its numbers are only comparable to human drivers in these same conditions.
What this ignores is that self-driving cars will by and large massively reduce the number of 'stupid' drivers dying (the ones who are texting and driving, drinking and driving, or just simply bad drivers) but may cause the death of more 'good' drivers/innocent pedestrians.
So the total number could go down, but the people who are dying instead didn't necessarily do anything deserving of an accident or death.
I say this as someone who believes self-driving cars will eventually take over and that we need to pass laws allowing a certain percentage of deaths (so that one case of the software being at fault doesn't cause a company to go under), but undeserved deaths are something people will likely have to deal with somewhere down the line with self-driving cars. At the very least, since they're run by software they should never make the same mistake twice, while with humans you see the same deadly mistakes being made every day.
I think that’s a win, even if I now have an even statistical chance to be in the 1001 and no chance to be in the 2001.
Requiring that I be in the 1001 is not ok, no more than requiring I donate all my organs tomorrow. Allowing that I might be in the 1001 is ok, just as registering for organ donation is.
You're saying that auto-driving would save the lives of 1000 people who would have died without it, by causing the death of another 1001 that wouldn't have died if it wasn't for auto-driving?
So you're basically exchanging the lives of the 1001 and the 1000? That looks a lot less of a no-brainer than your comment makes it sound.
Not to mention, the 1001 people who wouldn't have died if it wasn't for auto-driving would most probably prefer to not have to die. How is it that their opinion doesn't matter?
It kills 1001.
Net lives saved = 1000.
> How is it that their opinion doesn't matter?
The 2001 who are saved by auto-driving were also most probably not interested in dying. How is it that their opinion doesn't matter?
It's a trolley problem. Individual people have been killed by seatbelts, yet you probably think it's OK that we have seatbelts because many more people have been saved and/or had their injuries reduced. Individual people have been killed by airbags, yet you probably think it's OK that we have them. Many people have been killed by obesity-related mortality by shifting walkers and bikers into cars, yet you probably think it's OK that we have cars.
 - https://en.wikipedia.org/wiki/Trolley_problem
Right. And net lives lost = 1001. So, we've killed 1001 people to let another 1000 live. We exchanged their lives.
>> The 2001 who are saved by auto-driving were also most probably not interested in dying. How is it that their opinion doesn't matter?
Of course it matters, but they were dying already, until we intervened and killed another 1001 people with our self-driving technology.
Besides, some of the people who would be dying without self-driving technology had control of their destiny, much unlike the (btw, very theoretical) trolley problem. Some of them probably made mistakes that cost their lives. Some of them were obviously the victims of others' mistakes. But the people killed because of self-driving cars were all the victims of self-driving cars mistakes (they were never the driver).
>> Individual people have been killed by airbags, yet you probably think it's OK that we have them.
An airbag or a seatbelt can't drive out on the road and run someone over. The class of accident that airbags cause is the same kind of accident you get when you fall off a ladder etc. But the kind of accident that auto-cars cause is an accident where some intelligent agent takes action and the action causes someone else harm. An airbag is not an intelligent agent, neither is a seatbelt- but an AI car, is.
No. Net lives lost = -1000. Gross lives lost = 1001.
We killed 1001 people to let 2001 live.
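Spelling out the arithmetic with the same numbers used throughout this thread, since we keep talking past each other:

    saved  = 2001  # would have died without auto-driving
    killed = 1001  # would have lived without auto-driving
    print(saved - killed)  # net lives saved = 1000; gross lives lost = 1001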
Let's change tack slightly. Say that we had a vaccine for a deadly disease and 1 million people were vaccinated with it. And let's say that out of that 1 million people, 1000 died as a side effect of the vaccine, while 2001 people avoided certain death (and let's say that we are in a position to know that with absolute certainty).
Do you think such a vaccine would be considered successful?
I guess I should clarify that when I say "considered successful" I mean: a) by the general population and b) by the medical profession.
People claim seatbelts have caused lots of deaths, and I'm sure at least some of these claims are fair. I still think it's safer to drive a car with a seatbelt rather than without.
But all self-driving car victims (will) have done nothing wrong. Whether they were riding in the car that killed them or not, they were not in control of it, so they're not responsible for the decision that led to their deaths.
Unless the decision to go for a walk or a cycle, or to ride in a car, makes you responsible for dying in a car accident?
We already know that there is an area between driver assistance and automatic driving where people just can't keep their attention at the level it needs to be. Driving is an activity that maintains attention. People can't watch the road with a hand on the wheel when nothing happens and keep their attention up while the car is driving itself.
The way I see it, the biggest safety sin from Tesla is shipping an experimental beta feature that targets a known human weak point. Adding a potentially dangerous experimental feature, warning about it, and then washing your hands of it is not good safety engineering.
The news story points out how other car manufacturers have cameras that watch the driver for signs of inattention. A driving assistant should watch both the driver and the road.
You can't have a driving assistant that can be used as an autopilot.
>you should compare the autopilot against a rested, attentive driver who drives slowly and with a lot of care. Otherwise the statistics do not account for the fact that when you drive, you are in control and can decide your own level of safety. A diligent driver like that crashing because of immature software is terrible.
I think you overestimate the rationality of human beings. I commute to work by motorcycle every day and I've noticed that I tend to ride more dangerously (faster, "dirtier" passes etc...) when I'm tired, which rationally is stupid. I know that but I still instinctively do it, probably because I'm in a rush to get home or because I feel like the adrenaline keeps me awake.
This is an advantage of autonomous vehicles, they can be reasonable when I'm not. I expect that few drivers (and especially everyday commuters like me) constantly drive "slowly and with a lot of care". A good enough autonomous vehicle would.
I think they are not testing with small enough particles. In the article, they test with PM 2.5 particles, which would be around 2.5 micrometers. However, if you look at the table on page 11 of
Potential bio weapons such as smallpox, anthrax, influenza and the hemorrhagic viruses are far smaller than 2.5 microns.
Also, there are probably issues with the sensitivity of the detection equipment. If you look at the table on page 8 of
And at the table at
You will see that some of the biological agents can cause infection with as few as 10 particles. I doubt that the Tesla equipment could detect a concentration of 10 particles of these sizes.
This article is basically the biological equivalent of the I can't break my own crypto article.
>Bioweapon Defense Mode is not a marketing statement, it is real.
is false. Extraordinary claims require extraordinary evidence, and the evidence of Bioweapons Defense Mode working is entirely lacking.
HEPA filters capture particulates. PM2.5 means particles up to 2.5 micrometers in diameter. A good 0.2 - 0.3 micrometer HEPA filter is fine enough to catch bacteria like anthrax. Smallpox and influenza viruses are smaller. You need a carbon absorber to be safe.
This is like thinking you will survive a storm in the middle of the ocean because you have a life jacket on you.
>"We’re trying to be a leader in apocalyptic defense scenarios," Musk continued.
Is this guy serious?
Functionality that can drive itself _most_ of the time without human intervention, and occasionally drives itself into a divider on the highway, seems like a callous disregard for human life and for how such functionality will actually be used.
Sure, everything is avoidable, if there's some expectation it needs to be avoided.
The whole point of autopilot is to avoid focusing on the road all of the time. So if you set up circumstances under which humans perceive the functionality (autopilot) as behaving as expected most of the time, it's highly likely they'll treat it as such and will succumb to a false sense of security.
My point: when a feature is life threatening your marketing fluff shouldn't deviate significantly from your legal language.
You raise an interesting point, accidents are not normally distributed throughout the driver's day, or even the population. Your likelihood of having a crash with injuries is highly correlated with whether or not you've had one before. A substantial number involve alcohol, consumed by drivers previously cited for DUIs.
We keep using average crash statistics for humans as a baseline, but that may be misleading. Some drivers may actually be increasing their risk by moving to first gen self driving technology, even while others reduce their risk.
On the other hand, we do face a real Hindenburg threat here. Zeppelins flew thousands of miles safely, and even on that disaster, many survived. But all people could think of when boarding an airship after that was Herbert Morrison's voice and flames.
I have already heard people who don't work in technology mumbling about how they think computers are far more dangerous than humans (not because of your nuanced distinction, but simply ignoring or unaware of any statistics).
I worry we're only a few high profile accidents away from total bans on driverless cars, at least in some states. Especially if every one of these is going to make great press, while almost no pure human accidents will. The availability heuristic alone will confuse people.
I'm not sure I follow you here. Are you saying that because humans are known to be fallible, but technology can be made nigh-infallible, technology should be held to a higher ethical standard?
Those two statements don't connect for me.
I suppose that's partly because I am an automation engineer, and I deal a lot with inspecting operator-assembled and machine-assembled parts. If the machine builds parts that also pass the requirements, it's good.
It's nice if it's faster or more consistent, and sure we can build machines that produce parts with unnecessarily tight tolerances, but not meeting those possibilities doesn't feel like an ethical failing to me.
Yes, not infallible, but I believe that to put our lives in the hands of machines, the technology must be at least on par with the best drivers. Being better than average, but more fallible than a good driver, is IMHO still not a good enough standard to make selling self-driving cars ethically OK, even if you get 5% fewer deaths per year compared to everybody driving while, like, writing text messages on their phones. If instead the machine is even a bit better than a high-standard driver who drives with care, at that point it makes sense, because you are no longer going to care about how the deaths are distributed across behaviors.
To be fair I rarely was doing anything useful on my 40 min train commute either :) Mostly reading Hacked News on my phone. Now I'm at least looking into the distance, taking some strain from the eyes.
I totally agree that public transport has to get better, it's just that there always has to be a mixture of transportation options.
Could the reason for this be that many other people use public transport instead, and thus the roads are far less busy?
As we have no control over how others drive, statistics are more relevant.
As a pedestrian, I care about cars being on average less likely to kill me. If it means I'm safer, I would rather the driver wasn't in control of their own safety.
The ethical solution is the one with the least overall harm.
Of course being safer overall while taking control from the driver is unlikely to drive sales.
Do these car companies test their software by just leaving it on all the time in a passive mode and measuring the deviance from the human driver?
I'd think that at least in MV case, you'd see a higher incidence of lane following difference at this spot and it would warrant an investigation.
Something like this isn't easy but for a company as ambitious as Tesla it doesn't sound unreasonable. Such a crash with such an easy repro should have been known about a long long time ago.
So, cruise control? If people got confused and thought that cruise control was more than it really was in the 80s, or whenever it came out, what would we have done?
Makes me wonder how this gets resolved without jumping through the gap to 'perfect' autopilot.
> To put it briefly, automation has made it more and more unlikely that ordinary airline pilots will ever have to face a raw crisis in flight—but also more and more unlikely that they will be able to cope with such a crisis if one arises.
https://www.vanityfair.com/news/business/2014/10/air-france-... (linked to from another HN post today: https://news.ycombinator.com/item?id=16757343 )
Tesla is just handing it out to anyone who can afford to buy a Model S/X (and now Model 3) with the absolute minimum of warnings that they can get away with.
And also the idea that an automatic driver is easier to improve over time (skewing the trade-off better and better) than human drivers. Which of course will snowball as reliance on automation keeps making drivers worse rather than better, and at some point you're past the point of no return.
Whether that's right is something the data must show.
The UK has the 2nd-lowest rate of road deaths in the world (after Sweden).
The roads in the UK are not intrinsically safe, they are very narrow both in urban and rural areas which means there are more hazards and less time to avoid them.
However, the UK has a strict driver education programme. It is not easy to pass the driving test, with some people failing multiple times. It means that people only get a license when they are ready for it. Drink-driving will also get you a prison sentence and a driving ban.
I'd also note that most European countries are hot on the heels of the UK, Sweden and Switzerland by the above measures. By comparison, the US numbers are 10.6, 12.9 and 7.1, respectively. Most European countries are well below those numbers.
Particularly in Western European and Nordic countries, the driving tests are very strict. Even for all the stereotypes, France's numbers of 5.1, 7.6 and 5.8 are quite good, and they are moving in the right direction.
I use death rate, not incidents/accidents rate.
I ignored "smaller" countries for the above listing, such as San Marino and Kiribati.
All numbers are from 2015, and they are also presented in the Wikipedia article:
IIRC, the workshop was about three hours, but it was surprisingly useful. The instructors treated you like adults and not children or criminals, and they gave fairly useful tips on driving and looking out for things like lights suddenly changing, ensuring you are in the right gear, how you're supposed to react if an emergency vehicle wants you to go forward when you're by a set of traffic lights with a camera, etc.
However, on the drink driving front, given the news with Ant from Ant and Dec I think it's safe to assume that not everyone gets a prison sentence for drink driving.
I would think to look carefully in all directions and, if visibility allows, pass the red light, then contest the fine with an "emergency vehicle passing through" defence. But what is the official position?
At least, that's how it works in Germany and Denmark. But I don't think Denmark has traffic light cameras. I've never seen them anyway. But I've seen them in Germany.
Of course, this is assuming you don't actually cross the entire junction, but rather just move out into the junction so the emergency vehicle can get through.
Although, if you are at a set of traffic lights and an emergency vehicle tries to get you to cross the line, what you should do is write down the registration plate and contact the relevant service to report the driver. The instructor on this course was ex-police, and according to him police, paramedics, and firefighters in the UK are taught to not do this under any circumstance, and if they are caught trying to persuade someone to cross a red traffic light then they can get in a lot of trouble.
The only case that trumps a red traffic light is when given a signal by an authorised person (e.g., police officers, traffic officers, etc).
I think under a literal interpretation of the law you are obliged to commit an offence if you are beckoned on across a stop line at a red traffic light; you can either refuse the instruction to be beckoned on (an offence) or you can cross the stop line (an offence). That said, there's plenty of habit of the beckoning taking precedent over the lights.
Basically the only time you see any police officer instructing traffic from a vehicle is when on a motorbike, typically when they're part of an escort.
Police cars will have dash cams, not sure on ambulances or fire engines.
That being said, scariest thing I did on the road was going through a red light to let an ambulance through at a motorway off-ramp. You better hope everyone else has heard those sirens.
The UK went through a major cultural change relating to drink driving several decades ago, it isn't viewed as acceptable, the police get tip offs on a regular basis.
In Scotland, the BAC limit is lower than in England and the punishment is a 12 month driving ban and fine for being over the limit - no grey areas or points or getting away with it.
In England a fine and penalty points are common, repeat offenders can be suspended and jailed. The severity of the punishment can often depend on how far over the limit you are and other factors.
Has he been sentenced?
Actually, paradoxically, that means they are safer. People drive slower on narrower roads, which means that accidents are within the safe energy envelope that modern cars can absorb.
Very, very few people will ever die as a passenger or driver in a car accident at 25 mph / 40 kph. At 65mph / 100kph, the story is completely different.
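The square-law relationship is the whole point here; a quick back-of-the-envelope:

    # crash energy scales with the square of speed
    v_low, v_high = 25, 65         # mph
    print((v_high / v_low) ** 2)   # ~6.8x as much energy for the car to absorb at the higher speed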
The country roads one has always dumbfounded me though - why some of those have national speed limits I will never know.
The thought of doing this now scares me and I don't do this and suggest that no one else does either. But I know many people still drive like this.
Why not? Even on roads with lower speed limits you're required to drive at a speed appropriate for the road, the conditions, and your vehicle; the speed limit merely sets an upper bound, and it's not really relevant whether it's achievable. Just look at the Isle of Man, where there is no national speed limit: most roads outside of towns have no speed limit, regardless of whether they're a narrow single-lane road or one of the largest roads on the island.
And that's the way it should be. The driving test may not be easy, but it's not any more difficult than driving is. People should be held to a high standard when controlling high speed hunks of metal.
Any statistics released by Tesla should be compared against similar statistics from say modern Audis with lane-assist and collision detection.
This could happen because Tesla vehicles are more expensive than comparable conventional vehicles, less attractive to those with risky lifestyles, inconvenient for people who don't have regular driving patterns that allow charging to be planned, or more attractive to older consumers who wish to signal different markers of status than the young go-fast crowd.
You'd possibly want to compare versus non-auto-pilot Tesla drivers on the same roads in similar conditions, but the problem remains that the situations where auto-pilot is engaged may be different from those when the driver maintains control.
In sum, it's hard to mitigate the potential confounding variables.
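As a minimal sketch of what such a like-for-like comparison could look like - assuming hypothetical per-stratum data (road type, weather, time of day) for both autopilot-on and autopilot-off miles, which as far as I know nobody has published:

    def crash_rates_by_stratum(records):
        """records: iterable of (stratum, autopilot_on, miles, crashes).
        Returns {stratum: {"on": rate, "off": rate}} in crashes per million miles,
        keeping only strata observed in both modes so the comparison is like-for-like."""
        totals = {}
        for stratum, autopilot_on, miles, crashes in records:
            mode = "on" if autopilot_on else "off"
            modes = totals.setdefault(stratum, {"on": [0.0, 0.0], "off": [0.0, 0.0]})
            modes[mode][0] += miles
            modes[mode][1] += crashes
        return {
            stratum: {mode: c[1] / c[0] * 1e6 for mode, c in modes.items()}
            for stratum, modes in totals.items()
            if modes["on"][0] > 0 and modes["off"][0] > 0
        }

Even then you'd still be fighting selection effects - drivers choose when to turn it on - but at least the road-type confound would go away.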
Yes - that's why I suggest comparing it to modern Audi drivers. Basically any of the BMW/Mercedes/Audi drivers are where Tesla is getting most of its customers. Those companies all have similar albeit less extensive safety systems.
They're brilliant because they augment humans: the human does all the driving and stays focused on that, and the system takes over only when the driver gets distracted and is about to hit something.
Autopilot does it the opposite way round. It does the driving, but not as well as the human; and then the human can't stay as alert as the car, so they aren't ready to take over.
"but autopilot still lowers the death rate on average"
That's not what they said; they said the death rate was lower than the average. And yet you can't help hearing that it lowered the death rate. I think it's very likely that turning on autopilot massively increases the rate of death for Tesla drivers, but they've managed to deflect from that so skilfully.
From the videos in the article it's clear that autopilot should be disabled when there is bright, low-angle sunshine. Tesla should be prepared to tell customers there are certain times of the day when the reliability of the software is not high enough, and turn it off.
These are all 'beta' testers after all so they shouldn't complain too much.
Imho, they should detect that situation (by using cameras and/or the current time + GPS) and not allow you to switch it on. You should not give drivers the choice between safety and laziness (this assumes that autopilot driving, when currently feasible, is SAFER than manual -- which I do assume, but which, as I read elsewhere in the thread, is not yet properly demonstrated).
I think the question of using self-driving vehicles comes down to this. Do you want to be part of a statistic that you can control, or do you want to be part of one that you cannot?
When we ride trains or planes, we are already part of statistics that we cannot control. But those things are also extremely reliable.
So there seems to be a threshold at which someone should opt into being part of a statistic they cannot control.
The people pushing SDVs, by some sleight of hand, seem to hide this aspect, and have successfully showcased raw (projected) statistics that implicitly assume a certain rate of progress with SDVs, and also assume that SDVs will continue to progress until they reach that capability...
>And also the idea that an automatic driver is easier to improve over time (skewing the trade-off better and better) than human drivers.
But there is always a risk of catastrophic regressions with each update, right?
Well, even if you are in control with your hands on the wheel, you can still get hurt (and even die) in a car accident where you have no responsibility at all. Sure, you can control your car, but you cannot control others'...
(this is even more obvious from a cyclist point of view)
Everyone everywhere says the same thing. It's information-free discourse.
With half baked self driving tech, you have absolutely no control..
I really hope it does get to the point where all drivers are automatic and interconnected. Cars could cooperate to a much higher extent, and traffic could potentially be much more efficient and safe.
Use it just for additional input to err on the side of safety. If the car ahead says "hey, I'm braking at maximum" and my car's sensors show it's starting to slow down, I can apply more immediate braking power than if I'd just detected the initial slight slowdown.
Or, "I'm a car, don't hit me!" pings might help where my car thinks it's a radar artifact, like the Tesla crash last year where it hit the semi crossing the highway.
You realize you say that in a thread about a car's sensors failing to understand it was in an unsafe condition (again), right?
We're discussing car-to-car mesh networks being used to provide self-driving cars with more decision-making input.
In other words, "all other road users should accommodate my needs just to make life a bit easier for me" is a terrible idea.
Having cars communicate information to each other has the potential to be an additional safety measure. It's like adding reflectors to bikes - adding them wasn't victim blaming, it was just an additional thing that could be added to reduce accidents.
I think you can draw some conclusions from the current software industry and see how many defects are deployed daily. I see Tesla and Uber having the same defect rates as any SV startup, not as NASA; having these startups controlling all the cars on the road sounds like a terrible idea.
Say, like computers on the Internet (information highway)?
Rings a bell.
It is certainly designed to do that, but even for airline pilots there are limits to what is possible.
You cannot train a human to react within 1ms; that's just physiologically impossible. Nor can you train a human to fully comprehend a situation within x ms, where x depends on the situation.
So the autopilot would have to warn a human say 2x ms before an event that requires attention, where it can of course not yet know of the event, so that amounts to 'any time there could possibly be an event 2x ms in the projected future'. Which is probably: most of the time. Making the autopilot useless.
In a car, the requirement is fractions of a second.
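Back-of-the-envelope, with an assumed takeover time for a disengaged human:

    speed_m_per_s = 70 * 0.44704       # 70 mph ~ 31.3 m/s
    takeover_s = 2.0                   # assumed time to notice, grab the wheel, and comprehend
    print(speed_m_per_s * takeover_s)  # ~63 m of reliable foresight needed just to hand control back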
This is a similar situation to nuclear power. With nuclear, all your waste can be kept and stored and nuclear has almost zero deaths per year. Contrast to the crazy abomination that is coal burning.
I don't think it has anything to do with technology and automation. People are just very bad at reasoning with probabilities. This is why lotteries are so popular. And it is fuelled by unscrupulous (and even less apt at grasping probabilities) media trying to make money out of sensationalist headlines. How many times a week am I told that x will cause the end of the world, where x is flu pandemics, Ebola, global warming, cancer as a result of eating meat or using smartphones, etc.
But classifying risks is one thing. Worrying in your daily life to the point of opposing a public policy is another. An asteroid wiping out life on earth is an even fatter tail risk, but the probability is so low that it is not worth worrying about. People have no rational reason to worry about Ebola, terrorism, or plane crashes.
You assume you know the probability distributions, you compare them and determine that nuclear will kill less than coal burning.
Except that you might have never observed a black swan event before (eg. massive sabotage of nuclear plants).
So the probability distributions you inferred from past events might be wrong.
The case of a lottery is different as the probability distribution of outcomes can be determined with certainty.
Certainly auto-pilot is worth it if the driver is drunk. I wish that all cars were fitted with alcohol measuring devices so that the cars won't start if you're over the limit.
Drink driving is still a massive problem in Belgium where I live. Although you get severely fined (thousands of euros), it's up to a judge to decide if you should have your licence taken away. Typically you have to sit by the side of the road for a few hours, then you're allowed to continue. In the UK it is a minimum 1 year ban.
In any case, "statistically more safe" is a weak argument; e.g. we would be terrified of boarding a plane if planes were merely statistically safer than driving (by a small margin).
Don't know why you're treating that as a hypothetical. Many people are terrified of flying and it is safer than driving.
Yes automation is good, but Tesla is being needlessly reckless. They can easily be much more strict with the settings for auto-pilot but they're not.
Can't find how many of these miles were driven on roads with a posted limit above 35mph
After years of doing this (which seems to be close to what Waymo is doing, if I understand correctly), the autonomous software should be way better than what is being pushed out now by Tesla and Uber (and probably a bunch of others).
That is, we have an accepted safety standard - by definition, the worst human driver with a license. If a Tesla is safer than that, the rest is theoretically preferences and economics.
I'm not saying that the regulators or consumers will accept that logic - if airlines are any example to go by, they'll take the opportunity to push standards higher - but I think the point is interesting and important. It is easy to smother progress if we don't acknowledge that the world is a dangerous place and everything has elements of risk.
I think the issue with nuclear power is more the size of the blast radius. For example, the U.S. population within 50 miles of the Indian Point Energy Center is >17,000,000, including New York City. The blast radius of a self-driving car is a few dozen people at best. It's not evident that nuclear technology is uniformly better than solar or wind, but it's probably expected-value positive.
Exactly this. People will not accept being hurt in accidents involving AI, but will accept being in accidents caused by human error. There is no way this will change, and car manufacturers should also have realized this by now.
As far as I can see, the only viable solution to the "not quite an autopilot" problem that Tesla (and maybe others) has created for itself is this: Just make the car sound an alarm and slow down as soon as the driver takes his eyes off the road or his hands off the wheel. The first car that doesn't do this should be one that is so good it doesn't have a driver fallback requirement at all - which I think is two decades out.
The flawed assumption behind this thinking is that all accidents have "accidental" factors. Alcohol and drugs and the poor decisions of young men are _huge_ components of that statistic. You also have to consider that motorcycles make up a not insignificant portion (12%) of those numbers.
It also completely ignores the pedestrian component (16%) which is due to a huge number of factors, of which, driver attentiveness is only one. So many pedestrians die, at night, on the sides class B highways that it's silly.
EDIT: To add, Texas has more fatalities on the roadway than California. Not per capita, _total_. This is clearly a multi-component problem.
So in fatalities statistics the fair thing is to compare Tesla with similar cars.
But safety is not in my opinion the main rationale for self driving cars. Convenience and maximisation of traffic are.
European safety tests rank pedestrian protection in addition to passenger protection https://www.euroncap.com/en/vehicle-safety/the-ratings-expla...
Outfitted Volvos will actually lift the hood in an impact to protect pedestrians https://support.volvocars.com/uk/cars/Pages/owners-manual.as...
I think the convenience will mean more traffic since many who now use public transport will use their self driving car instead. So in total autopilots could be worse for the environment and require even more roads
While there is ongoing research into novel types of batteries made from more common materials, I wouldn't be surprised if the next war is about lithium instead of oil.
But there is still an immense environmental benefit: they will pollute in places (industrial areas, mines) that are not places where people live (big cities). So the population will benefit a lot from relocating where the pollution takes place. And it's not just air: noise pollution, dirty buildings, etc.
Is that so? The larger engines in fossil power plants are more efficient than a fleet of small ICEs, but I don't know how this added efficiency compares against those inefficiencies along the way that you mentioned (and of course, a complete picture also needs to take into account the energy cost of distributing gas to cars).
The traffic issue won't actually be solved by this kind of self-driving car; you would need new infrastructure, like modern metros/trains.
The dream of fast-moving self-driving cars, like a swarm, I don't think is possible without a huge infrastructure change, and I think the swarm of cars would need a central point of control.
I do not think this will happen in an existing city for at least 20 years.
I can imagine scenarios where a glitch or something else could cause tons of traffic issues in a city with only self driving cars.
I am not against self-driving cars; I don't like the fact that these startups arrived and push alpha-quality stuff onto the public roads. It will create a bad image for the entire field.
And yes, it will create its own problems. If these cars get hacked, they will cause serious damage.
Adding some extra markings or special electronic markers for these cars is not a solution, and during those 20 years, since the traffic is mixed, you can't consider lifting the speed limits or changing the intersection rules, so I don't see how it helps traffic (except maybe on small streets, if you assume people won't want to own their car and will use any random car; these random cars would have to be super clean and cheap to make people give up owning their own).
The "current climate" is a backlash to the cavalier "move fast and break things" externality-disregarding culture of the last 10 years or so. We should be extremely conservative when it comes to tools that can potentially maim or kill other people. Not valuing human life should be an aberration.
Is it? Can it be proved controlling for all variables?
The most we have to go on for a rough approximation of safety is the nebulous and ill-defined "disengagements" in the public CA reports. From what I can tell, there's no strong algorithmic or safety analysis of these self-driving systems at all.
The climate about these things is sour because the self-driving car technology companies seem to want to spin the narrative and blame anybody but themselves for the deaths they were causing, while just praying they'll cause fewer of them once this tech goes global.
The usual description of disengagements: "Disengage for a perception discrepancy" or "Precautionary takeover to address perception".
Except when referring to others; then it's plain English: "Disengage for a recklessly behaving road user" or "Other road user behaving poorly".
> For the purposes of this section, “disengagement” means a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.
However, some self-driving car manufacturers have been testing the rules quite a lot by choosing which disengagements to report. Waymo reportedly "runs a simulation" to figure out whether to include a disengagement in its report, but there's no mention of what the simulation is or how it might fail in similar ways to the technology inside the car! Thus, the numbers in the reports are likely deflated relative to actually counting every single disengagement.
With all the games they play with the numbers, Waymo still reports 63 safety-related disengagements for a mere 352,000 miles. This doesn't sound like an acceptable level of safety.
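To put that figure in perspective, here is a rough back-of-the-envelope sketch, taking the reported numbers at face value (and keeping in mind that the true disengagement count may be higher, as noted above):

    # Rough back-of-the-envelope using the figures cited above.
    safety_disengagements = 63
    miles_driven = 352_000

    miles_per_disengagement = miles_driven / safety_disengagements
    print(f"{miles_per_disengagement:,.0f} miles per safety-related disengagement")
    # ~5,587 miles. At a typical ~13,000 miles/year of driving, that would be
    # roughly two safety-related takeovers per car per year, if the reported
    # rate held up on public roads without a trained safety driver.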
The surprising part is that it appears that Waymo is planning to start deploying their system in 2018. How can they even consider it with this amount of disengagements?
I'm cynical enough to want to wait for it to actually be that good before bestowing the marketing goodwill on the company.
"Might be someday" doesn't really cut it for me any more.
Let's wait until we get there, then evaluate the situation.
These are public roads; driving a dangerous car puts other people at risk.
Inexperienced drivers cannot be avoided, but many states try to mitigate the risk by limiting teenage drivers.
But my point is that whether to allow a vehicle that is not safer than a reasonable human driver should not be left to the car owner alone - there are other stakeholders whose interests must be taken into account.
It’s like if you said “I don’t get why people are concerned with every kid bringing a gun to school, if that would make the schools 100 times safer then shouldn’t we do it? Even if there’s an interim “learning period” where they are much less safe, doesn’t the end justify the means?”
I really can’t fathom the mindset of people who honestly believe that the only way forward with self-driving tech is to put it on the market prematurely and kill people. In 10-15 years we’ll have safe, robust systems in either case; let’s not be lenient with companies who kill customers by trying to move too fast and “win the race”. Arguing from statistics about deaths without taking responsibility into account is absurd. Would it be OK for a cigarette company to sell a product that eliminated cancer risks but randomly killed customers, as long as they did the math and showed that the total death count decreases? Hell no.
Your opinion will be irrelevant--the insurance companies get the final say. When that safety point arrives, your insurance rates will change to make manual driving unaffordable.
I see people saying that. Why would insurance be more expensive than today--implying that manual driving becomes more dangerous once self-driving cars become common? It's already a common pattern that assistive safety (and anti-theft) systems tend to carry an insurance discount versus not having them.
This means that the insurance companies with more "manual" drivers will be paying out more often and will adjust the insurance rates to compensate.
> There is additional controversy, it should be noted, about the proposed level 2. In level 2, the system drives, but it is not trusted, and so a human must always be watching and may need to intervene on very short notice -- sub-second -- to correct things if it makes a mistake. [...] As such, while some believe this level is a natural first step on the path to robocars, others believe it should not be attempted, and that only a car able to operate unsupervised without needing urgent attention should be sold.
Defining safe errors might be a challenge though.
Tesla has been criticised all along for both its marketing and allowing long periods of unattended driving assistance.
Train drivers are the closest thing and they have a lot of equipment that forces them to stay alert.
Personally I think that autopilot should only take over when the driver is unwell or when it thinks there will be a crash.
Even when driving full-time, drivers get distracted. So when the car is doing the driving, humans will get bored and won't be able to respond in time.
Autopilot however is always on and never distracted but doesn't work in some cases.
As does Cadillac's Super Cruise system which is not getting the respect it should -- including by GM itself.
Except they sometimes fall asleep:
Either it requires the driver to occasionally take over and handle a situation on short notice, then the driver should have hands on the wheel and eyes forward (and the car should ensure this by alerting and slowing down immediately if the driver isn't paying attention).
Or the driver isn't required to pay attention and take over occasionally, and then it's fine to reply to that email on the phone while driving. I see no future in which there is a possible middle ground between these two levels of autonomy.
Just like we will probably never accept (neither socially nor legally) autonomous drivers that aren't at least an order of magnitude safer than human drivers, we will likely continue to accept occasional inattention from drivers at the manual controls - but we will not accept drivers failing to pay enough attention to take over from their AI driver.
> nobody would buy a car with this deliberately annoying handling.
Exactly. And since this is the only way of making a reasonably safe level 3 car, this is also why many car manufacturers have actively chosen not to develop level 3 autonomous cars (because they are either not safe OR annoying - and either way it's a tough sell).
In the morning, I leave at 5:30am. There is constant moving traffic at 60-80mph on the first leg of my trip (6 miles). I drive manually, or with AP, and I pay full attention, 100% of this time. If I'm using AP, it's because it's at least as reliable as I am staying in lanes.
I then change interstates. I do this in manual mode almost every time. AP isn't great at dealing with changing multiple lanes quickly and tightly like traffic often requires. Once I'm on the new interstate, I get into my intended lane (second from the left), put the car in AP, and we have what you'd consider stop and go traffic for a few miles. At this point, I open the can of soda I brought. I keep a hand on the wheel, and I pay attention, but I'm more relaxed than I was earlier, when I was doing 60-80mph. At this point, the only thing that I need to do is to respond to someone jumping into my lane and cutting me off (which the AP deals with, but I have more faith in my ability to slam the brakes), or road debris, which at this speed, is not a problem that needs less than a second of response time.
There's a slow steady 40mph drive that I'm in full AP for, drinking my soda and paying attention, and then the traffic thins out, and I notice that I'm starting to lag behind cars, because my AP has a set max of 70MPH from back where the speed limit was 65mph (even though I was only going 20-40). At this point, both hands on the wheel, full attention, and I increase the AP max to 75 or 80, depending on how much traffic I'm in. I switch it back over to manual to make the lane changes necessary to hit my exit, and I'm manual until I park my car at work.
On the way home, I'm in stop and go traffic for an hour. When I'm 'going', I'm going 5-10mph. I am in full AP mode 95% of this time, and I could take a nap at this point, and it wouldn't actually be unsafe. I'm safer with AP than manual at this point, because my attention fades if not and I could drift lanes, or bump the car in front of me. Which I've seen happen to other people countless times, and which just increases the amount of time everyone else is in traffic, too.
Even the most egregious lane jumper can't get into my lane too fast for AP at this stage of my commute. I just set the follow distance to 2, and listen to audio books while I browse twitter or facebook. I look at what's going on out the windshield, but it's virtually unchanging. Like the thousands of people surrounding me, I'm slowly creeping forward, waiting on the 20-30 miles to pass. For an hour and a half.
The bottom line is that people don’t accept any risk at all involving autonomous driving - regardless of whether the alternative/old tech was worse. So, to put it very bluntly, people accept being hit by a texting person not paying attention for 1 second. People don’t accept being hit by a person in a level 3 autonomous vehicle not paying attention for 10 seconds - and that’s regardless of the relative safety of the two systems.
Obviously if you use Autopilot in bumper to bumper traffic this is an improvement, and a huge improvement over texting while manually driving. But texting at highway speed is thankfully rare when manually driving and should be just as outlawed with AP.
That might not be worth the full cost of autopilot (particularly compared with lane keepers), but it's a thing.
I should not need to trust you -- if I trusted you, why would we need autopilot at all? Humans are either qualified to drive cars or they aren't. If they are qualified, we don't need autopilot. If they aren't qualified, autopilot should never require human intervention. What we have now is a half-measure that assumes that neither humans nor autopilots can be trusted, but that some combination of those two untrustworthy parties can somehow be trusted. It doesn't make any sense.
edit: corrected a typo
Also, in response to the statistics you cite: I wouldn't expect the statistics to be that far off the average, because AutoPilot is limited in its possible use cases at the moment. Even in cars equipped with AutoPilot, people are always still driving the car for at least some part of the trip. Therefore I wouldn't expect the impact of AutoPilot to lead to a significant deviation from the average. Plus, I'd speculate that the crash rate per mile is probably higher on a Tesla than on an average car -- I'm thinking of something like a Honda Civic. Faster cars probably get in more crashes, right? Maybe not, who knows. Regardless, it should be possible to control for this and assess the effect of AutoPilot on per-mile crash rates by comparing rates for Teslas with and without AutoPilot. This is somewhat complicated by the fact that even Teslas without AutoPilot have automatic collision avoidance, but it should shed some light on whether AutoPilot is making people crash more or less.
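As a sketch of what that comparison might look like, assuming per-mile crash counts were available for both groups (all numbers below are made-up placeholders, not real data):

    from math import sqrt

    # Hypothetical, made-up counts -- NOT real data.
    crashes_ap, miles_ap = 40, 200_000_000          # AutoPilot engaged
    crashes_manual, miles_manual = 90, 300_000_000  # same fleet, AutoPilot off

    rate_ap = crashes_ap / miles_ap
    rate_manual = crashes_manual / miles_manual
    print(f"AP engaged: {rate_ap * 1e6:.2f} crashes per million miles")
    print(f"Manual:     {rate_manual * 1e6:.2f} crashes per million miles")

    # Crude Poisson check of whether the difference exceeds the noise.
    # A real analysis would also control for road type, speed, weather,
    # and the collision-avoidance features non-AP Teslas already have.
    ratio = rate_ap / rate_manual
    se_log_ratio = sqrt(1 / crashes_ap + 1 / crashes_manual)
    print(f"rate ratio ~{ratio:.2f} (+/- ~{se_log_ratio:.2f} on the log scale)")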
Whether you trust others is immaterial; statistics will be the final arbiter. If Autopilot still causes fatal accidents, but fewer fatal accidents than humans alone, how could you argue against such a safety system? What of the lives saved that wouldn’t have been, if we demand an entirely fault-proof system prior to implementation? Who are you (not you specifically, but the aggregate) to take those lives away because of irrationality?
This claim was immediately called into suspicion when it was first published, and the NHTSA is currently facing a FOIA lawsuit for refusing to release data to independent researchers.
You believe independent verification of study results are "immaterial"? OK.
The NHTSA made a determination and I’m using it as a data point.
So does any car with automatic emergency braking and/or forward collision warning (links below).
The problem here is, this Tesla drove head-on into the gore point. And previously, it drove right into the side of a huge truck.
So... can it be trusted? Your call.
IIHS data show rear-end collisions are cut by 50 percent on vehicles with AEB and FCW.
A lot of human-led accidents sound just as silly when stated this simply, so it's not relevant to the comparison.
More to the point, I think each of the deaths would be exactly as tragic, no more no less, if the sequence of events was complicated.
It may be irrational, but I can forgive a human, I cannot forgive an AI.
I suspect the unfortunate reality is people die on the journey to improvement. Once we decide this technology has passed a reasonable threshold, it becomes a case of continual improvement. And as horrible as this sounds, this is not new. Consider how much safer cars are than 20 years ago and how many lives modern safety features save. Or go for a drive in a 50-year-old sports car and see how safe you feel. And in 50 years we'll look back at today's systems the same way we look at the days of no seat belts.
The very challenging, and somewhat unusual given the life/death stakes, aspect of this domain in general is that general 'easy' logic works for 99% of the scenarios. But in the remaining 1% -- and I mean every single possible last scenario, whether probable or not -- the "what to do" is so much more difficult. Enough time to do it? Can be done safely? Etc.
The reason the car turns control back over to the human at all is because the vehicle presumably does NOT have any maneuvers left which are high-confidence -- whether that's because the sensing equipment failed, the software failed or failed to interpret the data satisfactorily, or there's just no "good move" left, or any other number of reasons, etc...
Seriously, I slow down when I am unsure of the conditions; I surely have never evaluated every last scenario, and I have never run into a concrete barrier.
Mental experiment: which is preferable - colliding with a static barrier at an unchecked 70MPH or getting rammed from behind by a truck at a relative speed of 30MPH (assuming that's how much you've braked by)?
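Crude numbers for that choice, assuming impact severity scales roughly with the square of impact speed (kinetic energy), and ignoring masses, crumple zones, and everything else that matters:

    # Back-of-the-envelope: impact energy scales with speed squared.
    def relative_severity(speed_mph: float) -> float:
        return speed_mph ** 2

    barrier = relative_severity(70)    # head-on into a static barrier at 70 mph
    rear_end = relative_severity(30)   # rear-ended at a relative 30 mph
    print(f"barrier hit carries ~{barrier / rear_end:.1f}x the energy")
    # ~5.4x -- which is why braking hard, even at the risk of being rear-ended,
    # is usually the less bad of the two options.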
That said - slowing down =/= emergency stop - that truck behind you should leave enough room in front of it to come to a complete stop - unless you've cut in front of it. I'm probably ranting, but I'll repeat this as I've witnessed it far too many times: do not cut in front of loaded semi-trucks. The gap in front of them is intentional and it wasn't meant for you - it's their braking distance - they have ridiculous momentum and they can squash you!
OTOH, the car has probably saved me from at least one accident where traffic ahead of me suddenly slowed down right as my attention relaxed. The _only_ correct way of using these systems is to treat them as an extra level of safety on top of your responsibilities as a driver. An "extra set of eyes" to help you avoid an accident while leaving you as the primary driver of the vehicle.
Human+automation is very powerful! However, I don't think we are far from self-driving beating human alone.
It's not perfect but it sure beats the driver being so used to it they start doing other things like watching movies on their laptop.
30 seconds is a HUGE amount of time to react on a highway. You'll be long dead by the time another 25 seconds pass if the accident happens just 5 seconds after you took your hands off the wheel. And this is exactly what happened in the recent crash.
Also, if Tesla intends to shield itself behind a beta status for their autopilot system until it is perfect, I think this beta status will last even longer than GMail's did. At the very least, they should own this problem and hardcode a meaningful warning at such intersections, or do something.
The question seems to be which is better:
1) A car very intelligently helping a human drive better.
2) Or a car mostly driving well itself but needing humans to very rarely override fatal crashes.
I prefer 1)
I'm not sure there are any studies comparing these things, since "human driving but lots of extra tech help" is harder to quantify and doesn't have all the data going back to the mothership like Tesla.
If they want beta software on a private track, that's their (and their insurance company's) problem.
Different environment (and stakes), but I observed the same thing happening a couple of times during IT incident response. The automation crashed/failed/got into a situation it could not automatically fix, and the occurrence was rare enough that people actually panicked and it took them quite a while to fix what in the end was a simple issue. They just didn't have the "manual skill" (in one case, the tools to actually go and solve the issue without manually manipulating databases) anymore.
I don't trust anything from Elon anymore; big talker and lots of hype.
Not to mention there are things most people do defensively while driving that autopilot doesn't: anticipating a vehicle coming into my lane by looking at both its wheel position and its lateral movement. Autopilot ignores those pieces of information until the vehicle is basically in your lane.
Personally, I feel I have to be more on guard and attentive with it on, because I know there are fatal bugs.
It seems like AutoPilot users take their hands off the wheel regularly, and I just don't see how that is safe with this type of system.
Isn't it true that the thing cannot detect a stationary object in its path if the vehicle is traveling at above 50kph? If that is true, then I think this is an extremely dangerous situation that the owners of these vehicles are in...
That's what 'too good' is supposed to be.
The solution is easy, but probably not what many would like to hear: ban all "self-driving" or "near-self-driving" solutions from being activated by the driver, and only allow the ones that have gone through very rigorous and extensive government testing (in a standardized test).
When the government certifies the car for Level 4 or Level 5, and assuming the standardized test has strict requirements like 1 intervention at 100,000 miles or whatever is something around 10-100x better than our best drivers, then the car can be allowed on the road.
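For a rough sense of what demonstrating such a bar would take, a minimal sketch using the statistical "rule of three" (observing zero events in n trials puts the ~95% upper bound on the event rate at about 3/n):

    # How many intervention-free test miles would certify
    # "at most 1 intervention per 100,000 miles" at ~95% confidence?
    target_rate = 1 / 100_000        # interventions per mile (the proposed bar)
    miles_needed = 3 / target_rate   # rule of three: zero events over 3/rate trials
    print(f"~{miles_needed:,.0f} intervention-free test miles")  # ~300,000 miles
    # Any intervention observed during the test pushes the required mileage
    # higher still, so a certification regime like this implies very
    # large-scale standardized testing.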
Any lesser technology can still be used in other cars, but instead of being a "self-driving mode" it should actually just assist you, in the sense of auto-braking when an imminent accident is about to happen, or maybe just warning you that an imminent accident is about to happen, which could still significantly improve driver safety on the road until we figure out Level 5 cars.