In 2017 there were zero reported accidental deaths on commercial flights. Compare that to 1972, one of the most dangerous years to fly, when more than 2,300 such deaths occurred.
How did we go from thousands of deaths to zero? Primarily through crashing. Then working to evaluate what went wrong and improve the design, processes, and programs around commercial aviation.
As one pilot explains:
"One of the ways we’ve become so safe was to realize that our efforts were never going to be good enough."
So yes, I think technologies like autopilot will save lives. But it won't do it magically; unfortunately, some harm will occur because we—in all our human wisdom—can never accurately predict all possibilities. It's the same problem most self-driving companies are facing today, and the reason promises of self-driving being here today have fallen short. The world is far, far more complex than we realize... especially when we put a computer out into it and tell it to "go forth."
If we look at the history of airline safety, going back further, we can see that it took many years and numerous crashes to get to the point where it became well regulated and crashes were independently investigated. Before government stepped in, private companies weren't doing a great job and their biases (inc. bad headlines/liability/cost) got in the way of improving passenger safety.
So let's absolutely keep with the airline analogy and regulate Tesla's AutoPilot like the airline industry.
We need to have the appropriate balance.
No, the reduction in deaths was through not crashing.
The airline industry would not have got safer faster by deliberately doing more crashing. "Okay, we need to increase safety, so let's crash as much as we can!" is something they didn't say, because that wouldn't have made any sense.
What you've done here, and the reason you've been upvoted to the top spot, is apply the Silicon Valley mantra of "failure is good" to airlines, and by extension, to autonomous cars. That mantra isn't really even true for software - it's not that failures are good, it's that learning is good - but it just about works in a world where people can turn free VC dollars into Medium posts and acqui-hires by failing, and hence where you might as well learn that way rather than proceeding slowly with painstaking care. That approach doesn't work in worlds where failures are written in blood.
Airplanes never had airbags, crumple zones, etc. The odds of surviving a car crash are much higher than odds of surviving a plane crash. It seems therefore that testing in cars can be done with lower risk of loss of life. Yes, there will be fender benders and some deaths.
NASA is horrible at keeping astronauts alive. Worse than the Soviets/Russians, as far as I know.
Apollo 1 and the two Shuttle disasters do not paint a good picture for NASA's safety in practice.
Here's an example of one of their famous investigative reports on a Tesla Model S crash while operating under AutoPilot back in 2016:
Scenarios where a plane would refuse to take off or divert to another airport... are conditions where employers would still expect their employees to get to work, or their deliveries to arrive.
That said, I'm totally fine with a self-driving car that gives up when it can't handle something. As long as it can safely remove itself from traffic and come to a stop (e.g. pull over to the side of the road), then alert me that I need to take over, that's fine.
If NHTSA mandates driver attention management systems in all cars, then this calculus changes. If the driver attention management system is only required while AutoPilot is on, then the requirement is actually counter-productive to overall safety, i.e. it results in increased deaths.
Extreme but (hopefully) relevant counter-example: was the killing of tens of thousands of people (soldiers and civilians) by American citizens in Europe during World War II justified? This is a very complex question, and any reasonable answer will be even more complex and nuanced, so I'm not really asking anyone to directly answer it in this venue.
To be clear, I'm not talking about Tesla here, only the first sentence in your response, in isolation.
I think there are some circumstances where accepting some deaths as payment for far fewer deaths later is morally and ethically sound and defensible.
If we can agree to that, then we can move on and analyze what Tesla is trying to do.
But this is not a war in any way - there are already options in place to save lives: better policing, stricter speed limits, safer vehicles/roads/crossings. Allowing new technology on the road without the proper safeguards (and, in my opinion, the abhorrent justification that a few unlucky people might just get run over for the greater good) is not one of these options.
The "dangerous self driving cars on the road" option is only helping the pockets of Tesla's investors, not the public.
I'm not trying to invoke Godwin here: just that war happens to be somewhat hard to compare - "people die, ergo same thing" seems a rather forced comparison.
Perhaps the only fitting parallel seems "history will be written by the victors."
I think everyone wants better transportation and fewer deaths. But 0 deaths with an emerging driving technology sounds really hard. There are just so many edge cases and physics is pretty unforgiving on our bodies past 40mph.
There's a lot of subtle interaction, and I have yet to see how AI-driven cars will replicate it, or at least account for it.
Also, many states have regulations surrounding which colors of lights can go where on a car for these sorts of reasons - unambiguous and predictable communication is key to driving, and putting what looks like a brake light on the front of your car is confusing.
I'd change "don't need to" to "shouldn't have to".
There are a lot of dead pedestrians who didn't look before crossing a street because they knew they had the right of way.
It's just another form of "trust but verify". I know when I'm crossing a busy intersection, even if the cross-traffic has a red light and I have a walk sign, I'm closely watching the on-coming cars to make sure none of them are running the red light. I'm extra vigilant for the people making a right turn on red.
Edit: I'm not saying AI/ML will be racist (although it hasn't shown a great track record so far). I'm saying public sentiment, given the current state of media alarmism, is going to jump at the first hint of any sort of detection bias, whether it actually exists or not.
Imagine that a car swerves around a pedestrian A who happens to be in light blue jeans and a white jacket and strikes an undetected pedestrian B who happens to be in black pants and a black jacket. It seems easy to imagine how that could happen to any driver, right?
Now, imagine that it's an autonomous car, pedestrian A happens to be white, and pedestrian B happens to be black. Boom, there's your scandalous story: an autonomous car just swerved to avoid a white pedestrian and struck a black pedestrian.
But there is a bit more to it. The A or B side-by-side is the most simple comparison to make the point.
Some of the same dangers that are in play now might still be risks as AI cars become more common. For instance, minority neighborhoods tend to not have as many street lights at night. The cars might be smart enough to compensate. But if they aren't, AI has the potential of falling victim to statistical correlation.
Headline by click bait reporter: "Studies show self-driving cars kill minorities at higher than average rates"
It doesn't matter if the real cause was street lights.
The implication will suffice, in light of the public/media's understanding of ml / ai. They will think the car "chose" which person to hit.
While I don’t think autonomous cars are a particularly good investment for the future of transportation, does anyone doubt that with massive, publicly-funded investment we could achieve them? And what’s more, on terms where whatever “costs” were associated with them were subject to democratic checks and debate?
The question shouldn’t be whether autonomous cars will kill people, but whether they’ll kill more people than human drivers. If they'll kill fewer people, then on net they're saving lives, not "sacrificing" them.
Is it ethical to build something knowing it will definitely kill random people until you perfect the technology? Will auto makers be held accountable for any wrongful deaths caused by bugs?
If your wife or child was mowed down by a normal car, would you be more supportive of autonomous ones?
If your wife or child wasn’t mowed down because the car was autonomous rather than manual (or vice-versa), would you even notice?
Yet which of human or AI drivers gets banned by 2040 depends on convincing people like you — assuming I am inferring correctly that your question is rhetorical and the answer is “no, obviously”.
I don't know if that will be enough to make the difference, but I can certainly see how it could be.
You're still making a prediction about the future. Some predictions are safer than others: I predict I will see the sun rise tomorrow. Some predictions have little rational basis in reality: I predict you will agree with me. Your prediction about autonomous cars being safer than human drivers? That falls somewhere in the middle.
From my perspective, all computer vision systems are abysmal crap that spend orders of magnitude more power to accomplish results many orders of magnitude less impressive than what the human brain is capable of. The human brain uses something like 10-20 watts; how much CV/ML can you do with 10-20 watts? Not much at all. There is clearly a huge disconnect between our current approach to solving the problem technologically and how the evolutionary wetware solves it. That gap may be bridged sometime in the future, but I'm not holding my breath. I think the problem is being approached from the wrong direction, like trying to drive a screw with a hammer. Maybe we'll still get the job done, but I do not consider that a foregone conclusion.
I just hope that, if the time comes, the decision is based on comparative improvement, instead of hysterics about whether self driving cars will ever kill anyone ever.
* A car can spare 100x that many watts.
* Why would we even want cars to calculate the way humans do? It's a very clearly flawed method.
> Why would we even want cars to calculate the way humans do?
The fact of the matter is that humans are orders of magnitude better at object recognition than computers. You may believe that will change in the near future, but that's not yet been demonstrated.
Don’t get me wrong, I am aware that current generation AI is fundamentally worse than a human brain, because if it was already as good as a human then the fact Tesla’s AI has done more miles than the average person has had heartbeats would make it essentially perfect. It isn’t perfect, so I know the AI is flawed.
But my question is, does it seem to you that this approach cannot ever become the equal of the best human (if that best human had 360 degree vision, sonar, and radar, and their nerve impulse conduction speed was way faster), and if not, why not?
Yeah, it's an interesting dichotomy. I've been to funerals for people mowed down by "normal" cars. I know the stats. I'm more supportive of autonomous cars.
I also know that if the situation were reversed and the people closest to me were affected by autonomous vehicle accidents, I might have a strong emotional reaction in the other direction.
The human mind just isn't rational when it comes to these things. It will take real leadership and risk taking on the part of manufacturers to get past this impasse.
Watch the show, it's great.
There are ~1.25 million car deaths per year (20-50 million injured or disabled), a lot of room for "progress" to save countless lives.
The vast majority of those occur in areas that will not see autonomous vehicles for a very long time, and have a lot of environmental challenges that make it much harder for an autonomous vehicle to outperform a human.
It’s true that the vast majority of cars won’t be autonomous for “a very long time” so the vast majority of deaths won’t be prevented. But what’s your point? That if the vast majority of the problem can’t be addressed quickly it’s not worth exploring?
On a quick search, 30% of fatalities involve alcohol. 7% involve rain (the most common weather condition involved in crashes).
Cars don’t actually need to be better than humans, or even as good as the average human. They just need to be better than the bottom 5th percentile, prefer safe driving, and not get distracted/drunk.
My point is that trotting out the 1.25M number when we are very far away from dealing with the vast majority of it is distracting at best. We should instead be talking about the numbers of injuries and fatalities in the countries where autonomous vehicles are most likely to be deployed, and how many of those people could have been saved.
> Cars don’t actually need to be better than humans, or even as good as the average human. They just need to be better than the bottom 5th percentile, prefer safe driving, and not get distracted/drunk.
That's very true, but orthogonal to the point of using realistic numbers.
I did not say or imply that.
> The point is thousands of people die per day at the hands of human drivers.
Yes, and thousands will die every day for quite a long time. You make it sound like autonomous vehicles are going to make a significant dent in that number.
> The sooner we can transition to autonomous vehicles, the sooner they'll start saving lives.
This is quite true. It will just be limited to lives in developed nations and a few select other major cities.
Personally (and I tend to believe I'm not very different from average), I don't just want a lower chance of being hit. I want more control and awareness, which can reduce risk fundamentally. I know that accidents happen, but I also know that the odds grow by orders of magnitude if I fail to detect dangers and/or make predictions. Malfunctioning traffic lights can do much more harm than your average drunk driver. Not seeing someone's intent (or steering/braking problem) does that too. If autopilots give me a way to reduce risks by clear means on my side, I would not mind [with a great share of egoism, but that's how it works] if on average they harmed crazy leapers even more than human drivers do. The road is a source of danger (due to natural physical/perception failures), and by skipping safety checks those people enter a potential chaos which is un-manageable by definition. Heck, I'd rather coexist with a tech that never gets tired of sending me clear "fuck off" messages than with human drivers who, e.g., think they are smart enough not to brake because "our current inertial frames render a collision unlikely anyway," or whatever bs they believe when doing this. But it's a false negative for me: I don't know for sure if they see me at all, or whether they estimate their reaction time correctly.
> By this last estimate, Autopilot has about the same to 35% lower fatal crash rates than any conventional vehicle at this time.
We've got very accurate counts. To say countless is hyperbole.
>If they'll kill fewer people
Everyone assumes ML / AI will be better drivers than humans, but I think that's just blind hope at this point.
I've seen no proof or studies that show any better logic than fusion power being about 20 years away (indefinitely).
> We've got very accurate counts.
Pointless rebuttal; the exact number isn't relevant, nor is it easily quoted, as the number of deaths per day, month, or year is fluid - evident in that you've made no attempt to quantify an "accurate count" yourself.
> Everyone assumes ML / AI will be better drivers than humans, but I think that's just blind hope at this point.
You call someone out for imprecision, then follow up with the baseless supposition that people who believe autonomous drivers are safer than human drivers are blindly hopeful.
You can include the NHTSA and the U.S. Department of Transportation in your blind hopefuls who are keen on moving towards an automated driving future:
NHTSA is no different than the DOE stating its (thus far inaccurate) expectations for fusion. Government agencies can also express hope.
>The continuing evolution of automotive technology aims to deliver even greater safety benefits
They have a very well thought out timeline with goals and expectations. But there's no real evidence of why we should expect to hit level 5 driving in a given time frame.
It's still just a goal they hope (aim) to hit.
Unless there's a Moore's law of AI driving ability that I'm unaware of.
I'm sorry if I caused any confusion. I meant "countless" simply as in "more than a person could reasonably count". The usage I'm familiar with appears to be different from yours.
It's not hyperbolic to use "countless" when it's one of the top 10 causes of death.
> NHTSA is no different than the DOE stating...
You're really into your inaccurate straw men.
2) I'm not the one making the claim that a revolutionary technology that will save millions of lives is just around the corner.
Will progress be made? Probably. Will an army of driverless cars be safer than the same number of humans driving in X years? That's a bold statement that requires an explanation beyond "Because technology progresses."
I think the comparison to fusion is quite apt at this point. We can re-evaluate in 20-30 years.
Using the word "countless" to describe one of the leading causes of death is not close to a valid example of hyperbolic exaggeration.
> 2) I'm not the one making the claim that a revolutionary technology that will save millions of lives is just around the corner.
You're the only one making proclamations: criticizing others for not using exact numbers you won't quote yourself, and proclaiming that some of the smartest engineers in the world and major motor-vehicle institutions must be blind hopefuls for investing their efforts in automated-driving solutions to save lives.
> I think the comparison to fusion is quite adept at this point. We can re-evaluate in 20-30 years.
Says a lot that you think an unproven future technology is somehow an apt comparison to actual technology that's driven over 1B miles.
Tesla's AutoPilot results for Q2 2019 show 1 accident per 3.27 million miles, vs. NHTSA's recent statistics showing 1 accident per 498k miles: https://www.tesla.com/VehicleSafetyReport
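A back-of-the-envelope ratio of those two figures (keeping in mind, as others in this thread note, that the populations aren't directly comparable - Autopilot miles are disproportionately highway miles in newer cars):

```python
# Compare the two accident rates quoted above.
# Caveat: Autopilot miles are mostly highway miles in newer, safer cars,
# so this ratio overstates any like-for-like advantage.
autopilot_miles_per_accident = 3.27e6  # Tesla Q2 2019 safety report figure
nhtsa_miles_per_accident = 498e3       # NHTSA figure quoted above

ratio = autopilot_miles_per_accident / nhtsa_miles_per_accident
print(f"Roughly {ratio:.1f}x more miles between accidents on Autopilot")
```

The headline "6.6x" number is exactly this division, nothing more.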
The state of automation is meaningful today. It's not blind hope to presume actively developed technology will improve even more with time; it would be naive to think otherwise and suggest FSD needs another 20-30 years to be evaluated for viability.
Not that level 5 is necessary. A good level 3 would make a huge difference. If it can self-park too then it's basically everything I want.
It seems like even Tesla themselves have trouble forecasting more than +/- 6 months out.
Predicting technological progress is hard, both in terms of timing and absolute ability. And this is before you even get into predicting the market success of a shippable product.
Then you should be _very_ sceptical of the current state of affairs
This is a fairly nuanced Bloomberg article. You're making it sound like Elon himself is proclaiming that some sacrifices must be made for the sake of his self-driving Teslas.
Far better to have someone else do the job. Independently, of course: it's definitely NOT Elon Musk saying that, no no no. Nod, nod, wink, wink.
Do we stop using seatbelts until they are 100% effective? Was it a mistake to start putting seatbelts in cars before it could be proven that they absolutely wouldn't kill anyone?
If you are trying to attain maximum safety for a belted passenger, then no, current passenger seatbelts are not optimal. This is clearly evident from the far better seatbelts and driver safety devices used in motorsport.
Current seatbelts represent a tradeoff between safety if you use it, and how annoying they are to use (and therefore, how likely people are to get annoyed and not use them).
Say someone gets hurt by a seatbelt. If it's designed exactly as required by regulation, there is immunity for the company (IANAL, so I don't know the legal term for this).
If you try to make it better, and do make it better for most cases but worse for some edge case, then you have a liability issue, in that your 'negligence' caused injury that wouldn't have happened otherwise.
The stance on civil liability in America often stifles innovation in many areas regarding life safety. Tesla is learning this quickly.
Yes, I doubt it. The scale of the problem is massively bigger than anybody will admit right now. To get to the point where an automated car will kill fewer people than human drivers, the car must have an understanding of its environment. Machine learning isn't going to get us there.
It's like questioning the existence of their God.
Humans are not as hard to beat as you think they are. They have lots of limitations:
- Can only see in a single direction at a time.
- Vision-only, no radar
- Prone to becoming emotional
- Drive in unfamiliar places where they do not have a full understanding of the environment or local laws
- Can be inebriated, easily distracted, fall asleep
- Selfish: time to destination is often more important than the safety of others
- Majority of people do not follow all traffic laws (speed limits, turn signals, lane-usage)
Are you sure about that?
> Machine learning isn't going to get us there
Do you mean ANNs aren't going to get us there?
Evolving FPGAs (by ignoring the digital abstraction) is a window into why.
Let's look at the scale. A brain neuron is 4 to 100 microns in size, whereas transistor sizes are now pushing 3 nm. That's a lot of free space to work with - why can't we easily simulate many neurons within the space of a biological neuron?
Consider the 3-body problem. Without a math breakthrough, we can only approximate, and even with a math breakthrough, we can still only approximate (because real systems are not closed). This idea that we can simulate something misses the point. It's always going to be a guess, on bounded memory, because we can't copy real states.
Letting go of the digital abstraction provides a window into why this is hard.
These circuits evolve to become hyper-specific to their environment. We have no hope of repeating those results with software; it's the physical attributes of the hardware that are being exploited.
We were able to solve the n-body problem with sufficient accuracy for Voyager to successfully encounter and gravity-slingshot around Jupiter twice and every other outer planet, with only the computing power available in 1977.
The mechanism of action of a neuron is not very complex. It follows the all-or-none principle along with an activation potential which is easily emulated in a binary circuit. It does have additional chemical inputs which can raise or lower activation potential. The only difficult part is sufficient connectivity, but we can simulate that, no need to use an FPGA to get actual connectivity. Modern transistors are super-fast and super-small compared to neurons so you have quite a bit of room to sum inputs, check activation energy, check chemical levels, and lookup + signal the 'connected' neurons.
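The all-or-none behavior described above can be sketched as a toy threshold unit. This is a deliberately simplified illustration of the mechanism the comment describes, not a biologically faithful model; the weights, threshold, and "modulator" input are made up:

```python
def neuron(inputs, weights, threshold, modulator=0.0):
    """Toy all-or-none unit: fires (1) iff the weighted input sum reaches
    a threshold. The 'modulator' stands in for chemical inputs that raise
    or lower the effective activation threshold. Not biologically faithful."""
    potential = sum(i * w for i, w in zip(inputs, weights))
    return 1 if potential >= threshold - modulator else 0

# Weighted sum here is 1*0.4 + 0*0.9 + 1*0.3 = 0.7
print(neuron([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.6))  # 1 (fires)
print(neuron([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.8))  # 0 (silent)
print(neuron([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.8, modulator=0.2))  # 1
```

The hard part, as the comment says, is not this unit but wiring up billions of them with realistic connectivity.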
Possibly, but the far harder thing to achieve is massive publicly-funded investment in the first place.
Yes, I very much doubt it in the US at least. We are many generations deep into an extremely nepotistic and corrupt bureaucracy. I have no faith that the US Federal Government is capable of handling even mundane projects at this point. We would require a serious shift in patriotism, education and accountability before I could believe the USA of 2020 is capable of doing something like the moon landing again.
Why does government research always seem to become a corrupt, inefficient bureaucracy the moment that private industry runs off with its solutions to profit from?
As a result, in the 50s and 60s, there was no better job to be had than working for the government. Astronauts were like the modern day equivalent of rock stars who also started Google. It wasn't the money, but the pride of being a part of yet another great accomplishment that the greatest nation in history was embarking on. But that's gone today.
We still had some of this going into the early 90s, and DARPA and the NSA I'm sure could attract and underpay enough talent to make up for other government bloat. But in 2019, we have lost this national opinion (or delusion).
Why don't we see any fully autonomous buses? You could run buses 24/7 without worrying about drivers. The difference: people aren't going to pay 50k for a bus seat.
So, the headline of the article is wrong IMO, but in the article the author makes the correct points.
Tesla Autopilot has driven in total 1.6 billion miles on highways, in clear conditions, in modern high-crash-safety cars. There have been 3 fatalities.
The equivalent number for 1.6 billion miles across all road types, all weather conditions and all types of vehicles would be 22 deaths in the US. But in the UK that number would only be 8, illustrating that safety is tightly linked with road conditions and vehicle demographics.
As the Autopilot is only used in the easiest conditions and safest cars, there is simply not enough statistics to say whether they are safer or not.
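The figures above convert to the usual deaths-per-100-million-miles unit by straight division (no correction for the highway-only, newer-car caveats just mentioned):

```python
# Implied fatality rates per 100 million miles, from the figures above.
MILES = 1.6e9  # total Autopilot miles cited above

def per_100m(fatalities, miles=MILES):
    """Deaths per 100 million vehicle miles."""
    return fatalities / miles * 1e8

print(f"Autopilot:   {per_100m(3):.3f} deaths per 100M miles")   # 0.188
print(f"US baseline: {per_100m(22):.3f} deaths per 100M miles")  # 1.375
print(f"UK baseline: {per_100m(8):.3f} deaths per 100M miles")   # 0.500
```

The raw Autopilot number looks better than both baselines, which is exactly why the demographic caveats matter: it isn't an apples-to-apples comparison.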
You would have to collect data on a single version of the system for half a decade before you could say anything certain.
Congratulations - Tesla has almost a morning's worth of driving data.
And yet somehow with all that data they can't make a car that can drive itself in a parking lot.
At what point do we have enough data?
The gold standard for other safety critical systems like medical devices, commercial aviation etc. is less than one fatality per billion operating hours. 50 mph avg gets you 50 billion miles. A vision for car safety that doesn't approach this level isn't much to write home about, it would in fact be difficult to distinguish from just waiting 15 years to let the statistics incorporate the safety systems that already exist today.
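The unit conversion behind that benchmark, sketched out (the 50 mph average speed is the assumption stated above):

```python
# "One fatality per billion operating hours" from other safety-critical
# industries, converted to miles at an assumed 50 mph average speed.
hours_per_fatality = 1e9
avg_speed_mph = 50
miles_per_fatality = hours_per_fatality * avg_speed_mph  # 50 billion miles

# Expressed in the usual deaths-per-100-million-miles unit:
gold_standard_rate = 1e8 / miles_per_fatality  # 0.002
print(miles_per_fatality, gold_standard_rate)
```

That 0.002 deaths per 100M miles makes the gap to current road-fatality rates easy to see.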
Edit: Forgot to add year.
For instance the UK is already below 0.5 fatalities per 100 million vehicle miles. Car demographics matter a lot.
So if autonomous is going to give us a significant improvement, it has to be something like 50x better than the average for humans today.
PS: I meant 100 million miles in my comment above as you probably assumed :-)
But you can probably get a good estimate from extrapolating the red curve in this plot using a power law:
Also you can look at the safest countries worldwide, compared to the US, to get a feel for what's achievable with today's tech. Looks like Norway is the safest, with 4x better safety statistics than the US. That's probably a combination of better driver education and safer vehicles, so not straightforward to compare. So really, you would need something like a 50x improvement over current US rates for it to be a notable achievement.
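For what it's worth, the extrapolation idea mentioned above can be sketched as fitting rate = a * t^b by least squares in log-log space. The data points below are invented purely for illustration; they are not the actual values from the plot being discussed:

```python
import math

# Fit a power law rate(t) = a * t**b to fatality rates over time,
# via least squares on log-transformed data. The numbers here are
# INVENTED for illustration, not real road-safety data.
t = [1, 5, 10, 20, 40]                 # years since a baseline
rate = [5.0, 2.4, 1.7, 1.2, 0.85]      # deaths per 100M miles (made up)

xs = [math.log(v) for v in t]
ys = [math.log(v) for v in rate]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)

print(f"fitted: rate(t) ≈ {a:.2f} * t**{b:.2f}")
print(f"extrapolated to t=60: {a * 60 ** b:.2f} deaths per 100M miles")
```

A negative exponent b means the fitted rate keeps falling, which is the "statistics incorporate today's safety systems anyway" point made earlier.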
2. I see no evidence that "massive, publicly-funded investment" would achieve this technological goal faster or better than via capitalistic means. Do you have any?
3. Lives will be lost regardless of how the tech is developed. Surely this is obvious. So what are you even talking about?
This technology is still evolving, though, and the law will need to be adjusted accordingly.
Not functionally, consistently, or meaningfully. Many of these companies have learned how to ingratiate themselves to party machines in the markets where they operate, such that this decision-making is almost entirely excluded from public check or debate. Others, like Uber, willfully break laws with little consequence, for much the same reason as the above.
>2. I see no evidence that "massive, publicly-funded investment" would achieve this technological goal faster or better than via capitalistic means. Do you have any?
As someone else pointed out, it's not about faster but about safer. Besides, the whole technology stack these companies are building on top of was arrived at by this kind of public research to begin with:
>3. Lives will be lost regardless of how the tech is developed. Surely this is obvious. So what are you even talking about?
That private companies not be the nearly sole arbiters of what lives are worth sacrificing.
It's an inherent characteristic of capitalism that the primary driver is profit. I mean, it's the whole point of the system. What is the value of one human life within this system? Historically that value has not been high.
Now, when people say "publicly-funded investment," there's a popular conception that this means a vast amount of government waste - because everything the government does is wasteful, right? But how is three or four companies spending huge amounts of money to pursue the exact same goal not wasteful?
Look at things like the internet itself. Public funding of technology can work.
* capitalistic automatic cars mean 2000 deaths per year, reducing linearly to zero after 10 years
* socialist automatic cars eventually mean 1200 deaths per year, reducing to zero after 30 years
* manual cars mean 1000 deaths per year, not reducing
Then after 11 years total deaths will be the same in all 3 cases (11,000), but after 30 years it will be 11k capitalist, 18k socialist, and 30k manual - and still growing.
However, that's of no comfort to the families of those who died at the start, especially the ones who didn't opt in (as was typical of flying when it was more dangerous).
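One way to make that arithmetic concrete: a linear-decline schedule (an assumption - the comment doesn't specify one) where annual deaths fall by a fixed step each year reproduces the 11-year totals exactly, and puts the 30-year "socialist" total at 18,600, close to the 18k quoted:

```python
def cumulative(start, years_to_zero, horizon):
    """Total deaths over `horizon` years when annual deaths fall linearly
    by start/years_to_zero each year (years_to_zero=0 means no decline).
    This schedule is an assumption; it matches the figures quoted above."""
    step = start / years_to_zero if years_to_zero else 0
    return sum(max(0.0, start - step * (y - 1)) for y in range(1, horizon + 1))

# capitalist: 2000/yr falling to 0 over 10 years
# socialist:  1200/yr falling to 0 over 30 years
# manual:     1000/yr, never declining
print(cumulative(2000, 10, 11))   # 11000.0
print(cumulative(1200, 30, 11))   # 11000.0
print(cumulative(1000, 0, 11))    # 11000.0
print(cumulative(2000, 10, 30),   # 11000.0
      cumulative(1200, 30, 30),   # 18600.0
      cumulative(1000, 0, 30))    # 30000.0
```

All three scenarios really do cross at 11,000 deaths after 11 years under this schedule, and diverge sharply afterwards.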
> I don't think anyone is suggesting that publicly-funded investment would make this happen faster. Just safer.
The point is that while it could be safer per year, being slower means that it's not safer.
Putting those billions in removing the drunk or tired drivers from the roads, enforcing the laws and rules would save more people.
If you watch videos of autopilot preventing accidents on YouTube, it involves two stages: first the car detects a situation is likely to lead to a collision, then it brakes and steers the car to avoid it. There is no reason for autopilot to be controlling the vehicle before anticipating a collision.
By contrast the deaths involving autopilot have all revolved around autopilot failing to perceive some object, or swerving the car into it. I don't know of any deaths that were caused by Tesla's accident avoidance feature.
Accidents from people who shouldn't have been driving in the first place. Part of the benefit of self-driving cars is not needing a qualified human, so the blind, elderly, children, drunk, etc can [still be] drive[n around].
Especially since no self-driving car is better than a shitty driver in 2019.
Many accidents in stop-and-go traffic jams, people falling asleep due to being tired or bored (yes, that is a real issue) and veering off course, pedestrians stepping in front of the vehicle...
Other Volvos have tools for detecting drivers losing alertness. Technology for helping to keep cars in their lanes already exists.
Technology for stopping cars when pedestrians and bicycles appear in front and/or to the side when making a turn at an intersection already exists.
Most of these systems need to get much, much better, but I think it's very reasonable to separate them from the general capability of driving a car autonomously from point A to point B.
If we can develop a car to drive autonomously from point A to point B, and if we are granting that this system can also solve the problems you list above, I believe we can also develop most of the safety features that do not involve driving the car autonomously from A to B.
I think this is more than a minor point to quibble over, as there are many extremely difficult problems to solve for fully autonomous driving that are unrelated to this kind of safety.
For example, autonomous driving involves recognizing when the map is wrong. Despite driving maps being a so-called mature technology, a variety of map programs tell me to turn left on certain streets during prohibited times.
There is a road that was closed at one end more than a decade ago. When summoning a ride-share to an address on that street, we always have to text the driver not to follow the map directions, because the map tries to send the car down the closed road, and today's ride-share drivers are terrible navigators and literally get lost.
Solving autonomous driving solves problems like those AND safety problems, true. But if we just want to solve the safety problems, we can, and we can solve them faster and sooner by focusing on them.
Particularly when driving on AutoPilot, I notice the drivers around me tend to be highly distracted. Not only that, but the bottom quintile of drivers are actually quite horrible at driving in practice.
Functional automation brings stability and predictability to driving patterns which will eliminate more accidents than purely emergency response systems do. The human element is responsible for $877 billion in damages, pain, suffering, and lost wages per year according to the NHTSA. I believe it's a moral imperative to massively reduce that number.
Emergency response technology works in a small percentage of cases to reduce the severity of an impact. It often does not avoid an impact at all, merely reducing its severity, and the systems on the market are particularly bad at avoiding pedestrians. While all that is true, these systems are also very good at annoying drivers with false positives, to the point where a plurality of drivers want to switch them off.
Aside from removing distracted and poor human operators from the equation, another reason that self-driving can obtain a lower accident rate than emergency response systems is that the self-driving system (1) has awareness of the environment, (2) keeps the car within a safety envelope at all times, (3) is in full control of inputs, and (4) has a set destination point in mind, so it knows where it is trying to go.
The emergency system may have (1) awareness of the environment, but is lacking (2) - (4). This puts it at a strict disadvantage to full self-driving.
For example, just take the simple situation of approaching a stopped car 500 ft ahead at 25 mph. The road is currently one lane but expanding to two, and the stopped car is turning left in the left-turn lane. When does the AEB apply the brakes? The answer must always be "at the last possible second," because the system doesn't know where the driver is headed and is not supposed to intervene during normal driving.
An autopilot knows if it is planning to stay right and proceed through the light, or if it's going to come to a stop behind the lead car in order to turn left. It can set an appropriate speed at all times based not only on the environment, but on the desired path ahead. It is planning and strategizing, instead of merely reacting (and not at the last possible second either).
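A toy sketch of the difference in that scenario. All numbers and the simple constant-deceleration kinematics here are my own illustrative assumptions, not any vendor's actual parameters or logic:

```python
# Toy comparison of a reactive AEB vs a path-aware planner approaching a
# stopped car 500 ft (~152 m) ahead at 25 mph (~11.2 m/s).
# All figures are illustrative assumptions, not real vendor parameters.

SPEED = 11.2          # m/s, ~25 mph
GAP = 152.0           # m to the stopped car
MAX_DECEL = 7.0       # m/s^2, hard emergency braking
COMFORT_DECEL = 1.5   # m/s^2, gentle planned braking

def brake_distance(v, a):
    """Distance needed to stop from speed v at constant deceleration a."""
    return v * v / (2 * a)

# AEB must not intervene during normal driving, so it waits until the
# last possible moment, then brakes as hard as it can.
aeb_margin = brake_distance(SPEED, MAX_DECEL)

# A planner that already knows it intends to stop behind the lead car
# can begin a gentle deceleration much earlier.
planner_margin = brake_distance(SPEED, COMFORT_DECEL)

print(f"AEB: hard stop starting {aeb_margin:.1f} m before the car")
print(f"Planner: gentle stop starting {planner_margin:.1f} m before the car")
```

The planner starts slowing several car lengths earlier purely because it knows the intended path; the physics are identical.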
A functional AutoPilot comes to a full stop at a stop sign, every time. It always signals before merging. It knows how to zipper merge. It doesn't get on its cell phone on the highway, or fall asleep at the wheel. Personally, I think FSD should be able to prevent an order of magnitude more accidents than a purely emergency response system.
In practice, of course, we do both.
Say the power cuts in a car, or worse, its location updates no longer match the surrounding cars' sensors: the cars in its mesh could immediately alert all the cars around that it is no longer updating its location and might be anywhere, so all cars in the area should go into a low-speed safe mode. It should be doable to some degree.
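A minimal sketch of that idea, with the staleness timeout and message handling entirely invented for illustration (no real V2V protocol works this way out of the box):

```python
import time

# Sketch: if a car's location updates go stale, every car in its mesh
# drops into a low-speed safe mode. The class, timeout, and "mesh" model
# are hypothetical placeholders for illustration only.

STALE_AFTER = 0.5  # seconds without an update before a car counts as lost

class Car:
    def __init__(self, car_id):
        self.car_id = car_id
        self.last_update = time.monotonic()
        self.safe_mode = False

    def heartbeat(self):
        """Record a fresh location update."""
        self.last_update = time.monotonic()

    def is_stale(self, now):
        return now - self.last_update > STALE_AFTER

def check_mesh(cars):
    """If any car in the mesh has gone silent, all nearby cars slow down."""
    now = time.monotonic()
    if any(car.is_stale(now) for car in cars):
        for car in cars:
            car.safe_mode = True

cars = [Car("a"), Car("b"), Car("c")]
cars[0].last_update -= 1.0   # simulate car "a" going silent
check_mesh(cars)
print([c.safe_mode for c in cars])  # every car enters safe mode
```

The hard parts this glosses over are exactly the ones the comment hedges on: radio reliability, spoofing, and deciding how large "the area" is.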
At 60 mph, the stopping distance including reaction time is typically quoted as around 80 meters, and that includes 20 meters of reaction time. In modern cars like Teslas it's more like 40 meters, but 20 meters of that 80 (or 40) is human reaction time.
If you can cut that down thanks to much faster computer reaction times, you can get a lot more road throughput. A logical step after that is to consider networking the computer control, but talking about that from the get-go seems to me to be needlessly jumping several steps ahead.
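The back-of-the-envelope arithmetic behind those figures, using the comment's round numbers plus assumed reaction times (the 0.75 s human and 0.05 s computer figures are my own illustrative assumptions):

```python
# At 60 mph (~26.8 m/s), a ~0.75 s human reaction time alone covers
# ~20 m before the brakes even engage. The reaction-time figures below
# are assumptions for illustration; the 20 m braking distance matches
# the "modern car, ~40 m total" claim above.

speed = 26.8              # m/s, ~60 mph
braking_dist = 20.0       # m of actual braking, modern car
human_reaction = 0.75     # s (assumed)
computer_reaction = 0.05  # s (assumed)

human_reaction_dist = speed * human_reaction        # ~20 m
computer_reaction_dist = speed * computer_reaction  # ~1.3 m

print(f"human total:    {human_reaction_dist + braking_dist:.1f} m")
print(f"computer total: {computer_reaction_dist + braking_dist:.1f} m")
```

Cutting the reaction component from roughly 20 m to roughly 1 m nearly halves the modern-car total, which is where the throughput gain comes from.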
In fact, I'd probably go the other way and advocate for requiring these systems on cars, similar to seatbelts. At the very least collision avoidance and blind-spot detection. Some manufacturers still make them options or, worse, reserve them for the highest premium trims, which is kinda bullshit given the technology really isn't that expensive.
That means we have a tremendous incentive to make investments which make the roads safer, and lower the crash rate. The single biggest technological advancement to make this possible (without eliminating cars) will be autonomous driving.
Today the lowest-hanging fruit for self-driving is algorithmic work that allows the car to navigate safely based on its own sensor suite, mostly ignoring map data that could be unreliable. Reliance on highly accurate maps and roadway infrastructure is not really a pressing concern, because the cost/benefit of investing in improving the internal algorithm is much higher than the cost/benefit of adding roadway infrastructure for such a small percentage of vehicles to leverage.
I believe there will be a cross-over point where safety for a significant number of vehicles can be meaningfully improved by making major changes to road infrastructure to support self-driving. When that day comes, I hope that the USG will be ready to invest on the order of $1 trillion to deploy that infrastructure and, ultimately, mandate that all new vehicles on public roads are self-driving.
I've said it in many comments before - it is a moral imperative to get computers and not humans "behind the wheel". We can afford to spend trillions of dollars to get there, but that's not to say we get there faster by spending trillions of dollars today.
 - https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/2015sae-blin...
The whole process of implementation of autopilot is based on the logic that its implementation is scaled as a function of the number of lives it SAVES through its utility. Meaning that as the percent automation increases, you should be able to see a net DECREASE in casualties in automated vehicles compared to non-automated vehicles. If we are not seeing this, then the rate of implementation may be too high, and the software needs to be corrected until it is.
Nothing in life is perfect, there will always be accidents, the only question is can we create tools which will decrease the percent chance of them happening.
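The rollout criterion described above amounts to a simple rate comparison. The figures below are invented placeholders to show the check, not real statistics:

```python
# The rollout criterion: casualties per mile with automation enabled
# must be below casualties per mile without it, otherwise the pace of
# implementation is too high. All numbers here are made up.

def casualty_rate(casualties, miles):
    """Casualties per vehicle-mile."""
    return casualties / miles

automated_rate = casualty_rate(casualties=2, miles=10_000_000)
manual_rate = casualty_rate(casualties=12, miles=30_000_000)

if automated_rate < manual_rate:
    print("net decrease: the rollout pace may be justified")
else:
    print("no net decrease: slow the rollout and fix the software")
```

The hard part in practice is not the division but getting comparable denominators, since automation miles skew toward easy highway driving.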
I am not aware of any proof that neural networks combined with current tech is enough.
I know there are some misleading statistics around, but those are biased and not a proof.
Tesla will just raise more capital. It's unlikely that they will go bankrupt anytime soon.
Unfortunately, they're nowhere near the misfortune of Faraday Future (which just announced bankruptcy today).
Then you have 12 companies testing with real people and not sharing data, thereby killing about 12 times more people than needed.
I once had the opportunity to interview a renowned cancer doctor and researcher at MD Anderson hospital. He told me that we would never again see the kinds of advances we saw in the early days (70’s and 80’s) of cancer research because those days were the Wild West. He basically said they did many reckless things, but that’s how they stumbled upon treatments such as the protocol for using steroids with chemo for acute lymphoblastic leukemia.
Ok, so high-risk experimentation led to exaptation and innovation. I'm not sure I want to see that happen with self-driving cars, but the other part of me knows that's how complex systems “evolve”.
Good thing we already have decades (even centuries) of research ethics we’ve developed to deal with situations like this. Too bad Tesla is ignoring them. The proposal in the article to treat these like (potential) medical breakthroughs is spot on.
I think we all agree that autonomous cars will eventually save lives, but that was true 50 years ago too. The question is whether these particular autonomous cars will save lives. We’ve developed the ethics. We’ve developed the standards of evidence. It seems like a no-brainer to require them here.
What I don't like is that I cannot opt out of this experiment. I believe many of these deaths are needless and due to cost savings and sloppiness.
Tesla to me seems less like a center of excellence and more like a group of people who can put up with Elon. Presumably made easier by telling him what he wants to hear.
While the average driver may be bad, the average hour driven is by good drivers.
Plus I don't like the idea of a Tesla car crashing into me. Seems like something that could be weaponized.
However, it would not have a winner-take-all, Wall Street frothiness, pump-and-dump aspect to it.
You put RFID tags in the asphalt and on the curbs, and some kind of beacon on the telephone poles, or a 'wire in the road' (like a smarter version of the working system that GM built in the 70s); then, because it is publicly owned infrastructure, everyone can use it.
That goes against the current hypocritical faux-libertarianism espoused by some of our tech elites, but it would in fact work.
I said a year, because that is how long it would take to install it.
Edit to add: for fun, you can go on Mouser or Digikey and determine how to put the system together with off the shelf parts. Just add weather proofing...
Because there is skin in the game, people drive carefully -- drivers who are consistently reckless are removed from the system. Pedestrians and other non-drivers have adapted their behavior as well to reduce the risks.
Autonomous drivers have the problem that they do not carry skin in the game -- they are not removed from the system for bad behavior. In the short term that just gives us an ethical quandary about who is responsible for deaths caused by autonomous systems.
In the long term it presents much higher risks as non-autonomous-drivers (both drivers who are not autonomous and humans who are not driving) will adapt their behavior to their mental model of the behavior of autonomous vehicles. Pedestrians will adapt their behavior correspondingly, creating even more dangerous scenarios, because interaction with a sufficiently complex system is inherently unpredictable. We rely on the fact that we can build an approximate mental model of human behavior as it pertains to other drivers to predict what their behavior will be, and we constrain that behavior via rules and legal deterrents so that they are more predictable.
When we are 50% autonomous, what happens to someone who deliberately tries to cross a busy highway? Other systems, like trains, we have warnings for -- "implacable vehicle coming that will destroy anything left in its path" -- but for autonomous vehicles? The priority on keeping humans alive will cause situations like this to require building stricter behavioral systems to try to limit people's behavior, but the complexity of the underlying behavior is real. That is, most of the time, crossing an autonomous freeway is actually a safe operation because of the way the cars are designed, so there's no feedback that will cause pedestrians to modulate their behavior. Similarly, the driving systems themselves are designed with some elements of human behavior implicitly included -- as these behaviors drift due to the coupling of complex systems, it's not clear how we can directly adapt the behavior of the autonomous drivers.
Artificial Intelligence is a tool. Just like anyone who uses a hammer, backhoe or dump truck is responsible for what happens when they are using it, the same is true for AI.
As a tool, all AI systems should be designed to always do the most harm to the user of the AI first. It should be embedded into every autopilot-like system, and users should be aware of that choice. It's the only moral and ethically correct solution.
Let's say your AI-controlled car is driving at speed around a corner when a little girl suddenly appears in the road in front of it. It can either swerve into a wall or run down the girl. A human might not be able to make the decision in time, but an AI system could. It needs to be programmed to always hit the wall.
Though this might seem extreme, the opposite is completely immoral. If it was a human driving, one could assume bad luck to be put in that situation: aka innocent until proven guilty. But for an AI system, we have to assume the opposite: The AI should always be presumed to be faulty. Therefore the user of that AI is culpable of putting it in that situation.
And just like any other tool, the manufacturer of that tool is legally responsible for its quality and reliability.
The opposite of the above means we'll all be riding around in autonomous tanks, aggressively maneuvering around each other at higher and higher speeds, pedestrians and others be damned.
There is an entirely new set of failure modes that we are not familiar with, and particularly as a pedestrian, it worries me greatly.
But the really troubling thing is the attitude and approach: this is not an "experimental software rollout" that somehow Tesla seems to advocate. You cannot use live human beings in real streets with very little safety net to "roll out" your self driving torpedoes. That just doesn't make sense and seems very irresponsible.
I would rather not have silicon valley mentality when it comes to cars or healthcare or even banking. Real people's lives are at stake.
Will you cherry pick the geography to make your argument?
Tesla's autopilot isn't even self-driving. It's fancy cruise control.
Frankly, I do not believe that you are telling the truth. If you can provide a test case demonstrating this, however, it would be international news.
Tesla’s Autopilot can be manually disengaged in several different ways. If none of those worked for you, then that is indeed worth investigation.
I personally doubt your claim, though.
Bunk. You can take over at any time by turning the wheel, pushing the brake, or pushing the accelerator.
Source: I take over from autopilot all the time.
Source: I... was there? :D
At this point, you're not saving face by arguing with fact.
Can you please expand on that?
In my experience with the most advanced available Tesla driving features over the past two years, touching the brake pedal immediately disengages all of that software, and putting more than a pound or two of turning force on the steering wheel immediately disengages autopilot (steering control), but leaves adaptive cruise control on.
You can also use the same stalk you used to engage autopilot to disengage it.
'could not instruct it to stop' is a very bold statement.
Citation needed. Breakdown by different countries would be also interesting.
" reading books, napping, strumming a ukulele, or having sex."
Perhaps it was like that years ago, long before the Model 3, but it isn't like that now. My 2015 Model S would complain long before I could complete a sex act. Or perhaps Bloomberg's hack suffers from premature ejaculation.
Distracted driving is a leading cause of crashes today, and particularly if you use AutoPilot, you really see the ridiculous numbers of distracted drivers all around you. Many drivers choose to use their phones instead of looking at the road, whether they have AutoPilot or not. It's dangerous in both cases, but less so if you have AutoPilot on. In either case, in an L2 system, if the car crashes while the driver is on their cell phone, it’s the driver’s fault regardless of whether AutoPilot was enabled or not.
Therefore: more auto-pilot, less human death.
But consider this: you are a decent driver and you need to send your children somewhere. Do you drive them yourself, because you know that you will not speed or text or be drunk, or do you send them with a robot that is better than an idiot but worse than you?
Sure, if you were drunk or tired, it would be safer to send them with the robot.
The flaw is everyone thinks they will be a better driver than an AI, even though very few actually will be.
So if I had to choose my children driving with an "average joe" / friend / etc vs an AI, I would say: the data says the AI will crash 25% less, therefore it is safer.
1. Replacing all drivers with an AI is x% safer.
2. Replacing you, an experienced and responsible driver, with an AI is less safe.
Does it make sense ?