The complicated ethics of Tesla’s Autopilot (bloomberg.com)
102 points by NN88 29 days ago | 220 comments



In most complex systems the reality is people will encounter harm. The commercial airline industry is a good example of progress through crashing.

In 2017 there were zero reported accidental deaths [1]. Compare that to 1972, one of the most dangerous years to fly, when more than 2,300 accidental deaths occurred on commercial flights.

How did we go from thousands of deaths to zero? Primarily through crashing. Then working to evaluate what went wrong and improve the design, processes, and programs around commercial aviation.

As one pilot explains [2]:

"One of the ways we’ve become so safe was to realize that our efforts were never going to be good enough."

So yes, I think technologies like autopilot will save lives. But it won't be able to do it magically, unfortunately some harm will occur because we—in all our human wisdom—just can never accurately predict all possibilities. It's the same problem most self-driving companies are facing today, and the reason promises of self-driving being here today have fallen short. The world is far, far more complex than we realize... especially when we put a computer out into it and tell it to "go forth."

https://www.reuters.com/article/us-aviation-safety/2017-safe...

https://www.usatoday.com/story/travel/columnist/cox/2018/01/...


If Tesla's AutoPilot were regulated like the airline industry, with post-crash reviews, mandatory improvements, and open sharing of data, we would likely see similar progress. At the current time Tesla's system is proprietary/secret and their oversight is lax.

If we look at the history of airline safety, going back further, we can see that it took many years and numerous crashes to get to the point where it became well regulated and crashes were independently investigated. Before government stepped in, private companies weren't doing a great job and their biases (inc. bad headlines/liability/cost) got in the way of improving passenger safety.

So let's absolutely keep with the airline analogy and regulate Tesla's AutoPilot like the airline industry.


The NTSB does actually investigate their stuff. And then it does issue recommendations.


I see a cost issue: it's easy to justify a full investigation every time a commercial plane crashes, but how do you do a full investigation every time somebody crashes a car or is crashed into? How do you get the improvement loop that the airline industry had?


I thought the topic was AutoPilot crashes, not all car crashes. Does AutoPilot crash often enough for full investigations to be cost prohibitive?


At least plane crashes are usually mechanical rather than software, or the software is straightforward (hello MCAS). Investigating a machine learning system's behavior ("what input led it to produce this output?") sounds like a more expensive problem.


Until you look you don't know if it was hardware, software, road conditions, or the user; unless you look it's just a pile of twisted metal.


Not currently. If we get to mixed 50/50 traffic...possibly. (Saying "but won't happen, because magic future technology" would be literal handwaving)


If we required full investigations now, while the numbers are small enough for it to be feasible, then perhaps by the time we get to 50/50 traffic, the technology will be that much better.


We already investigate nearly every car crash, it's just that we usually find one of the drivers at fault instead of one of the car manufacturers. The resources are there, currently provided mostly through police and insurance agencies.


Hopefully it takes less than the ~50 years the airline industry needed to get to that point. Seems that this would need to become a popular opinion before becoming reality.


Regulation saves lives. We don't even have to look at the airline industry. Seat belts saved lives. Tailpipe emission controls saved lives. Tesla's Enhanced Autopilot and other Level 2 (partial autonomy) solutions will save lives.


Not always - it has to be appropriate - there are cases where regulation costs lives by locking in suboptimal solutions because the costs to re-certify can be too high.

We need to have the appropriate balance.


> How did we go from thousands of deaths to zero? Primarily through crashing.

No, the reduction in deaths was through not crashing.

The airline industry would not have got safer faster by deliberately doing more crashing. "Okay, we need to increase safety, so let's crash as much as we can!" is something they didn't say, because that wouldn't have made any sense.

What you've done here, and the reason you've been upvoted to the top spot, is you've applied the Silicon Valley mantra of "failure is good" to airlines, and by extension, to autonomous cars. That mantra isn't really even true for software - it's not that failures are good, it's that learning is good - but it just about works in a world where people can turn free VC dollars into Medium posts and acqui-hires by failing, and hence where you might as well learn that way, rather than proceeding slowly with painstaking care; this approach doesn't really work in worlds where failures are underlined in blood.


I’d really like to see the details of what “proceeding slowly with painstaking care” looks like.

Airplanes never had airbags, crumple zones, etc. The odds of surviving a car crash are much higher than the odds of surviving a plane crash. It seems therefore that testing in cars can be done with lower risk of loss of life. Yes, there will be fender benders and some deaths.


NASA manned space flight.


> NASA manned space flight.

NASA is horrible at keeping astronauts alive. Worse than the Soviets/Russians, as far as I know.

Apollo 1 and the two Shuttle disasters do not paint a good picture for NASA's safety in practice.


NASA is great at keeping astronauts alive. It has no launch vehicles.


Except the airline industry got there because of regulation and federal agencies investigating every single death (or even incidents: near-crashes without any deaths!). Right now, there isn't any equivalent in the car world.


I would like to introduce you to the National Highway Traffic Safety Administration (NHTSA) at https://www.nhtsa.gov/.

Here's an example of one of their famous investigative reports on a Tesla Model S crash while operating under AutoPilot back in 2016:

https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF


The airline industry also « gives up » under a lot of conditions.

Scenarios where a plane would refuse to take off or would divert to another airport... are conditions where employers would still expect their employees to get to work, or their deliveries to arrive.


I'm not interested in having a self-driving car until I don't have to pay attention to how it's driving. If I have to be alert and paying attention the entire time, then it doesn't provide a benefit.

That said, I'm totally fine with a self-driving car that gives up when it can't handle something. As long as it can safely remove itself from traffic and come to a stop (e.g. pull over to the side of the road), then alert me that I need to take over, that's fine.


To be clear, the impossibility of perfection is no excuse for reckless behavior. For example, if your system cannot reliably detect stationary obstacles blocking its path, and you are relying on the driver to save the situation, you should be using all the available technology to ensure that the driver is, at all times, alert and paying attention to the road.


You have to appreciate that a system which increases your overall safety, but comes with so much overbearing annoyance that users refuse to use it, is itself useless.

If NHTSA mandates driver attention management systems in all cars then this calculus changes. If the driver attention management system is only required while AutoPilot is on, then the requirement is actually counter-productive to overall safety, i.e. it results in increased deaths.


This reply ignores the very significant fact that Tesla's autopilot, and any other with similar capabilities, both enables and invites the driver to not pay attention to the extent that Tesla itself insists is necessary for the safe operation of the vehicle. If actually conforming to what Tesla says is necessary is too much of an annoyance, it is not ready to be on the road.


Very interesting to not have chosen 2019, a.k.a. The Year Of 737 Max. #cherrypicking


I guess this is how people justify killing for peace. Tesla stock holders must be celebrating that their cars get to run a few people over in the name of "progress". (Well ok, they get to celebrate because their failed pet project gets to have another go - potato/potato)


> I guess this is how people justify killing for peace.

Extreme but (hopefully) relevant counter-example: was the killing of tens of thousands of people (soldiers and civilians) by American citizens in Europe during World War II justified? This is a very complex question, and any reasonable answer will be even more complex and nuanced, so I'm not really asking anyone to directly answer it in this venue.

To be clear, I'm not talking about Tesla here, only the first sentence in your response, in isolation.

I think there are some circumstances where accepting some deaths as the price of far fewer deaths overall is morally and ethically sound and defensible.

If we can agree to that, then we can move on and analyze what Tesla is trying to do.


Of course there are, and you're absolutely correct in your example - fighting against war is a noble cause.

But this is not a war in any way - there are already options in place to save lives: better policing, stricter speed limits, safer vehicles/roads/crossings. Allowing new technology on the road without the proper safeguards (and, in my opinion, the abhorrent justification that a few unlucky people might just get run over for the greater good) is not one of these options.

The "dangerous self driving cars on the road" option is only helping the pockets of Tesla's investors, not the public.


Are we in a war with Machines now? Did Tesla state that it needs more Lebensraum, which, regrettably, requires purging the globe of everyone else?

I'm not trying to invoke Godwin here: just that war happens be somewhat hard to compare - "people die, ergo same thing" seems a rather forced comparison.

Perhaps the only fitting parallel seems "history will be written by the victors."


Also note how well all the other USian Wars On A Concept turned out...


If you find something that's pure good, let me know. There's always a price to pay with any activity. The question is just whether the positive outweighs the negative. If you can't see both sides then you're probably polarized and biased toward one approach.

I think everyone wants better transportation and fewer deaths. But 0 deaths with an emerging driving technology sounds really hard. There are just so many edge cases and physics is pretty unforgiving on our bodies past 40mph.


My general expectation is that Autonomous vehicles will at some point become safer than human drivers. However, even when that happens, when they make errors they will be errors that seem stupid and incomprehensible to humans. This will be because the failure mode of an ML driving model will be utterly different to the failure mode of human beings. This will lead to a lot of fear and uncertainty, humans want to comprehend what might kill them, and autonomous driving, while safer will kill them in bizarre ways.


Communication plays a substantial role in traffic situations. If self-driven cars could signal whether they “see” you or not, it would be safer for pedestrians to cooperate. E.g. when you take a crosswalk, it could shine green for braking before you, or red/yellow for accelerating without seeing any obstacles. It would also be good, if a green laser projected its suggested path with a bold stop line right on the road (or fading red dot pattern if it doesn’t).


This is a good point- I'm thinking about how many times pedestrians will wait for eye contact with a driver before crossing. And also the ones that see you, but cross anyway, assuming you'll stop.

There's a lot of subtle interaction that I have yet to see how ai driving cars will replicate, or at least account for.


Pedestrians have the right of way, though; they don't need to cooperate. A self-driving car shouldn't be allowed to run someone down just because they were jaywalking and didn't correlate the red light speeding towards them with an autonomous death machine. We generally don't let human drivers get away with mowing down pedestrians or hit-and-runs if they get caught, so why would it be an acceptable failure mode for a machine? Who is responsible for that failure and that death, in the eyes of the court? If the answer is 'nobody', then self-driving cars will have a lot of difficulty finding acceptance.

Also, many states have regulations surrounding which colors of lights can go where on a car for these sorts of reasons - unambiguous and predictable communication is key to driving, and putting what looks like a brake light on the front of your car is confusing.


> Pedestrians have the right of way, though; they don't need to cooperate.

I'd change "don't need to" with "shouldn't have to".

There are a lot of dead pedestrians who didn't look before crossing a street because they knew they had the right of way.

It's just another form of "trust but verify". I know when I'm crossing a busy intersection, even if the cross-traffic has a red light and I have a walk sign, I'm closely watching the on-coming cars to make sure none of them are running the red light. I'm extra vigilant for the people making a right turn on red.


I have the same assumption when a dumb human is tailgating and rear-ends the car in front of them. That has been solved with frontal radar and emergency braking.


“been solved”? What percentage of the overall fleet is equipped for automated frontal collision avoidance? Under 5%? Under 2%?


Every Model 3 has forward collision avoidance that has a better reaction time than I do. I know because it's saved my life already when I narrowly avoided a suddenly stopped car in front of me. Tesla's Autopilot (not full self driving) has its issues, mainly with vague lane markings, but collision avoidance isn't one of them. I know plenty of drivers who fall victim and cause accidents because of the same issues that Autopilot has: poor lane markings, not seeing cars that are immobile on the side of the highway.


There are over 1000 times more "not Model 3"s in the US than Model 3s. That was my only point about "this has been solved". It's not a knock against the Model 3.


I'm waiting for the first race related incident to blow this topic up. A Tesla swerving around a white pedestrian but at the expense of hitting a black pedestrian will put an end to any justification of "costs one life to save others."

Edit: I'm not saying ai/ml will be racist (although it hasn't shown a great track record so far). I'm saying public sentiment, given the current status of media alarmism, is going to jump at the first hint of any sort of detection bias, whether it actually exists or not.


You'll be waiting a long time. The systems simply do not operate at this level of awareness or reasoning, and likely never will.


I don't think GP is suggesting a conscious or intentional bias would be present.

Imagine that a car swerves around a pedestrian A who happens to be in light blue jeans and a white jacket and strikes an undetected pedestrian B who happens to be in black pants and a black jacket. It seems easy to imagine how that could happen to any driver, right?

Now, imagine that it's an autonomous car, pedestrian A happens to be white, and pedestrian B happens to be black. Boom, there's your scandalous story: an autonomous car just swerved to avoid a white pedestrian and struck a black pedestrian.


This is exactly what I meant, thank you.

But there is a bit more to it. The A-or-B side-by-side is the simplest comparison to make the point.

Some of the same dangers that are in play now might still be risks as AI cars become more common. For instance, minority neighborhoods tend to not have as many street lights at night. The cars might be smart enough to compensate. But if they aren't, AI has the potential of falling victim to statistical correlation.

Headline by click bait reporter: "Studies show self-driving cars kill minorities at higher than average rates"

It doesn't matter if the real cause was street lights.


They don't have to. All you need is a video of a swerving car hitting one person and missing the other. The "why" is irrelevant.

The implication will suffice, in light of the public/media's understanding of ml / ai. They will think the car "chose" which person to hit.


I’m very skeptical of this authoritarian market logic whereby we are told we must sacrifice others lives for the sake of “progress,” which is almost exclusively tied up with one person or business’ individual profit.

While I don’t think autonomous cars are a particularly good investment for the future of transportation, does anyone doubt that with massive, publicly-funded investment we could not achieve them? And what’s more on terms where whatever “costs” were associated with them were subject to democratic checks and debate?


Humans are bad drivers, and countless people die in car crashes every year as a result.

The question shouldn’t be whether autonomous cars will kill people, but whether they’ll kill more people than human drivers. If they'll kill fewer people, then on net they're saving lives, not "sacrificing" them.


If your wife or child got mowed down by an autonomous car would you be thinking "Oh well, that's progress!"

Is it ethical to build something knowing it will definitely kill random people until you perfect the technology? Will auto makers be held accountable for any wrongful deaths caused by bugs?


A fascinating display of the human mind.

If your wife or child was mowed down by a normal car, would you be more supportive of autonomous ones?

If your wife or child wasn’t mowed down because the car was autonomous rather than manual (or vice-versa), would you even notice?

Yet, which of human or AI drivers get banned by 2040 depends on convincing people like you — assuming I am inferring correctly that your question is rhetorical and the answer is “no, obviously”.


There is no good reason to think pre-programmed computers are going to outperform human drivers when it counts. It's a prediction about the future, something that has not happened yet. It's most likely wrong. No amount of marketing hype will change that.


I can think of lots of theoretical reasons computers might outperform human drivers. A computer won't get distracted, will have nearly instantaneous reaction times, and can literally have eyes in the back of its head.

I don't know if that will be enough to make the difference, but I can certainly see how it could be.


> "might"

You're still making a prediction about the future. Some predictions are safer than others: I predict I will see the sun rise tomorrow. Some predictions have little rational basis in reality: I predict you will agree with me. Your prediction about autonomous cars being safer than human drivers? That falls somewhere in the middle.

From my perspective, all computer vision systems are abysmal crap and spend orders of magnitude more power to accomplish results many orders of magnitude less impressive than the human brain is capable of. The human brain uses something like 10-20 watts, how much CV/ML can you do with 10-20 watts? Not much at all. There is clearly a huge disconnect between our current approach to solving the problem technologically and how the evolutionary wetware solves the problem. That gap may be bridged sometime in the future, but I'm not holding my breath. I think the problem is being approached from the wrong direction, like trying to drive a screw with a hammer. Maybe we'll still get the job done, but I do not consider that a foregone conclusion.


Let's let all the companies in this space finish their work. Once they do, we can look at the data and decide what should be allowed on the roads.

I just hope that, if the time comes, the decision is based on comparative improvement, instead of hysterics about whether self driving cars will ever kill anyone ever.


By all means, they should be free to spend their money on whatever speculative research they want if they do it safely (it's easy to test cars on closed circuits.)


What would you say is "finished", even?


* Counting watts is probably the weakest argument for whether the tech is doing something well or badly.

* A car can spare 100x that many watts.

* Why would we even want cars to calculate the way humans do? It's a very clearly flawed method.


The point is not whether we can afford to power the machine. The point is that the power discrepancy suggests that the methods we're using are radically different than the methods the human brain employs (evidenced by their dramatically different efficiencies). Predicting radical advancement given past performance is a leap of faith.

> Why would we even want cars to calculate the way humans do?

The fact of the matter is that humans are orders of magnitude better at object recognition than computers. You may believe that will change in the near future, but that's not yet been demonstrated.


Can the human brain see through fog? Can it observe in 360 degrees in real time?


Human sensory augmentation will save lives.


Would you describe a neural network as “pre-programmed”? What if it’s updated by every vehicle constantly comparing its own predictions with reality?

Don’t get me wrong, I am aware that current generation AI is fundamentally worse than a human brain, because if it was already as good as a human then the fact Tesla’s AI has done more miles than the average person has had heartbeats would make it essentially perfect. It isn’t perfect, so I know the AI is flawed.

But my question is, does it seem to you that this approach cannot ever become the equal of the best human (if that best human had 360 degree vision, sonar, and radar, and their nerve impulse conduction speed was way faster), and if not, why not?


I personally know of a recent fatal that likely wouldn't have happened even with the (garbage) ADAS in my current car. There are plenty of good reasons to think that pre-programmed computers can outperform human drivers in the cases that lead to many of our current fatal accidents. I take those as being the ones that really count.


> If your wife or child was mowed down by a normal car, would you be more supportive of autonomous ones?

Yeah, it's an interesting dichotomy. I've been to funerals for people mowed down by "normal" cars. I know the stats. I'm more supportive of autonomous cars.

I also know that if the situation were reversed and the people closest to me were affected by autonomous vehicle accidents I might have a strong emotional reaction in the other direction.

The human mind just isn't rational when it comes to these things. It will take real leadership and risk taking on the part of manufacturers to get past this impasse.


Is there a name for this type of argument? I've seen it a lot from certain types of people: one that ignores the logic of the discussion and sets a sort of trap that makes you appear insensitive if you try to argue against it. Is it appeal to pity?


The Good Place makes me think it's a version of the trolley problem.

IANATOMP.


Pretty much what I was trying to get at


What does TOMP stand for?


Teacher of Moral Philosophy.

Watch the show, it's great.


Probably not, but are emotional responses the correct criteria for rational decision-making?


Do you not put your hands on the wheel with moral confidence, the confidence that you are okay with the sum of your actions? Do you view the medical system differently?


aka anti-vaxxer logic, anecdotes > real world stats.

There are ~1.25 million car deaths per year (20-50 million injured or disabled), a lot of room for "progress" to save countless lives.


> There are ~1.25 million car deaths per year (20-50 million injured or disabled), a lot of room for "progress" to save countless lives.

The vast majority of those occur in areas that will not see autonomous vehicles for a very long time, and have a lot of environmental challenges that make it much harder for an autonomous vehicle to outperform a human.


You’re stringing together a few questionable statements here.

It’s true that the vast majority of cars won’t be autonomous for “a very long time” so the vast majority of deaths won’t be prevented. But what’s your point? That if the vast majority of the problem can’t be addressed quickly it’s not worth exploring?

On a quick search, 30% of fatalities involve alcohol. 7% involve rain (the most common condition that causes crashes).

Cars don’t actually need to be better than humans, or even as good as the average human. They just need to be better than the bottom 5th percentile, prefer safe driving, and not get distracted/drunk.


> It’s true that the vast majority of cars won’t be autonomous for “a very long time” so the vast majority of deaths won’t be prevented. But what’s your point? That if the vast majority of the problem can’t be addressed quickly it’s not worth exploring?

My point is that trotting out the 1.25M number when we are very far away from dealing with the vast majority of it is distracting at best. We should instead be talking about the numbers of injuries and fatalities on the countries where autonomous vehicles are most likely to be deployed, and how many of those people could have been saved.

> Cars don’t actually need to be better than humans, or even as good as the average human. They just need to be better than the bottom 5th percentile, prefer safe driving, and not get distracted/drunk.

That's very true, but orthogonal to the point of using realistic numbers.


That seems like arguing about metrics for the sake of it. If we change the definition to an x% reduction in the rate of accidents relative to the expected rate if a non-autonomous car were used otherwise... you would get a similar rate of impact, normalized to a smaller population, limited primarily by rapidly diminishing economic barriers that don't seem especially relevant to the argument at hand.


and? it's not worth doing unless every life can be saved? The point is thousands of people die per day at the hands of human drivers. The sooner we can transition to autonomous vehicles, the sooner they'll start saving lives.


> and? it's not worth doing unless every life can be saved?

I did not say or imply that.

> The point is thousands of people die per day at the hands of human drivers.

Yes, and thousands will die every day for quite a long time. You make it sound like autonomous vehicles are going to make a significant dent in that number.

> The sooner we can transition to autonomous vehicles, the sooner they'll start saving lives.

This is quite true. It will just be limited to lives in developed nations and a few select other major cities.


I feel like this entire 0.0005% vs 0.0007% chance discussion is somewhat off-track, because it doesn’t matter. More can be less.

Personally, and I tend to believe it’s not very different from average, I don’t want less chance to be hit. I want more control and awareness, which can reduce risk fundamentally. I know that accidents happen, but I also know that chances grow by orders of magnitude if I fail to detect dangers and/or make predictions. Malfunctioning traffic lights can do much more harm than your average drunk driver. Not seeing someone’s intent (or steering/braking problem) does that too. If autopilots provide me a way to reduce risks by clear means on my side, I would not mind [with a great share of egoism, but that’s how it works] if on average it could harm crazy leapers even more than human drivers do. Because the road is a source of danger (due to natural physical/perception failures) and by skipping safety checks they enter a potential chaos which is un-manageable by definition. Heck, I’d better coexist with a tech that never gets tired of sending me clear “fuck off” messages, than with human drivers who e.g. think they are smart enough to not brake, because “our current inertial frames render a collision unlikely anyway”, or whatever bs they believe when doing this. But it’s a false negative for me, idk for sure if they see me at all or estimate their reaction time correctly.


The question is the interim period, when autonomous cars are improving but still killing more people than human drivers. Still ok in that situation?


That interim period, if it existed at all, has likely already passed.

> By this last estimate, Autopilot has about the same to 35% lower fatal crash rates than any conventional vehicle at this time.

https://medium.com/@mc2maven/a-closer-inspection-of-teslas-a...


So, an autopilot-equipped Tesla is a fully competent self driving car?


Why should such a period exist? They can gather data with humans in the driver seat, as they have been.


>countless people

We've got very accurate counts. To say countless is hyperbole.

>If they'll kill fewer people

Everyone assumes ML / AI will be better drivers than humans, but I think that's just blind hope at this point.

I've seen no proof or studies that show any better logic than fusion power being about 20 years away (indefinitely).


> >countless people

> We've got very accurate counts.

Pointless rebuttal: the exact number isn't relevant, nor is it easily quoted, as the number of deaths per day, month, or year is fluid, which is evident in that you've made no attempt to quantify an "accurate count" yourself.

> Everyone assumes ML / AI will be better drivers than humans, but I think that's just blind hope at this point.

You call someone out for imprecision, then follow up with a baseless supposition that people who believe autonomous drivers are safer than human drivers are hopelessly blind.

You can include the NHTSA and the U.S. Department of Transportation in your blind hopefuls who are keen on moving towards an automated driving future:

https://www.nhtsa.gov/technology-innovation/automated-vehicl...


Calling something countless is hyperbolic exaggeration. I wasn't expecting an exact number. I also was not expecting hyperbole.

NHTSA is no different than the DOE stating their (thus far inaccurate) expectations for fusion. Government agencies can also express hope.

>The continuing evolution of automotive technology aims to deliver even greater safety benefits

They have a very well thought out timeline with goals and expectations. But there's no real evidence of why we should expect to hit level 5 driving in a given time frame.

It's still just a goal they hope (aim) to hit.

Unless there's a Moore's law of AI driving ability that I'm unaware of.


> Calling something countless is hyperbolic exaggeration. I wasn't expecting an exact number. I also was not expecting hyperbole.

I'm sorry if I caused any confusion. I meant "countless" simply as in "more than a person could reasonably count". The usage I'm familiar with appears to be different from yours.


> Calling something countless is hyperbolic exaggeration. I wasn't expecting an exact number. I also was not expecting hyperbole.

It's not hyperbolic to use "countless" when it's one of the top 10 causes of death.

> NHTSA is no different than the DOE stating...

You're really into your inaccurate strawmen


1) that is the very definition of hyperbole. Exaggeration for effect.

2) I'm not the one making the claim that a revolutionary technology that will save millions of lives is just around the corner.

Will progress be made? Probably. Will an army of driverless cars be safer than the same number of humans driving in X years? That's a bold statement that requires an explanation beyond "Because technology progresses."

I think the comparison to fusion is quite apt at this point. We can re-evaluate in 20-30 years.


> 1) that is the very definition of hyperbole. Exaggeration for effect.

Using the word "countless" to describe one of the leading causes of death is not close to a valid example of hyperbolic exaggeration.

> 2) I'm not the one making the claim that a revolutionary technology that will save millions of lives is just around the corner.

You're the only one making proclamations: criticizing others for not using exact numbers you won't quote yourself, and proclaiming that some of the smartest engineers in the world and motor-vehicle institutions must be blind hopefuls for investing their efforts in automated-driving solutions to save lives.

> I think the comparison to fusion is quite adept at this point. We can re-evaluate in 20-30 years.

Says a lot that you think a future technology of unproven feasibility is somehow an apt comparison to actual technology that's driven over 1B miles.

Tesla's AutoPilot results for the latest Q2 2019 show it had 1 accident per 3.27 million miles vs NHTSA's recent statistics showing 1 accident per 498k miles: https://www.tesla.com/VehicleSafetyReport

The state of automation is meaningful today; it's not blind hope to presume actively developed technology will improve even more with time. It would be naive to think otherwise and suggest we need another 20-30 years to evaluate FSDs for viability.


> But there's no real evidence of why we should expect to hit level 5 driving in a given time frame.

Not that level 5 is necessary. A good level 3 would make a huge difference. If it can self-park too then it's basically everything I want.


True, but I meant in more general terms- I'm not sure that we've got a good handle on when to expect any particular level (or achievement), other than the very short term.

It seems like even Tesla themselves have trouble forecasting more than +/- 6 months out.

Predicting technological progress is hard, both in terms of timing and absolute ability. And this is before you even get into predicting the market success of a shippable product.


> I’m very skeptical of this authoritarian market logic whereby we are told we must sacrifice others lives for the sake of “progress,”

Then you should be _very_ sceptical of the current state of affairs

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...

https://www.newscientist.com/article/2196238-does-air-pollut...


I am very skeptical of the place of the personal car in American life and the massive subsidies that automotive industry receives in terms of things like infrastructure and military defense of fossil fuels. But I also don't believe that autonomous cars are a good solution to these concerns relative to proven solutions like various forms of mass public transportation.


They may not be a perfect solution but people already waste hours of their lives stuck in traffic and clearly hate riding the bus. Look at the stats for declines in mass transportation ridership since Lyft/Uber took the US by storm. I love rail and subways, particularly high speed rail, but currently, for many reasons, it's expensive to build per mile in major cities. Affordable, mass-produced semi-autonomous vehicles will quickly improve lives. We're already on the cusp of it. I own two of them. Tesla and Comma.ai are the functional autonomous vehicle equivalent of the Apple iPhone and the Android T-Mobile G1. And there's no reason these sorts of industries won't be massively subsidized by a future (perhaps more liberal) American government, once we move on from subsidizing companies who barely innovate (General Motors, etc.)


Air pollution is completely orthogonal to the deployment of autonomous cars.


> I’m very skeptical of this authoritarian market logic whereby we are told we must sacrifice others lives for the sake of “progress,” which is almost exclusively tied up with one person or business’ individual profit.

This is a fairly nuanced Bloomberg article. You're making it sound like Elon himself is proclaiming that some sacrifices must be made for the sake of his self-driving Teslas.


Of course. Even the article mentions Elon Musk never ever saying anything even slightly off: "uh, now I took my hands off the wheel and it still drives well, but that would be illegal, don't do it, wink wink, nod nod".

Far better to have someone else do the job. Independently, of course: it's definitely NOT Elon Musk saying that, no no no. Nod, nod, wink, wink.


People's lives are being sacrificed now, teaching us how to build safer cars. Some large percentage of the time, seatbelts prevent death or reduce the injury. Some small percentage of the time, they induce further injury or death.

Do we stop using seatbelts until they are 100% effective? Was it a mistake to start putting seatbelts in cars before it could be proven that they absolutely wouldn't kill anyone?


Also look at what regulations have done to seat belts. They are almost all identical and the tech hasn't changed much in quite a while. Are they completely optimal, or are they merely sufficient for liability by meeting government standards?


There is the question of what we are trying to optimize for.

If you are trying to attain maximum safety for a belted passenger, then no, current passenger seatbelts are not optimal. This is clearly evident from the far better seatbelts and driver safety devices used in motorsport.

Current seatbelts represent a tradeoff between safety if you use it, and how annoying they are to use (and therefore, how likely people are to get annoyed and not use them).


It's a liability issue.

Say someone gets hurt from a seatbelt. If it's designed exactly as required by regulation, there is an immunity for the company (IANAL so I don't know the legal term for this)

If you try to make it better, and do make it better for most cases, but worse for some edge case, then you have a liability issue in that your 'negligence' caused injury that wouldn't have happened otherwise.

The stance on civil liability in America often stifles innovation in many areas regarding life safety. Tesla is learning this quickly.


> does anyone doubt that with massive, publicly-funded investment we could not achieve them?

Yes, I doubt it. The scale of the problem is massively bigger than anybody will admit right now. To get to the point where an automated car will kill fewer people than human drivers, the car must have an understanding of its environment. Machine learning isn't going to get us there.


I say variations of this all the time. I don't get why people just assume AI will be better.

It's like questioning the existence of their God.


Sci-fi conditioning mixed with a hefty assumption that people are computers.


Why is an understanding necessary, and how do you even define that?

Humans are not as hard to beat as you think they are. They have lots of limitations:

- Can only see in a single direction at a time.

- Vision-only, no radar

- Prone to becoming emotional

- Drive in unfamiliar places where they do not have a full understanding of the environment or local laws

- Can be inebriated, easily distracted, fall asleep

- Selfish: time to destination is often more important than the safety of others

- Majority of people do not follow all traffic laws (speed limits, turn signals, lane-usage)


> To get to the point where an automated car will kill fewer people than human drivers, the car must have an understanding of its environment.

Are you sure about that?

> Machine learning isn't going to get us there

Do you mean ANNs aren't going to get us there?


Binary computing is never going to emulate wetware on a reasonable scale. A few neurons? Sure. It's going to take a different paradigm, likely biological.

Evolving FPGAs (by ignoring the digital abstraction) is a window into why.


Citation needed.

Let's look at the scale. A brain neuron is 4 microns to 100 microns in size, whereas transistor sizes are now pushing 3 nm. That's a lot of free space to work with; why can't we easily simulate many neurons within the space of a biological neuron?


Start with something easier, like simulating water. Our best ab-initio simulations, last time I checked, don't correctly predict its phase-change locations. Digital abstractions deliberately suppress every single other interaction unless explicitly accounted for. Those interactions happen "for free" in real systems. We need to boil quite a bit of water to even approach simulating a small subset of atoms.

Consider the 3-body problem. Without a math breakthrough, we can only approximate, and even with a math breakthrough, we can still only approximate (because real systems are not closed). This idea that we can simulate something misses the point. It's always going to be a guess, on bounded memory, because we can't copy real states.

Letting go of the digital abstraction provides a window into why this is hard.

https://www.damninteresting.com/on-the-origin-of-circuits/ http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50....

These circuits evolve to become hyper-specific to their environment. We have no hope of repeating those results with software; it's the physical attributes of the hardware that are being exploited.


Sure we can only approximate, but why wouldn't those approximations be sufficient?

We were able to solve the n-body problem with sufficient accuracy for Voyager to successfully encounter and gravity-slingshot around Jupiter twice and every other outer planet, with only the computing power available in 1977.

The mechanism of action of a neuron is not very complex. It follows the all-or-none principle along with an activation potential which is easily emulated in a binary circuit. It does have additional chemical inputs which can raise or lower activation potential. The only difficult part is sufficient connectivity, but we can simulate that, no need to use an FPGA to get actual connectivity. Modern transistors are super-fast and super-small compared to neurons so you have quite a bit of room to sum inputs, check activation energy, check chemical levels, and lookup + signal the 'connected' neurons.
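As a rough illustration of that all-or-none behavior, here is a deliberately minimal sketch (the weights, threshold, and "chemical" modulation value are invented for illustration; real neurons and real ML systems are far messier):

    # Minimal threshold-neuron sketch: sum weighted inputs and fire only if the
    # activation threshold is exceeded (all-or-none), with a crude "chemical"
    # modulation term that raises or lowers that threshold.
    def neuron_fires(inputs, weights, threshold=1.0, modulation=0.0):
        activation = sum(i * w for i, w in zip(inputs, weights))
        return activation >= (threshold + modulation)

    # Three presynaptic inputs with different connection strengths.
    print(neuron_fires([1, 0, 1], [0.6, 0.9, 0.5]))                   # True  (1.1 >= 1.0)
    print(neuron_fires([1, 0, 1], [0.6, 0.9, 0.5], modulation=0.2))   # False (1.1 < 1.2)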


We will see! Our intuition that we understand something is often incomplete in critical ways. Exciting times ahead either way.


> does anyone doubt that with massive, publicly-funded investment we could not achieve them?

Possibly, but the far harder thing to achieve is massive publicly-funded investment in the first place.


"does anyone doubt that with massive, publicly-funded investment we could not achieve them?"

Yes, I very much doubt it in the US at least. We are many generations deep into an extremely nepotistic and corrupt bureaucracy. I have no faith that the US Federal Government is capable of handling even mundane projects at this point. We would require a serious shift in patriotism, education and accountability before I could believe the USA of 2020 is capable of doing something like the moon landing again.


I'm confused by this sentiment as the technology was invented using this means to begin with:

https://en.wikipedia.org/wiki/Self-driving_car#History

Why does government research always seem to become a corrupt, inefficient bureaucracy the moment that private industry runs off with its solutions to profit from?


The US government has done some of the greatest technological accomplishments in history. The people who did them are largely the Jews who had immense gratitude and pride for being a part of the nation that halted Nazi Germany. It's the children of the soldiers who stormed Normandy. It's the citizens of a nation riding the most impressive economic wave in history.

As a result, in the 50s and 60s, there was no better job to be had than working for the government. Astronauts were like the modern day equivalent of rock stars who also started Google. It wasn't the money, but the pride of being a part of yet another great accomplishment that the greatest nation in history was embarking on. But that's gone today.

We still had some of this going into the early 90s, and DARPA and the NSA I'm sure could attract and underpay enough talent to make up for other government bloat. But in 2019, we have lost this national opinion (or delusion).


It plays perfectly into the neoliberal ideal of expansive personal freedoms, enabled through technology, purchased by capital.

Why don't we see any fully autonomous buses? You could run buses 24/7 without worrying about drivers. The difference: people aren't going to pay 50k for a bus seat.


If you choose traditional vehicles you are also sacrificing lives... is that not clear?


It's not like we are sacrificing lives to get there. Data shows that, at the margin, Autopilot prevents more accidents than it causes.

So, the headline of the article is wrong IMO, but in the article the author makes the correct points.


What data shows this? Has there been a public release of sufficient amount of data to support your claim?

Tesla Autopilot has driven in total 1.6 billion miles on highways, in clear conditions, in modern high-crash-safety cars. There have been 3 fatalities.

The equivalent number for 1.6 billion miles across all road types, all weather conditions and all types of vehicles would be 22 deaths in the US. But in the UK that number would only be 8, illustrating that safety is tightly linked with road conditions and vehicle demographics.

As the Autopilot is only used in the easiest conditions and safest cars, there is simply not enough statistics to say whether they are safer or not.

You would have to collect data on a single version of the system for half a decade before you could say anything certain.
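For anyone who wants to check the arithmetic behind the 22 and 8 figures above, a back-of-the-envelope version (assuming fatality rates of roughly 1.4 per 100 million vehicle miles for the US and 0.5 for the UK, which appears to be where those numbers come from):

    # Expected fatalities over the same 1.6 billion miles at assumed national rates.
    autopilot_miles = 1.6e9
    for country, rate_per_100m_miles in [("US", 1.4), ("UK", 0.5)]:
        expected = autopilot_miles / 1e8 * rate_per_100m_miles
        print(country, round(expected), "expected fatalities")   # US ~22, UK ~8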


Exactly the data you cited. I agree that we need more data, but it does seem that we're not "sacrificing lives" to get autonomous driving to work, doesn't it?


It does not. The expected number of deaths for 1.6bn miles in expensive cars in perfect driving conditions is probably 0, Tesla had 3.


For comparison, drivers in Los Angeles County drove 1.6 billion miles before 9 AM this morning.

Congratulations, Tesla had almost a morning's worth of driving data.

And yet somehow with all that data they can't make a car that can drive itself in a parking lot.


I asked a question on Twitter a few weeks/months ago, and didn't receive a very good answer:

At what point do we have enough data?


To be honest: probably at around 50 billion miles driven with a single version of the autonomous system without any fatalities. That would be a milestone where you could say without a doubt "we've made a leap change to car safety".

The gold standard for other safety-critical systems like medical devices, commercial aviation, etc. is less than one fatality per billion operating hours; a 50 mph average gets you 50 billion miles. A vision for car safety that doesn't approach this level isn't much to write home about; it would in fact be difficult to distinguish from just waiting 15 years to let the statistics incorporate the safety systems that already exist today.
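The 50 billion miles figure follows directly from that gold standard and the assumed 50 mph average speed:

    # One fatality per 1e9 operating hours at an assumed 50 mph average speed
    # works out to one fatality per 50 billion vehicle miles.
    hours_per_fatality = 1e9
    avg_speed_mph = 50
    print(hours_per_fatality * avg_speed_mph)   # 5e10, i.e. 50 billion miles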


According to Wikipedia there were 1.16 fatalities per 100 vehicle miles traveled in 2017. Why do you say autonomous cars need to be over 50x safer to be better than human drivers?

Source: https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...

Edit: Forgot to add year.


Because that number is averaged over all types of vehicles, in all types of weather conditions. It includes people crashing in their '95 Honda Civic. It includes people forgetting to look over their shoulder when switching lanes. Crash survivability and "non-smart" safety features like blind spot warning have increased significantly over the past ten to fifteen years. Just waiting until most of the old cars on the road today are replaced with newer ones will yield a 5x to 10x improvement, if we keep the human drivers, assuming no new tech at all.

For instance the UK is already below 0.5 fatalities per 100 million vehicle miles. Car demographics matter a lot.

So if autonomous is going to give us a significant improvement, it has to be something like 50x better than the average for humans today.


Interesting. Do you have statistics for more modern cars only that are comparable to a Tesla (without the Autopilot features obviously)?

PS: I meant 100 million miles in my comment above as you probably assumed :-)


It's kinda hard to get those statistics, as new cars haven't been driven that long, yet!

But you can probably get a good estimate from extrapolating the red curve in this plot using a power law:

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...

Also you can look at the safest countries worldwide, compared to the US, to get a feel for what's achievable with today's tech. Looks like Norway is the safest, with 4x better safety statistics than the US. That's probably a combination of better driver education and safer vehicles, so not straightforward to compare. So really, you would need something like a 50x improvement over current US rates for it to be a notable achievement.

https://en.wikipedia.org/wiki/List_of_countries_by_traffic-r...


1. We have democratic checks and debate now, regarding this issue.

2. I see no evidence that "massive, publicly-funded investment" would achieve this technological goal faster or better than via capitalistic means. Do you have any?

3. Lives will be lost regardless of how the tech is developed. Surely this is obvious. So what are you even talking about?


Re: 1. above, who do we sue when the autopilot kills someone? The owner of the vehicle? The manufacturer? The driver behind the wheel (if there is one and it's not the same as the owner)?


That is an area that is still being debated. I suspect it will depend on the level of autonomy the car claims to provide. If it is level 5, and perhaps doesn't even include a steering wheel or other means of controlling the vehicle, then there would be no rational basis to hold anyone inside liable for failure. If, on the other hand, it is level 3 or 4, and does provide a steering wheel, and prominently informs the user that they are ultimately responsible for the vehicle, then the driver would be liable in this case.

This technology is still evolving, though, and the law will need to be adjusted accordingly.


Ideally, every self-driving car would have secondary insurance baked into its price. If someone dies, it's an insurance payout of a standardized amount.


I don't think the plaintiff's bar is going to allow standardized payouts on deaths or injuries. That would ruin their business model, and they have a lot of influence. The question will be whether the inevitable losses due to jury awards can be sustained by any of the manufacturers.


>1. We have democratic checks and debate now, regarding this issue.

Not functionally, consistently, or meaningfully. Many of these companies have learned how to ingratiate themselves with party machines in the markets where they operate, such that this decision-making is almost entirely excluded from public check or debate. Others, like Uber, willfully break laws with little consequence, for much of the same reason as the above.

>2. I see no evidence that "massive, publicly-funded investment" would achieve this technological goal faster or better than via capitalistic means. Do you have any?

As someone else pointed out, it's not about going faster but going more safely. But the whole technology that these companies are building on top of was arrived at by this kind of public research to begin with:

https://en.wikipedia.org/wiki/Self-driving_car#History

>3. Lives will be lost regardless of how the tech is developed. Surely this is obvious. So what are you even talking about?

That private companies not be the nearly sole arbiters of what lives are worth sacrificing.


I don't think anyone is suggesting that publicly-funded investment would make this happen faster. Just safer.

It's an inherent characteristic of capitalism that the primary driver is profit. I mean, it's the whole point of the system. What is the value of one human life within this system? Historically that value has not been high.

Now, when people saying "publicly-funded investment" it's a popular conception that this means a vast amount of government waste, because everything the government does is wasteful, right? How is three or four companies spending huge amounts of money to pursue the exact same goal not wasteful?

Look at things like the internet itself. Public funding of technology can work.


If

* capitalistic automatic cars mean 2000 deaths per year, reducing linearly to zero after 10 years

* socialist automatic cars eventually mean 1200 deaths per year, reducing to zero after 30 years

* manual cars mean 1000 deaths per year, not reducing

Then after 11 years total deaths will be the same in all 3 cases (11,000), but after 30 years it will be 11k capital, 18k socialist, and 30k and growing with no changes.

However that's of no comfort to the families of who died in the start, especially the ones who didn't opt in (as was typical with flying when it was more dangerous).
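For what it's worth, here is the arithmetic behind those (admittedly made-up) numbers, assuming the first two scenarios decline linearly to zero:

    # Cumulative deaths under the three hypothetical scenarios above; the
    # per-year figures are invented for illustration, as stated.
    def cumulative(start, years_to_zero, horizon):
        total = 0
        for year in range(1, horizon + 1):
            if years_to_zero is None:            # never improves
                total += start
            elif year <= years_to_zero:          # linear decline to zero
                total += start - start * (year - 1) / years_to_zero
        return round(total)

    for horizon in (11, 30):
        print(horizon, "years:",
              cumulative(2000, 10, horizon),     # "capitalistic" scenario
              cumulative(1200, 30, horizon),     # "socialist" scenario
              cumulative(1000, None, horizon))   # manual cars
    # 11 years: 11000 11000 11000
    # 30 years: 11000 18600 30000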


Is there any real point in debating numbers like this that are just plucked from the air?


The relevant point is

> I don't think anyone is suggesting that publicly-funded investment would make this happen faster. Just safer.

The point is that while it could be safer per year, being slower means that it's not safer.


Maybe OP is referring to a scenario where some billionaire appears with some bad statistics: if we let him kill 1,000 people, he assures us he will probably invent something that saves double that.

Putting those billions into removing drunk or tired drivers from the roads and enforcing the laws and rules would save more people.


So are you volunteering?


What kinds of accidents is autopilot going to prevent that automated accident avoidance cannot? Why does it have to drive the car to achieve the safety benefits?

If you watch videos of autopilot preventing accidents on YouTube, it involves two stages: first the car detects a situation is likely to lead to a collision, then it brakes and steers the car to avoid it. There is no reason for autopilot to be controlling the vehicle before anticipating a collision.

By contrast the deaths involving autopilot have all revolved around autopilot failing to perceive some object, or swerving the car into it. I don't know of any deaths that were caused by Tesla's accident avoidance feature.


>What kinds of accidents is autopilot going to prevent that automated accident avoidance cannot? Why does it have to drive the car to achieve the safety benefits?

Accidents from people who shouldn't have been driving in the first place. Part of the benefit of self-driving cars is not needing a qualified human, so the blind, elderly, children, drunk, etc can [still be] drive[n around].


And people who are not very good at driving! Myself included.


That's why taxis and mass transit exist. If you suck at driving, please don't.


There is no reason to believe that a shitty driver with good automated accident avoidance would be worse than a self-driving car.

Especially since no self-driving car is better than a shitty driver in 2019.


Maybe society should invest in providing services for these people rather than enriching more corporations. There are plenty of unemployed people, we can ferry a lot of passengers, no need for robots on the streets.


Hmm, unemployment in the US is very low. What is happening is underemployment through gig economy jobs like Uber. You seem to be advocating for more of that.


> What kinds of accidents is autopilot going to prevent that automated accident avoidance cannot?

Many accidents in stop-and-go traffic jams, people falling asleep due to being tired or bored (yes, that is a real issue) and veering off course, pedestrians stepping in front of the vehicle...


Automated accident avoidance covers accidents in stop-and-go traffic jams. My 2012 Volvo already has a rudimentary system along these lines that is designed specifically for stop-and-go traffic.

Other Volvos have tools for detecting drivers losing alertness. Technology for helping to keep cars in their lanes already exists.

Technology for stopping cars when pedestrians and bicycles appear in front and/or on the side when making a turn at an intersection already exists.

Most of these systems need to get much, much better, but I think it's very reasonable to separate them from the general capability of driving a car autonomously from point A to point B.

If we can develop a car to drive autonomously from point A to point B, and if we are granting that this system can also solve the problems you list above, I believe we can also develop most of the safety features that do not involve driving the car autonomously from A to B.

I think this is more than a minor point to quibble over, as there are many extremely difficult problems to solve for fully autonomous driving that are unrelated to this kind of safety.

For example, autonomous driving involves recognizing when the map is wrong. Despite driving maps being a so-called mature technology, a variety of map programs tell me to turn left on certain streets during prohibited times.

There is a road that was closed at one end more than a decade ago. When summoning a ride-share to an address on that street, we always have to text the driver not to follow the map directions: the map tries to send the car down the closed road, and today's ride-share drivers are terrible navigators and literally get lost.

Solving autonomous driving solves problems like those AND safety problems, true. But if we just want to solve the safety problems, we can, and we can solve them faster and sooner by focusing on them.


Distracted driving is a leading cause of accidents, and if you spend any time driving at all it seems like practically the majority of drivers are distracted.

Particularly when driving on AutoPilot, I notice the drivers around me tend to be highly distracted. On top of that, the bottom quintile of drivers is genuinely horrible at driving in practice.

Functional automation brings stability and predictability to driving patterns which will eliminate more accidents than purely emergency response systems do. The human element is responsible for $877 billion in damages, pain, suffering, and lost wages per year according to the NHTSA. I believe it's a moral imperative to massively reduce that number.

Emergency response technology works in a small percentage of cases to reduce the severity of an impact. It often does not avoid the impact at all, merely reduces its severity, and the systems on the market are particularly bad at avoiding pedestrians. On top of that, these systems are very good at annoying drivers with false positives, to the point where a plurality of drivers want to switch them off.

Aside from removing distracted and poor human operators from the equation, another reason self-driving can achieve a lower accident rate than emergency response systems is that the self-driving system (1) is aware of the environment, (2) keeps the car within a safety envelope at all times, (3) is in full control of the inputs, and (4) has a set destination, so it knows where it is trying to go.

The emergency system may have (1) awareness of the environment, but is lacking (2) - (4). This puts it at a strict disadvantage to full self-driving.

For example, just take the simple situation of approaching a stopped car 500ft ahead at 25mph. The road is currently one lane but expanding to 2, the stopped car is turning left in the left-turn lane. When does the AEB apply the brakes? The answer must always be "at the last possible second," because the system doesn't know where the driver is headed and is not supposed to intervene during normal driving.

An autopilot knows if it is planning to stay right and proceed through the light, or if it's going to come to a stop behind the lead car in order to turn left. It can set an appropriate speed at all times based not only on the environment, but on the desired path ahead. It is planning and strategizing instead of merely reacting (and not at the last possible second either).

A functional AutoPilot comes to a full stop at a stop sign, every time. It always signals before merging. It knows how to zipper merge. It doesn't get on its cell phone on the highway or fall asleep at the wheel. Personally, I think FSD should be able to prevent an order of magnitude more accidents than a purely emergency-response system.

In practice, of course, we do both.


These are all present (in one way or another) in many cars that don't drive themselves otherwise. (They're not that great at the moment, but these are part of accident avoidance, not self driving.)


If you have all the cars driving themselves they can function more like a swarm/grid of vehicles managed externally vs individual agents working against each other. This means you can have cars going like 80-100 mph but driving bumper to bumper.


There's no evidence that this sort of system is possible, given the latency of wireless communications and the probability of momentary glitches in real-time systems lasting long enough to cause a cascading set of failures leading to crashes.


Well, maybe. We'll likely never have cars zippering through intersections at high speed, but we could likely increase intersection throughput by mind-boggling amounts while still allowing multiple system failures without killing anyone.

Say the power cuts out in a car, or worse, its location updates no longer match the surrounding cars' sensors. The cars in its mesh could immediately alert all the cars around that it is no longer updating its location and might be anywhere, so all cars in the area should go into a low-speed safe mode. That should be doable to some degree.
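A minimal sketch of what that fallback might look like, purely to illustrate the idea (the class, field names, thresholds, and speeds below are hypothetical assumptions, not any real vehicle-to-vehicle protocol):

    # Hypothetical sketch: if a neighbor's position updates go stale, every car
    # nearby drops to a conservative crawl speed. Not a real V2V protocol.
    import time

    STALE_AFTER_S = 0.2      # assumption: updates older than this are suspect
    SAFE_SPEED_MPH = 15      # assumption: crawl speed while a neighbor is unaccounted for

    class MeshCar:
        def __init__(self, car_id):
            self.car_id = car_id
            self.last_update = {}       # neighbor_id -> time of last position fix
            self.speed_limit_mph = 80   # normal platooning speed in this scenario

        def on_position_update(self, neighbor_id):
            self.last_update[neighbor_id] = time.monotonic()

        def check_neighbors(self):
            now = time.monotonic()
            for neighbor_id, ts in self.last_update.items():
                if now - ts > STALE_AFTER_S:
                    # Neighbor might be anywhere: be conservative, slow the area down
                    self.enter_safe_mode(reason=f"lost track of {neighbor_id}")
                    return

        def enter_safe_mode(self, reason):
            self.speed_limit_mph = SAFE_SPEED_MPH
            print(f"{self.car_id}: entering safe mode ({reason})")

    car = MeshCar("car_42")
    car.on_position_update("car_17")
    car.check_neighbors()   # nothing stale yet, stays at 80 mph

The key design choice is that the failure response is area-wide and conservative: one stale car drops everyone nearby to crawl speed instead of trusting its last known position.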


It's not that different from thousands of human drivers coordinating using waze during rush hour.


You don't need the cars to be networked to get large benefits from computer control, and networking adds a lot of complexity.

At 60 mph the typically quoted stopping distance, including reaction time, is around 80 meters[1], of which roughly 20 meters is reaction distance. In modern cars like Teslas it's more like 40 meters[2], but in either case about 20 meters of that 80 (or 40) is human reaction time.

If you can cut that down thanks to much faster computer reaction times, you get a lot more road throughput. A logical step after that is to consider networking the computer control, but starting the conversation there seems to me to be needlessly jumping several steps ahead.

1. https://www.brake.org.uk/facts-resources/15-facts/1255-speed

2. https://www.theverge.com/2018/5/30/17409782/consumer-reports...
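For a rough back-of-envelope sense of how much of that distance is reaction time, here is a minimal sketch; the reaction times and deceleration are illustrative assumptions, not figures from [1] or [2]:

    # Stopping distance = reaction distance + braking distance.
    # All numbers below are assumptions for illustration only.
    def stopping_distance_m(speed_mph, reaction_time_s, decel_mps2):
        speed_mps = speed_mph * 0.44704              # mph -> m/s
        reaction = speed_mps * reaction_time_s       # distance covered before braking starts
        braking = speed_mps ** 2 / (2 * decel_mps2)  # v^2 / 2a
        return reaction + braking

    # Assumed: ~0.75 s human reaction vs ~0.1 s computer reaction,
    # ~9 m/s^2 deceleration (dry road, strong brakes).
    print(round(stopping_distance_m(60, 0.75, 9.0)))  # ~60 m
    print(round(stopping_distance_m(60, 0.10, 9.0)))  # ~43 m

Even with identical brakes, cutting the reaction time alone removes roughly 17 meters of stopping distance in this example, which is the headroom described above.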


You mean like a train? Why not have trains instead?


Because trains can't take exits... unless the whole train wants to.


I love trains but for some stupid reason >$200 million per mile is the average cost for urban rail. We already sank a huge upfront investment into the interstate highway system, so we'll have to settle for a private enterprise solution with semi-autonomous vehicles. At least, until the US Congress becomes less dysfunctional...


Rails are expensive and limiting.


All cars are not going to the same place.


Stops.


I think bumper to bumper is a bit much; you still need enough distance between cars to compensate for unexpected events.


We will have flying cars long before that happens


Tesla wouldn't be where it is if it just had mere accident avoidance. Fewer would buy their cars and so they'd have worse training data and less money/viability. They probably wouldn't exist. The author is essentially correct but you don't see the whole picture because there aren't news articles when a Toyota or BMW LKAS system fails and causes an accident. We'd just have incumbent car manufacturers with worse systems.

In fact, I'd probably go the other way and advocate for requiring these systems on cars, similar to seatbelts. At the very least collision avoidance and blind spot detection. Some manufacturers still make them options or, worse, reserve them for the highest premium trims, which is kinda bullshit given the technology really isn't that expensive.


I don't think Tesla can claim credit for the development of collision avoidance. They have been vocal proponents of it, but so has Volvo. Tesla just happened to start manufacturing mass-market vehicles around the time of the ML revolution in the 2010s.


The automated accident avoidance systems will never become advanced enough without something like autopilot driving them.


NHTSA found that the Economic and Societal impact of Motor Vehicle Crashes, when taking into account harm from pain & suffering and lost wages, is about $877 billion per year. [1]

That means we have a tremendous incentive to make investments which make the roads safer, and lower the crash rate. The single biggest technological advancement to make this possible (without eliminating cars) will be autonomous driving.

Today the lowest-hanging fruit for self-driving is algorithmic work that lets the car navigate safely based on its own sensor suite, mostly ignoring map data that could be unreliable. Reliance on highly accurate maps and roadway infrastructure is not really a pressing concern, because the cost/benefit of improving the internal algorithms is much higher than the cost/benefit of adding roadway infrastructure that only a small percentage of vehicles could leverage.

I believe there will be a crossover point where safety for a large share of vehicles can be meaningfully improved by changing road infrastructure to support self-driving. When that day comes, I hope the USG will be ready to invest on the order of $1 trillion to deploy that infrastructure and, ultimately, mandate that all new vehicles on public roads be self-driving.

I've said it in many comments before - it is a moral imperative to get computers and not humans "behind the wheel". We can afford to spend trillions of dollars to get there, but that's not to say we get there faster by spending trillions of dollars today.

[1] - https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/2015sae-blin...


The title makes it appear as if "sacrifices" must be made, which is either disingenuous click-bait or misguided, as it's categorically untrue.

The whole rollout of autopilot is based on the logic that its deployment is scaled as a function of the number of lives it SAVES. Meaning that as the percentage of automation increases, you should be able to see a net DECREASE in casualties in automated vehicles compared to non-automated vehicles. If we are not seeing this, then the rate of deployment may be too high, and the software needs to be corrected until it is.

Nothing in life is perfect, there will always be accidents, the only question is can we create tools which will decrease the percent chance of them happening.


The problem is the "it could": there is no proof that Tesla, after testing on public roads, won't fail or go bankrupt.

I am not aware of any proof that neural networks combined with current tech are enough.

I know there are some misleading statistics floating around, but those are biased, not correct, and not proof.


>The problem is the "it could": there is no proof that Tesla, after testing on public roads, won't fail or go bankrupt.

Tesla will just raise more capital. It's unlikely that they will go bankrupt anytime soon.

Unfortunately, they are nowhere near the misfortune of Faraday Future (which just announced bankruptcy today).


Assuming Tesla does excellently, it could instead be Uber, Waymo, or company X in country Y, where some politicians sign a piece of paper letting them kill people on the streets, and later X fails for some bizarre reason. Or maybe they wanted to use n cameras but had no idea they actually needed double that, so all that testing, and the people killed, was for nothing.

Then you have 12 companies testing with real people and not sharing data, killing about 12 times more people than needed.


People were having the same discussions about airbags. They ended up saving quite a number of lives. The media should do more to contribute to a better world instead of writing clickbaity articles that make ad revenue. It's astonishing that something like drunk driving, for example, is portrayed as less scary than autonomous vehicles.


I’ve posted this several times on HN but I’ll post it again.

I once had the opportunity to interview a renowned cancer doctor and researcher at MD Anderson hospital. He told me that we would never again see the kinds of advances we saw in the early days (70’s and 80’s) of cancer research because those days were the Wild West. He basically said they did many reckless things, but that’s how they stumbled upon treatments such as the protocol for using steroids with chemo for acute lymphoblastic leukemia.

Ok, so high-risk experimentation led to exaptation and innovation. Not sure I want to see that happen with self-driving cars, but the other part of me knows that’s how complex systems “evolve”.


Think how many lives we could save by loosening the regulations on medical experimentation. If we killed only a relative few, we could save billions.

Good thing we already have decades (even centuries) of research ethics developed to deal with situations like this. Too bad Tesla is ignoring them. The proposal in the article to treat these like (potential) medical breakthroughs is spot on.

I think we all agree that autonomous cars will eventually save lives, but that was true 50 years ago too. The question is whether these particular autonomous cars will save lives. We’ve developed the ethics. We’ve developed the standards of evidence. It seems like a no-brainer to require them here.


Wouldn't public transport "save millions" much more efficiently?


Yes, but we haven't worked out how use public transport to divert billions of dollars of investment into paying machine learning specialists telephone number salaries, so it's not actually plausible.


Sarcasm aside, the discussion seems to be driven by technology fetish and a promise to buy / sell more stuff.


Looks like HN has a new strapline!


Sounds like the next YC startup...


Like many people I accept that there are unavoidable deaths in the name of progress.

What I don't like is that I cannot opt out of this experiment. I believe many of these deaths are needless and due to cost savings and sloppiness.

Tesla to me seems less like a center of excellence and more like a group of people who can put up with Elon. Presumably made easier by telling him what he wants to hear.

While the average driver may be bad, the average hour driven is by good drivers.

Plus I don't like the idea of a Tesla crashing into me. Seems like something that could be weaponized.


We could fix autopilot problems across the entire USA in approximately one year.

However, it would not have a winner-take-all, Wall Street-frothiness, pump-and-dump aspect to it.

You put RFID tags in the asphalt and on the curbs, some kind of beacon on the telephone poles, or a 'wire in the road' (like a smarter version of the working system GM built in the 70s), and then, because it is publicly owned infrastructure, everyone can use it.

That goes against the current hypocritical faux-libertarianism espoused by some of our tech elites, but it would in fact work.

I said a year, because that is how long it would take to install it.

Edit to add: for fun, you can go on Mouser or Digikey and determine how to put the system together with off the shelf parts. Just add weather proofing...


The US is known for its world-class infrastructure and its regular maintenance. https://www.businessinsider.com/asce-gives-us-infrastructure...


Systems created by humans adapt to humans, especially where they carry significant downside risks. The risk of death or injury is high enough in driving motor vehicles that we've taken a ton of precautions (licensing drivers, criminalizing reckless behavior, road signs, stop lights, intersections design, freeway design) to try to minimize those risks, and the designs have adapted to how humans drive.

Because there is skin in the game, people drive carefully -- drivers who are consistently reckless are removed from the system. Pedestrians and other non-drivers have adapted their behavior as well to reduce the risks.

Autonomous drivers have the problem that they do not carry skin in the game -- they are not removed from the system for bad behavior. In the short term that just gives us an ethical quandary about who is responsible for deaths caused by autonomous systems.

In the long term it presents much higher risks, as non-autonomous drivers (both drivers who are not autonomous and humans who are not driving) will adapt their behavior to their mental model of how autonomous vehicles behave. Pedestrians will adapt their behavior correspondingly, creating even more dangerous scenarios, because interaction with a sufficiently complex system is inherently unpredictable. We rely on the fact that we can build an approximate mental model of other drivers' behavior to predict what they will do, and we constrain that behavior via rules and legal deterrents so that they are more predictable.

When we are 50% autonomous, what happens to someone who deliberately tries to cross a busy highway? For other systems, like trains, we have warnings: "implacable vehicle coming that will destroy anything left in its path." But for autonomous vehicles? The priority on keeping humans alive will cause situations like this to require stricter behavioral systems to limit people's behavior, yet the complexity of the underlying behavior is real -- that is, most of the time, crossing an autonomous freeway is actually a safe operation because of the way the cars are designed, so there's no feedback that will cause pedestrians to modulate their behavior. Similarly, the driving systems themselves are designed with some elements of human behavior implicitly included; as those behaviors drift due to the coupling of complex systems, it's not clear how we can directly adapt the behavior of the autonomous drivers.


Caution: Extreme opinion below.

Artificial Intelligence is a tool. Just like anyone who uses a hammer, backhoe or dump truck is responsible for what happens when they are using it, the same is true for AI.

As a tool, all AI systems should be designed to always do the most harm to the user of the AI first. It should be embedded into every autopilot-like system, and users should be aware of that choice. It's the only moral and ethically correct solution.

Let's say your AI-controlled car is driving at speed and turning the corner, there suddenly appears a little girl in the road in front of you. It can either swerve into a wall or run down the girl. A human might not be able to make the decision in time, but an AI system would. It needs to be programmed to always hit the wall.

Though this might seem extreme, the opposite is completely immoral. If it was a human driving, one could assume bad luck to be put in that situation: aka innocent until proven guilty. But for an AI system, we have to assume the opposite: The AI should always be presumed to be faulty. Therefore the user of that AI is culpable of putting it in that situation.

And just like any other tool, the manufacturer of that tool is legally responsible for its quality and reliability.

The opposite of the above means we'll all be riding around in autonomous tanks, aggressively maneuvering around each other at higher and higher speeds, pedestrians and others be damned.


What bothers me about autonomous-driving software is that while you might argue that drivers are free to take the risk, I as a pedestrian (or another driver) did not sign up for the risk.

There is an entirely new set of failure modes that we are not familiar with, and particularly as a pedestrian, it worries me greatly.


It reminds me of how the violence and harm in a democracy vs. a tyranny isn't evenly distributed: there may be very little violence (due to control mechanisms) in a tyrant-controlled state, at least until you become their target, and then slaughter and mass murder can occur.


If anyone else balked at the pilot story, here it is (and it's a bit juicy): https://en.wikipedia.org/wiki/Northwest_Airlines_Flight_188


The ending is telling: [I trust the AV, but not enough to risk hitting a cyclist right during the interview. More safety...at the expense of people sans Teslas] (paraphrasing mine)


All of these efforts will eventually result in an amazing world where there are a lot fewer or zero crashes. Perhaps even change how cities or homes are built/organized. No doubt.

But the really troubling thing is the attitude and approach: this is not an "experimental software rollout", which is how Tesla somehow seems to treat it. You cannot use live human beings on real streets, with very little safety net, to "roll out" your self-driving torpedoes. That just doesn't make sense and seems very irresponsible.

I would rather not have silicon valley mentality when it comes to cars or healthcare or even banking. Real people's lives are at stake.



Anyone have a link to get past the paywall for this article?



Cool! That's me.


Hey, cool! That's me in the story


I honestly hate this "false baseline" story line that applies to almost all anti-AI/automation comments. A person who is focused may be better than a self-driving vehicle, but even that is debatable. However, real-life self-driving is already much safer than real-life human drivers. Think about pedestrian and bike rider safety too!!


> However, real-life self-driving is already much safer than real-life human drivers

Citation needed


The citation is that there is a Wikipedia list of all the fatalities from self-driving cars with only six entries on it, and most of those were still due to human error. https://en.wikipedia.org/wiki/List_of_self-driving_car_fatal...


You forgot to consider that there are still drivers in these cars, so each time the human intervened, that intervention could have prevented an accident. I am not sure what the average number of victims per accident is (say 2); multiply that by the number of times drivers had to take over and you have an upper limit. If you don't like this upper limit, then we need better numbers, but these companies keep them secret or under-report the problems.
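As a purely illustrative calculation (both numbers are assumptions, not reported figures): if drivers had to take over 1,000 times and each averted crash would have averaged 2 victims, the upper bound is 1,000 × 2 = 2,000 victims that human intervention may have prevented.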


According to this [1], fatality rates vary a lot from country to country. For example, in Germany and UK the rate is about 1/3 of the US rate.

Will you cherry pick the geography to make your argument?

[1] https://en.m.wikipedia.org/wiki/List_of_countries_by_traffic...


That is not a good citation given the number of autonomous cars on the road compared to the number of user-driven cars.


Due to human error because they are only Level 2 or 3 self-driving, which requires a human driver. Which means they clearly are not safer than human driving.

Tesla's Autopilot isn't even self-driving. It's fancy cruise control.


Bunk. Autopilot makes very strange decisions during normal operation -- decisions that normal human drivers would NOT make, such as merging into an entrance lane on the freeway. This in and of itself is enough to confuse and befuddle other drivers on the road, which makes it more surprising and dangerous than a human driver. Worse, it will actively fight the driver at the helm to make these decisions. Source: I test drove a Model 3; it did just this and nearly caused an accident during the test drive and I could not instruct it to stop.


> during the test drive and I could not instruct it to stop.

Frankly, I do not believe that you are telling the truth. If you can provide a test case demonstrating this, however, it would be international news.

Tesla’s Autopilot can be manually disengaged in several different ways. If none of those worked for you, then that is indeed worth investigation.

I personally doubt your claim, though.


> I could not instruct it to stop

Bunk. You can take over at any time by turning the wheel, pushing the brake, or pushing the accelerator.

Source: I take over from autopilot all the time.


Bunk. I was not made aware that was even possible.

Source: I... was there? :D


Everyone knows that you can step on the brake to stop a car, even with the cruise control on.

At this point, you're not saving face by arguing with fact.


Did you even think to try the brake?


I would compare it to a brand-new teenage driver. I've had plenty of poor Uber drivers make the same sorts of mistakes that Autopilot makes. I'm surprised you had issues taking control, though, because a slight turn of the wheel or a quick tap of the brake disengages the system. That said, Autopilot is definitely not safe or usable on city streets or in the rightmost lane of a highway, because of issues with lane markings. Source: I use it every day on the highway and I absolutely love it.


> and I could not instruct it to stop.

Can you please expand on that?

In my experience with the most advanced available Tesla driving features over the past two years, touching the brake pedal immediately disengages all of that software, and putting more than a pound or two of turning force on the steering wheel immediately disengages autopilot (steering control), but leaves adaptive cruise control on.

You can also use the same stalk you used to engage autopilot to disengage it.

'could not instruct it to stop' is a very bold statement.


It was a bold statement because it was a situation I was unfamiliar with, and the salesman in the car with me wasn't explaining how to prevent it from making such a move. Since the system was alien to me, I wasn't prepared for what I needed to do to do so.


>However, real-life self-driving is already much safer than real-life human drivers

Citation needed. A breakdown by country would also be interesting.


This is a really misleading and inflammatory article.

" reading books, napping, strumming a ukulele, or having sex."

Perhaps it was like that years ago, long before the Model 3, but it isn't like that now. My 2015 Model S would complain long before I could complete a sex act. Or perhaps Bloomberg's hack suffers from premature ejaculation.


The author is alluding to a specific incident, which was posted to porn sites. Elon Musk commented about it on Twitter (because of course he did), saying "Turns out there’s more ways to use Autopilot than we imagined."


I take issue with the headline. The fact that crashes still do and will occur while AutoPilot is on does not mean that AutoPilot is legally or ethically responsible for those crashes until the system is advertised as a Level 4 system. Only at that point can we look at the accident rate of the L4 system and decide whether those accidents are the fault of the autonomy, or whether the algorithms could be improved to have avoided an accident that was even another driver's fault.

Distracted driving is a leading cause of crashes today, and particularly if you use AutoPilot, you really see the ridiculous number of distracted drivers all around you. Many drivers choose to use their phones instead of looking at the road, whether they have AutoPilot or not. It's dangerous in both cases, but less so if AutoPilot is on. In either case, in an L2 system, if the car crashes while the driver is on their cell phone, it's the driver's fault regardless of whether AutoPilot was enabled.


I don't get why people are up in arms over this: the average person drives like an I D I O T. A Tesla on auto-pilot drives above average. And right now it is driving the worst it will ever drive.

Therefore: more auto-pilot, less human death.


I don't think this is true; it may be that many young drivers and some drunk drivers cause most of the accidents.

But consider this: you are a decent driver and you need to send your child somewhere. Do you drive them yourself, because you know you will not speed, text, or be drunk, or do you send them with a robot that is better than an idiot but worse than you?

Sure, if you were drunk or tired, it would be safer to send them with the robot.


Yes, that is the psychological barrier you describe that will be difficult to overcome.

The flaw is everyone thinks they will be a better driver than an AI, even though very few actually will be.

So if I had to choose between my children riding with an "average Joe" / friend / etc. and an AI, I would say: the data says the AI will crash 25% less, therefore it is safer.


I assume you are better than a drunk teen with 3 months of experience coming home from a party. Most of the road deaths here in Romania are caused by young drivers coming from parties late at night, possibly drunk, with the car full of people, so one crash causes a lot of deaths. The "average driver" in the stats is therefore a terrible driver, because the stats are skewed hard by inexperienced, drunk, or tired drivers. Both of the following facts can be true at the same time:

1. Replacing all drivers with AI is x% safer.

2. Replacing you, an experienced, responsible driver, with an AI is less safe.

Does that make sense?


[flagged]


I know



