I don't see the problem if the rate of accidents and incidents is way lower, and it will be.
If the rate of accidents and incidents drops to a tenth of what it is now, the price of insurance will drop roughly in proportion.
Claims go to manufacturers? Manufacturers will insure themselves, and you will pay that price, which will be smaller than your current insurance.
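A rough back-of-the-envelope with made-up numbers: the risk portion of a premium scales with claim frequency, while fixed overhead does not, so the drop is roughly proportional rather than exactly 10x:

```python
def premium(accidents_per_year, avg_claim_cost, overhead=150):
    # risk portion (frequency x severity) plus a fixed admin/profit loading
    return accidents_per_year * avg_claim_cost + overhead

print(premium(0.05, 8000))   # today: ~0.05 claims/year -> 550
print(premium(0.005, 8000))  # 10x fewer claims         -> 190
```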
We know the rate of accidents could drop enormously because we have seen what has happened with airplanes.
I mathematically modeled accident causes for a big insurer in Spain. They have a database of accidents, and most accidents are preventable.
A 10x reduction in accidents is very conservative and easy to get in a short period of time (10 years).
Something as simple as "knowing the road conditions" goes a long way: "this spot after the bridge is dangerous in winter because the shade keeps it wet and ice forms", or "construction trucks leave sand on this curve, which is dangerous for motorbikes", or "children from the nearby school cross the road here instead of taking the footbridge".
When a car can download all this information and make decisions based on it, I expect the rate of accidents to drop dramatically.
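A minimal sketch of what "downloading all this information" could look like; the data source, schema, and numbers are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    segment_id: str      # stretch of road the note applies to
    description: str     # human-readable warning
    months: set          # months (1-12) in which it applies
    speed_factor: float  # multiply the normal target speed by this

HAZARDS = {
    "A-231:km12": Hazard("A-231:km12", "shaded bridge exit, ice forms in winter", {12, 1, 2}, 0.6),
    "B-40:km3": Hazard("B-40:km3", "construction trucks leave sand on the curve", set(range(1, 13)), 0.7),
}

def target_speed(segment_id, month, normal_speed_kmh):
    h = HAZARDS.get(segment_id)
    if h and month in h.months:
        return normal_speed_kmh * h.speed_factor
    return normal_speed_kmh

print(target_speed("A-231:km12", 1, 80))   # icy stretch in January -> 48 km/h
print(target_speed("A-231:km12", 7, 80))   # same stretch in July   -> 80 km/h
```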
It’s not really the rate of accidents so much as who will be getting sued. I think it will take legislation to protect autonomous vehicle manufacturers against certain lawsuits otherwise the cost of liability will be so high the vehicles will be rendered uneconomical as the costs are passed on to consumers, or the companies driven out of business altogether.
The article mentions the General Aviation Revitalization Act. The small aircraft industry was almost wiped out until these protections were afforded. I don’t see how any company could risk unleashing a fleet of vehicles with the threat of uncapped liability looming over them.
Yep. The key point here isn't whether autonomous cars will be better at solving general AI problems related to accident risk than humans [although the OP's extraordinary claim that this will be 'easy to get in a short period of time' requires extraordinary evidence], but who the liability falls upon when they don't, and what happens when this is tested in court.
When a human is accused of an egregious driving error with serious consequences, it's a discrete problem. When software makes an egregious error, it probably isn't, and simply removing that particular instance of software from the road and paying off directly affected parties isn't likely to be a satisfactory resolution.
To take another example from aviation, two 737 MAX accidents attributed to its software see the entire fleet grounded and very serious questions being asked about its future, and by extension the company's. That entails much more expensive consequences and complex legal cases than car accidents for which humans are held responsible, even though car accidents are much more frequent.
The latest cars are constantly collecting data, so automated driving errors likely could be treated as discrete problems. The process would likely be:
1) Determine damages as we currently do, and the manufacturer, which is likely to also be the insurer, pays out.
2) The manufacturer has a fixed amount of time to show that it has re-trained the system on the problem scenario and scenarios close to it, such that the error no longer occurs under similar conditions.
3) Push the update to the fleet; the same accident should never happen again (a rough sketch of the loop is below).
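A loose, purely illustrative version of that loop; every name and number here is a stub, the point is only the shape of the process:

```python
known_scenarios = ["rain_merge", "night_pedestrian"]   # what the current model already handles

def handle_incident(incident, scenarios):
    # 1) settle the claim (the manufacturer acting as its own insurer)
    print(f"paid out claim for incident {incident['id']}")
    # 2) re-train so the incident scenario and close variants are handled,
    #    and check nothing previously handled has regressed
    updated = scenarios + [incident["scenario"]] + incident["variants"]
    assert all(s in updated for s in scenarios)
    # 3) push the update to the whole fleet
    print(f"OTA update deployed: {len(updated)} scenarios covered")
    return updated

known_scenarios = handle_incident(
    {"id": 42, "scenario": "icy_bridge_shadow", "variants": ["icy_overpass"]},
    known_scenarios,
)
```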
If we could do this with human drivers, we would be in great shape.
Even if monotonic improvements to near bug-free complex software were actually feasible, it might be difficult to persuade legal systems that they should be treated that way.
In real world regulatory environments, Boeing's patch to its MCAS system - which was tractable and trivially simple compared with teaching a software program how to react safely to certain types of human behaviour whilst leaving behaviour in other situations unchanged - is undergoing months of testing and the fleet has been grounded for over a year.
I disagree; the MCAS system was a poor attempt at fixing a hardware design flaw with software. Boeing tried to slap larger engines on an airframe that was not designed for those engines to save cost. This made the aircraft more prone to stalls, requiring the MCAS system. Boeing convinced the FAA not only that the MCAS system was sufficient for preventing stalls but also that pilots didn't need retraining. It's likely that MCAS is not sufficient, and that's actually what is taking so long.
Obviously MCAS was not a success, but the passengers didn't die because of the hardware's propensity to stall, but because the software incorrectly handled an edge case it hadn't encountered in testing by fatally altering the aircraft's course. Needless to say, the risks of that happening when software takes 'everything going on the roads' as input and not just 'certain angle of attack parameters' are higher, not substantially lower as they would have to be for self-driving car manufacturers to be unconcerned about regulatory intervention or consequences.
The FAA always takes its sweet time making decisions, mostly for good reasons anybody regulating mass market autonomous road vehicles ought to follow. But what's spooked them here isn't software between a pilot and the aircraft's control surfaces altering the piloting experience (that's been around for a while), but the software making a decision and the pilots being unable to effectively override it...
Right, but the patch is likely taking a long time because they're trying to fix an inherently unstable design with software.
This is possible in aviation but is typically only done on fighter jets, where you have vectored thrust, massive control surfaces, and an ejection seat if things go wrong.
If your goal is safety, software-based stability is a very poor design. I think there's a decent chance the MAX won't be re-certified without physical design changes.
It's not about when it works. It's about the first accident. Should the car have performed better? Who is liable - the owner? the manufacturer? the software company?
The good news is, when an issue is found then all the cars can be updated. The bad news is, conflicting decisions about right behavior. The trolley car problem springs to mind - can an AI car leave the road to avoid a collision? What if somebody is on the shoulder and could be hit? Which accident should the AI choose?
Different municipalities/states/judges will have different ideas. It'll be a madhouse until some ground rules are in place, e.g. "An AI car shall not leave the road". Call it the 'Laws of Automotive Robotics'.
The "trolley problem" is such nonsense in this context. As if a human driver in that situation isn't going to freeze up / knee-jerk-react / not even be paying attention at the moment.
These are split second decisions and people wheel out philosophical problems?
It's always insurance that pays regardless, so it isn't really a part of the equation. It also depends a lot on what kind of accident it is.
Is someone driving into you? Not much physically you can do to avoid it no matter who/what is driving.
Semi-autonomous cars can easily be blamed on the driver, because they're still in charge of the car, just being assisted, not commanded.
Is the car taking unlawful actions on its own? That should probably be fixed on the maker's side. Why should any company be exempt when its product is not up to spec? The price increase to make it compliant is necessary, not a nice-to-have.
And with the trolley problem, you often can't settle philosophical questions with pure logic because of their fractal nature, but you can make it a legal argument. Does the car have the right to kill? If it doesn't, it can't and won't choose. It will be an accident just like any other would be.
You are a rational person. But the law is a machine. A rule has to be made, or arguments will go on endlessly.
Never mind what a person could do; the AI car will have a different expectation. And remember, there's the chance of suing Google when your mailbox is run over. Not just some schmuck.
> I was the logical choice. It calculated that I had a 45% chance of survival. Sarah only had an 11% chance. That was somebody's baby. 11% is more than enough. A human being would've known that.
The trolley problem is silly. Was the car putting itself into an ultimatum? That's the fault of the car. The car can be programmed to not exceed a certain speed calculated from its stopping distance. The tradeoff is that it would drive more conservatively, but that's really the whole point of automating cars. If the car gets itself into a trolley problem, something is wrong with the car, and that's pretty clear.
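As a toy version of "a speed calculated from its stopping distance" (the reaction time and deceleration are assumed figures, not anyone's real parameters):

```python
import math

def max_safe_speed_mps(clear_distance_m, reaction_s=0.5, decel_mps2=6.0):
    # largest v such that reaction distance + braking distance fits:
    # v*t + v**2 / (2*a) <= d
    a, t = decel_mps2, reaction_s
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * clear_distance_m)

# only 30 m of guaranteed-clear road ahead (driveways, parked cars):
print(max_safe_speed_mps(30) * 3.6)   # ~58 km/h under these assumed figures
```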
Then all AI cars will go about 10 MPH. Because something might dart out at any moment.
Our roads aren't built so you can 'not exceed...stopping distance'. Sure in good conditions with dry road and clear shoulders and a wide ditch, you can come up with a figure. But that's not where accidents happen. They happen in town, at intersections, driveways.
Every AI car already has to make the 'trolley car' decision, and you can bet the programmer put something in there. If something is in the road, does it dodge around? Yes or no. That's the gist of it.
Makes a lot of sense. Instead of insurance premiums being paid directly by consumers it'd be paid by car manufacturers who pass it on by slight price increases. It could work very well.
I'm pretty extreme, I'd even accept a look-the-other-way policy for the first 10 years where vehicle manufacturers are given a few extra shields from liability. The potential for this tech to save thousands of young lives is very real and we should be bringing it to fruition with indecent haste.
Individuals aren't generally the target of class-action lawsuits or giant billion dollar judgements to force a collective behavior change (or just outright bankruptcy) of every driver in one judgement. They could do this once big companies become responsible for a large number of "drivers" at once. I am also hopeful technological progress will be allowed to win out, however.
Speed limits (ignoring shenanigans like setting them low to increase revenue) and other traffic laws are fundamentally a measure of what society considers acceptable risk. If people are regularly exceeding the speed limit in certain conditions then the limit is too low for those conditions.
Attempts to get drivers to act like robots and follow the rules, or to replace them with AI that does just that, are mostly a waste of words, because the fundamental hard problem is reaching a compromise on what is acceptable risk in any given situation.
There is also sometimes a disconnect between perceived risk by the driver and the actual risk. How we design roads can have a large effect on this. If people are regularly exceeding the speed limit, you designed your road wrong.
Generally we drivers are very poor at judging acceptable risk, like in iffy weather conditions where we feel pressure to keep that annoying tailgater from overtaking even though they are driving as if conditions were optimal. Why? Time constraints of appointments or just a need to get home quickly. Then there's a myriad of other things: stress, lack of sleep, boredom, age, health, hunger, intoxicants. We don't parse any of these risks at all well given the real conditions, as evidenced by the road toll. Removing human foibles from everyday transport should make driving steadily safer, just as automated systems have for flying, rail, etc.
As for the liability, perhaps some portion of the reduction of insurance costs for damages to health and property can be funneled into a fund for mitigating wherever improvements are most needed across the sector generally including signage, road markings, code audits, sensor checks and so on. A small levy on the autonomous car's insurance should do the trick.
More and more people are giving up on self driving car tech. If it can't cope with American grid systems, it is going to flounder on more complex street layouts.
Still, as a philosophical question it has a fairly clear answer. Generally we treat decision-makers as responsible for the decisions they make. There's a clear line between the vehicles and the manufacturers. The only way I could see it being construed as the driver's responsibility is if people are given control of the risk profile of the vehicle or if they e.g. install a custom unit to do the driving. Both seem insanely reckless, although I suppose that has never stopped such developments from taking place before..
> “ This first category of defect is unlikely to apply very often to autonomous vehicles, since modern manufacturing methods, especially for critical components of autonomous vehicles such as the software and navigation systems, can be manufactured with low error rates.”
That is a very strange thing to say. I would think this assertion is not settled at all, and is actually the heart of the matter.
The first category is a manufacturing defect, where the product departs from its intended design.
Errors of this kind are transfer errors, where (for software) the image loaded onto the vehicle is not bit-for-bit identical to the original code. I would agree with the assessment that these errors are rare, since verification of the process is rather easy: all firmware update procedures already have an established process for verifying the written ROM contents.
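For illustration, the integrity part of that verification is just a digest comparison over the whole image (real update stacks also sign the image; the bytes below are placeholders):

```python
import hashlib

def image_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

released_image = b"firmware build 1.4.2 ..."   # what was signed off at the factory
written_image = b"firmware build 1.4.2 ..."    # what was read back from the ECU after flashing

assert image_digest(written_image) == image_digest(released_image), "transfer error"
```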
What they're actually saying is that most navigation errors will stem from design flaws, not from bit-flip errors, which seems a defensible assumption to me.
> errors will stem from design flaws, not from bit-flip errors, which seems a defensible assumption to me.
Unless the manufacturer decides that automotive grade parts are overpriced and low performance for the price and starts deploying non-automotive/non-industrial grade parts.
Fortunately no reputable automaker would do something like that as they'd surely face dire consequences in the market if they did.
Automotive grade is consumer grade. There's standards for some things but they're standards of convenience so that an engineer from GM can call up Bosch and just reference a standard rather than spec it out in detail. Some safety critical things and the overall performance of some systems have standards that need to be met per law but there is no special "automotive grade" for pretty much anything that goes into a car.
In my experience working with critical hardware and software with sensors, I believe this is pretty much settled, because those components are fixed and well known.
We know with precision how to maintain those, the rate of failure, and audit or overhaul times. In fact, right now sensors could "call home" when there is a problem.
The heart of the matter is what is not known, like a wild boar getting onto the road, or a cyclist appearing from the side, or a wet road with ice or snow on it.
Usually accidents involve all of the above at once: it is dark (you don't see cars, only specular reflections of lights), it is wet, there is ice and snow, and a cyclist appears out of nowhere.
Those things are extremely unknown and hard to model, and AI does not work there.
>Those things are extremely unknown and hard to model, and AI does not work there.
That might be because they're problems for _anything_. You can't really apply a solution to a problem that hasn't been solved in the first place. No human is doing any better, so it'd be unfair to expect autonomous vehicles to perform miracles.
For some reason it's fine that Bob just killed a family of four, but unfathomable for a thinking rock to make the slightest error.
Bob was having yet another bad day, and everyone knows Bob drives like he owns the road, and is really just a ticking time bomb. Bob doesn't heed any of the caution his colleagues have, nor care.
That clever rock had many of the brightest geniuses on the planet working around the clock for decades steadily making it less prone to calamity. We entrust our lives and family on that proverbial magic carpet ride made by unseen digital gods. It's almost a religious level of faith and expectation. They thought rock-et science was hard...
This aspect of autonomous vehicles is very boring compared to the ethical dilemma.
Should self-driving cars perhaps have a morality setting, with a range from "Screw everyone else" to "Save as many lives as possible (prioritize younger), don't worry about me at all"? And should this in turn be connected to your insurance policy (the more selfish the setting, the more you pay)?
The trolley problem is much ado about nothing. Or nerd-sniping for ethicists. The vehicle could be set to maximize harm in case of an accident and still perform better than the average human driver by creating fewer accidents in the first place.
So if you want to optimize for human lives then don't worry about the trolley problem, instead consider how you can get sufficiently good autopilots into as many hands as possible as soon as possible.
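A back-of-the-envelope version of that argument, with invented numbers:

```python
human_rate, human_harm = 1.0, 1.0   # normalised accidents per year, harm per accident
auto_rate, auto_harm = 0.1, 1.5     # 10x fewer accidents, 50% worse outcome each time

print(human_rate * human_harm)   # expected harm with human drivers: 1.0
print(auto_rate * auto_harm)     # expected harm with the autopilot: 0.15
```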
Nearly every fatal collision could have been avoided by driving more slowly. Human drivers aren't blamed unless their speeds are notably excessive, but a computer system that basically is just turning a knob could always have just taken a bit more time and saved that life. If any "screw everybody" settings arise, the lawsuits will eliminate them soon enough.
What if that setting were tied to your risk threshold? Eg, you could set the car to drive very conservatively (respect speed limits, slow in dangerous conditions or areas) and it would prioritize your safety over all else, or to be a bit more reckless, but prioritize your safety less. That way you're making an explicit choice that you accept the personal risk in order to get to your destination faster, but you can't offload that risk on other people.
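One way to picture such a setting (profile names and numbers are invented; the key point is that the margins protecting other people are not user-adjustable):

```python
PROFILES = {
    "conservative": {"speed_margin": 0.90, "follow_gap_s": 3.0},
    "default": {"speed_margin": 1.00, "follow_gap_s": 2.0},
    "hurried": {"speed_margin": 1.00, "follow_gap_s": 1.5},
}
MIN_PEDESTRIAN_BUFFER_M = 1.5   # not configurable: risk to third parties can't be traded away

def plan_limits(profile, posted_limit_kmh):
    p = PROFILES[profile]
    return {
        "target_speed_kmh": posted_limit_kmh * p["speed_margin"],
        "follow_gap_s": p["follow_gap_s"],
        "pedestrian_buffer_m": MIN_PEDESTRIAN_BUFFER_M,
    }

print(plan_limits("conservative", 100))
```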
It comes up every time the car in front of you brakes hard and you have to decide whether to rear-end it or brake just as hard and risk the truck behind rear-ending you.