If the rate of accidents and incidents drops to a tenth of what it is now, the price of insurance will drop roughly tenfold.
Claims go to the manufacturers? Then the manufacturers will insure themselves, and you will pay that price, which will be smaller than current insurance.
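As a back-of-the-envelope sketch of the claim above (all numbers invented for illustration): an actuarially fair premium is roughly expected claims plus overhead, so it scales directly with the accident rate.

```python
# Toy premium model with hypothetical numbers: premium scales with accident rate.
def premium(accident_rate_per_year, avg_claim_cost, overhead=0.25):
    """Expected yearly claims cost, marked up by the insurer's overhead."""
    return accident_rate_per_year * avg_claim_cost * (1 + overhead)

human = premium(0.05, 8000)        # 1 claim per 20 driver-years, 8k average claim
autonomous = premium(0.005, 8000)  # 10x fewer accidents, same claim size
print(human, autonomous, human / autonomous)  # 500.0 50.0 10.0
```

A 10x drop in accidents translates one-for-one into the premium, whichever party ends up paying it.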
We know the rate of accidents could drop enormously because we have seen what has happened with airplanes.
I mathematically modeled accident causes for a big insurer in Spain. They have a database of accidents, and most accidents are preventable.
A 10x drop in accidents is very conservative and easy to reach in a short period of time (10 years).
Something as simple as knowing the road conditions helps: "this spot after the bridge is dangerous in winter, because shade and water mean ice forms", or "construction trucks leave sand on this curve, which is dangerous for motorbikes", or "children from the nearby school cross the road here instead of taking the footbridge".
When a car can download all this information and make decisions based on it, I expect the rate of accidents to drop dramatically.
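To make the idea concrete, here is a minimal sketch of what such downloaded hazard annotations could look like. Everything here is hypothetical: the data format, the hazard entries, and the function names are invented for illustration.

```python
# Hypothetical crowd-sourced hazard annotations for a route, keyed by position.
HAZARDS = [
    {"km": 12.4, "note": "ice forms in shade after bridge", "months": {12, 1, 2}, "max_kmh": 50},
    {"km": 30.1, "note": "sand from construction trucks on curve", "months": set(range(1, 13)), "max_kmh": 60},
    {"km": 45.7, "note": "school children cross here", "months": set(range(1, 13)), "max_kmh": 30},
]

def speed_limit_at(km, month, default_kmh=90, radius_km=0.5):
    """Lowest advisory speed from any hazard active near this position."""
    limits = [h["max_kmh"] for h in HAZARDS
              if abs(h["km"] - km) <= radius_km and month in h["months"]]
    return min(limits, default=default_kmh)

print(speed_limit_at(12.2, month=1))  # near the icy bridge in January -> 50
print(speed_limit_at(12.2, month=7))  # same spot in July -> 90
```

The point is that this knowledge is cheap to encode and share across a fleet, while a human driver has to learn each spot the hard way.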
The article mentions the General Aviation Revitalization Act. The small aircraft industry was almost wiped out until these protections were afforded. I don’t see how any company could risk unleashing a fleet of vehicles with the threat of uncapped liability looming over them.
When a human is accused of an egregious driving error with serious consequences, it's a discrete problem. When software makes an egregious error, the problem probably isn't discrete, and simply removing that particular instance of software from the road and paying off the directly affected parties isn't likely to be a satisfactory resolution.
To take another example from aviation, two 737 MAX accidents attributed to its software saw the entire fleet grounded and very serious questions asked about its future, and by extension the company's. That entails much more expensive consequences and complex legal cases than car accidents for which humans are held responsible, even though car accidents are much more frequent.
1) Determine damages as we currently do, and the manufacturer, which is likely also the insurer, pays out.
2) Manufacturer has a fixed amount of time to show that they've re-trained the system on the problem scenario and scenarios close to it such that it no longer occurs given similar conditions.
3) Push the update to the fleet; the same accident should never happen again.
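The three-step loop above could be sketched as a release gate: the retrained system must clear the original accident scenario plus perturbed variants of it before the fleet-wide push is allowed. Everything below (the toy model, the scenario format, the function names) is invented to illustrate the workflow, not anyone's actual process.

```python
# Gate a fleet update on the retrained model passing the accident scenario
# and nearby perturbed variants of it.
def clears_regression_suite(model, accident_scenario, perturb, n_variants=100):
    scenarios = [accident_scenario] + [perturb(accident_scenario, i) for i in range(n_variants)]
    return all(model(s) == "safe" for s in scenarios)

# Toy stand-ins: a scenario is just a speed; the "retrained" model is safe below 50.
retrained = lambda s: "safe" if s["speed"] <= 50 else "unsafe"
perturb = lambda s, i: {"speed": s["speed"] + (i % 10) - 5}  # jitter speed by +/-5

print(clears_regression_suite(retrained, {"speed": 40}, perturb))  # True -> push update
```

The interesting design question is step 2's "scenarios close to it": how wide the perturbation net is cast determines how much confidence the regulator can place in the fix.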
If we could do this with human drivers, we would be in great shape.
In real world regulatory environments, Boeing's patch to its MCAS system - which was tractable and trivially simple compared with teaching a software program how to react safely to certain types of human behaviour whilst leaving behaviour in other situations unchanged - is undergoing months of testing and the fleet has been grounded for over a year.
The FAA always takes its sweet time making decisions, mostly for good reasons anybody regulating mass market autonomous road vehicles ought to follow. But what's spooked them here isn't software between a pilot and the aircraft's control surfaces altering the piloting experience (that's been around for a while), but the software making a decision and the pilots being unable to effectively override it...
This is possible in aviation but is typically only done in fighter jets, where you have vectored thrust, massive control surfaces, and an ejection seat if things go wrong.
If your goal is safety, software-based stability is a very poor design. I think there's a decent chance the MAX won't be re-certified without physical design changes.
The good news is that when an issue is found, all the cars can be updated. The bad news is conflicting decisions about the right behavior. The trolley problem springs to mind: can an AI car leave the road to avoid a collision? What if somebody is on the shoulder and could be hit? Which accident should the AI choose?
Different municipalities/states/judges will have different ideas. It'll be a madhouse until some ground rules are in place, e.g. "An AI car shall not leave the road". Call it the 'Laws of Automotive Robotics'.
These are split-second decisions, and people wheel out philosophical problems?
In the vehicle, is it still the fault of the driver, or the software company? Or the insurer?
Is someone driving into you? There's not much you can physically do to avoid it, no matter who or what is driving.
Semi-autonomous cars can easily be blamed on the driver, because the driver is still in charge of the car: assisted, not commanded.
Is the car taking unlawful actions on its own? That should probably be fixed on the maker's side. Why should any company be exempt when their product is not up to spec? The price increase to make it compliant is a necessity, not a nice-to-have.
And as for the trolley problem: you often can't settle philosophical questions with pure logic, because of their fractal nature, but you can make it a legal question. Does the car have the right to kill? If it doesn't, it can't and won't choose. It will be an accident just like any other.
Never mind what a person could do; the AI car will have a different expectation. And remember, there's the chance of suing Google when your mailbox is run over. Not just some schmuck.
Our roads aren't built so you can 'not exceed...stopping distance'. Sure, in good conditions, with a dry road, clear shoulders, and a wide ditch, you can come up with a figure. But that's not where accidents happen. They happen in town, at intersections, at driveways.
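To show how slippery that "figure" is, here is the standard reaction-plus-braking estimate (the textbook formula v*t + v^2/(2*mu*g); the parameter values are illustrative assumptions, not measured data). The same speed yields wildly different stopping distances depending on surface and driver state.

```python
# Stopping distance = reaction distance + braking distance.
def stopping_distance_m(speed_kmh, reaction_s=1.5, friction=0.7, g=9.81):
    """Assumed values: 1.5 s reaction time, mu=0.7 for dry asphalt."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * reaction_s + v ** 2 / (2 * friction * g)

print(round(stopping_distance_m(50), 1))                # dry asphalt: ~35 m
print(round(stopping_distance_m(50, friction=0.1), 1))  # ice: several times longer
```

At 50 km/h the dry-road figure is about 35 m, but on ice it balloons past 100 m, which is exactly why a single posted "safe distance" can't cover real conditions.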
Every AI car already has to make the 'trolley car' decision, and you can bet the programmer put something in there. If something is in the road, does it dodge around it? Yes or no. That's the gist of it.
I'm pretty extreme, I'd even accept a look-the-other-way policy for the first 10 years where vehicle manufacturers are given a few extra shields from liability. The potential for this tech to save thousands of young lives is very real and we should be bringing it to fruition with indecent haste.
Might it not reduce accidents, and reduce the severity of accidents, hence reducing expensive human injuries?
Attempts to get drivers to act like robots and follow the rules, or to replace them with AI that does just that, are mostly a waste of words, because the fundamental hard problem is reaching a compromise on what is acceptable risk in any given situation.
As for liability, perhaps some portion of the reduction in insurance costs for damage to health and property could be funneled into a fund for mitigation wherever improvements are most needed across the sector generally: signage, road markings, code audits, sensor checks, and so on. A small levy on the autonomous car's insurance should do the trick.
- the speed limit is 130 km/h
- it only shortens the trip by 1-2 minutes
- it's bad for the environment
I suspect that many of my decisions aren't remotely rational.
See, Steve Wozniak has given up on autonomous vehicles: https://www.extremetech.com/extreme/300927-steve-wozniak-no-...
See also, the magic roundabout in Swindon, UK: https://en.m.wikipedia.org/wiki/Magic_Roundabout_(Swindon) - I'd love to see a self driving car deal with this.
Still, as a philosophical question it has a fairly clear answer. Generally we treat decision-makers as responsible for the decisions they make. There's a clear line between the vehicles and the manufacturers. The only way I could see it being construed as the driver's responsibility is if people are given control of the risk profile of the vehicle, or if they e.g. install a custom unit to do the driving. Both seem insanely reckless, although I suppose that has never stopped such developments from taking place before.
What do you mean when you say this? I'm seeing more and more of them on the roads, not less.
That is a very strange thing to say. I would think this assertion is not settled at all, and is actually the heart of the matter.
Errors of this kind are transfer errors, where (for software) the image loaded onto the vehicle is not bit-for-bit identical to the original code. I would agree with the assessment that these errors are rare, since verification of the process is rather easy: all firmware update procedures already have an established process for verifying the written ROM contents.
What they're actually saying is that most navigation errors will stem from design flaws, not from bit-flip errors, which seems a defensible assumption to me.
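The verification step described above is straightforward to sketch: compare a cryptographic digest of the written image against the original before accepting the update, so any transfer error is caught. This is a generic illustration of the technique, not any particular vendor's update procedure.

```python
# Bit-for-bit verification of a written firmware image via SHA-256.
import hashlib

def image_matches(original: bytes, written: bytes) -> bool:
    """Any transfer error, even a single flipped byte, changes the digest."""
    return hashlib.sha256(original).digest() == hashlib.sha256(written).digest()

firmware = b"\x00\x01\x02" * 1000
corrupted = firmware[:1500] + b"\xff" + firmware[1501:]  # one corrupted byte

print(image_matches(firmware, firmware))   # True
print(image_matches(firmware, corrupted))  # False
```

This is why transfer errors are the easy case: the check is cheap, total, and already standard in firmware update flows, whereas a design flaw passes it with a perfect score.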
Unless the manufacturer decides that automotive-grade parts are overpriced and underperforming for the money, and starts deploying non-automotive/non-industrial-grade parts.
Fortunately no reputable automaker would do something like that as they'd surely face dire consequences in the market if they did.
We know with precision how to maintain those: the rate of failure, and the audit or overhaul intervals. In fact, right now sensors could "call home" when there is a problem.
The heart of the matter is what is not known, like the wild boar getting onto the road, or a cyclist appearing from the side, or the road being wet, or ice or snow on the road.
Usually what happens in accidents is all of the above at once: it is dark (you don't see cars, only specular reflections of lights), the road is wet, there is ice and snow, and a cyclist appears out of nowhere.
Those things are deeply uncertain and hard to model, and AI does not work well there.
That might be because they're problems for _any_ driver. You can't really apply a solution to a problem that hasn't been solved in the first place. No human does any better, so it would be unfair to expect autonomous vehicles to perform miracles.
For some reason it's fine that Bob just killed a family of four, but unfathomable for a thinking rock to make the slightest error.
That clever rock had many of the brightest geniuses on the planet working around the clock for decades, steadily making it less prone to calamity. We entrust our lives and families to that proverbial magic carpet ride made by unseen digital gods. It's almost a religious level of faith and expectation. They thought rock-et science was hard...
Should self-driving cars perhaps have a morality setting, with a range from "Screw everyone else" to "Save as many lives as possible (prioritize younger), don't worry about me at all"? And should this in turn be connected to your insurance policy (the more selfish the setting, the more you pay)?
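If you take the proposal at face value, the insurance link is just a pricing curve over the setting. A toy illustration (the scale, surcharge, and numbers are all invented):

```python
# Map a hypothetical "selfishness" setting in [0, 1] to a premium multiplier:
# 0 = fully altruistic (cheapest), 1 = "screw everyone else" (priciest).
def premium_multiplier(selfishness, base=1.0, max_surcharge=0.8):
    if not 0.0 <= selfishness <= 1.0:
        raise ValueError("setting must be between 0 and 1")
    return base + max_surcharge * selfishness

print(premium_multiplier(0.0))  # 1.0
print(premium_multiplier(1.0))  # 1.8
```

Whether regulators would ever allow a user-facing dial like this is a separate question from whether insurers could price it.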
So if you want to optimize for human lives then don't worry about the trolley problem, instead consider how you can get sufficiently good autopilots into as many hands as possible as soon as possible.
Any action other than avoiding collateral damage altogether might be punishable.
As a side note: nobody is going to buy a car that prefers to kill its owner.
There also aren't many situations where your safety isn't tied to that of the others around you.