
It’s not really the rate of accidents so much as who will be getting sued. I think it will take legislation to protect autonomous vehicle manufacturers against certain lawsuits; otherwise the cost of liability will be so high that the vehicles will be rendered uneconomical as the costs are passed on to consumers, or the companies will be driven out of business altogether.

The article mentions the General Aviation Revitalization Act. The small aircraft industry was almost wiped out until these protections were afforded. I don’t see how any company could risk unleashing a fleet of vehicles with the threat of uncapped liability looming over them.




Yep. The key point here isn't whether autonomous cars will handle the general AI problems around accident risk better than humans [although the OP's extraordinary claim that this will be 'easy to get in a short period of time' requires extraordinary evidence], but who the liability falls on when they don't, and what happens when that is tested in court.

When a human is accused of an egregious driving error with serious consequences, it's a discrete problem. When software makes an egregious error, it probably isn't, and simply removing that particular instance of software from the road and paying off directly affected parties isn't likely to be a satisfactory resolution.

To take another example from aviation, two 737 MAX accidents attributed to its software see the entire fleet grounded and very serious questions being asked about its future, and by extension the company's. That entails much more expensive consequences and complex legal cases than car accidents for which humans are held responsible, even though car accidents are much more frequent.


The latest cars are constantly collecting data, so automated driving errors likely could be treated as discrete problems. The process would likely be (roughly sketched in code below the list):

1) Determine damages like we currently do, and the manufacturer, which is likely to be the same as the insurer, pays out.

2) Manufacturer has a fixed amount of time to show that they've re-trained the system on the problem scenario and scenarios close to it such that it no longer occurs given similar conditions.

3) Push the update to the fleet; the same accident should never happen again.

If we could do this with human drivers, we would be in great shape.
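
Something like the following toy sketch, where everything (Incident, Model, perturb, retrain) is hypothetical and stands in for a far messier real process:

    from dataclasses import dataclass, field

    # Hypothetical sketch of the three steps above; not any real system's API.

    @dataclass
    class Incident:
        scenario: str          # stand-in for the logged sensor data
        damages: float         # assessed the way we assess them today

    @dataclass
    class Model:
        handled: set = field(default_factory=set)
        def handles(self, scenario: str) -> bool:
            return scenario in self.handled

    def perturb(scenario: str) -> list:
        # Stand-in for generating "scenarios close to it".
        return [f"{scenario}/variant-{i}" for i in range(3)]

    def retrain(model: Model, scenarios: list) -> Model:
        # Stand-in for retraining within the fixed deadline.
        model.handled.update(scenarios)
        return model

    def handle_incident(incident: Incident, model: Model, fleet: list) -> Model:
        payout = incident.damages                     # 1) manufacturer-as-insurer pays out
        cases = [incident.scenario] + perturb(incident.scenario)
        model = retrain(model, cases)                 # 2) retrain on it and its neighbours...
        assert all(model.handles(c) for c in cases)   #    ...and show it no longer occurs
        for vehicle in fleet:                         # 3) push the update to the fleet
            vehicle["model"] = model
        return model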


Even if monotonic improvement toward near bug-free complex software were actually feasible, it might be difficult to persuade legal systems to treat software errors as discrete, fixable events.

In real world regulatory environments, Boeing's patch to its MCAS system - which was tractable and trivially simple compared with teaching a software program how to react safely to certain types of human behaviour whilst leaving behaviour in other situations unchanged - is undergoing months of testing and the fleet has been grounded for over a year.


I disagree; the MCAS system was a poor attempt at fixing a hardware design flaw with software. To save cost, Boeing slapped larger engines on an airframe that was not designed for them. This made the aircraft more prone to stalls, requiring the MCAS system. Boeing convinced the FAA not only that MCAS was sufficient for preventing stalls but also that pilots didn't need retraining. It's likely that MCAS is not sufficient, and that's actually what is taking so long.


Obviously MCAS was not a success, but the passengers didn't die because of the hardware's propensity to stall; they died because the software mishandled an edge case it hadn't encountered in testing, fatally altering the aircraft's pitch. Needless to say, the risks of that happening when software takes 'everything going on the roads' as input, rather than just certain angle-of-attack parameters, are higher, not substantially lower as they would have to be for self-driving car manufacturers to be unconcerned about regulatory intervention or consequences.

The FAA always takes its sweet time making decisions, mostly for good reasons that anybody regulating mass-market autonomous road vehicles ought to follow. But what's spooked them here isn't software sitting between a pilot and the aircraft's control surfaces and altering the piloting experience (that's been around for a while), but the software making a decision and the pilots being unable to effectively override it...


@notahacker was not talking about the original MCAS, but about the patch to it -- the modifications now under development to fix the problems.


Right, but the patch is likely taking a long time because they're trying to fix an inherently unstable design with software.

This is possible in aviation but is typically only done on fighter jets, where you have vectored thrust, massive control surfaces, and an ejection seat if things go wrong.

If your goal is safety, software-based stability is a very poor design. I think there's a decent chance the MAX won't be re-certified without physical design changes.
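
To see why, here's a toy illustration (nothing like the real aerodynamics; every number is made up): an inherently unstable system stays near equilibrium only while the software keeps correcting it, and diverges the moment it stops:

    # Euler simulation of x' = a*x + u: unstable for a > 0, but held near
    # zero by the software correction u = -k*x. All values are invented.
    a, k, dt = 1.0, 3.0, 0.01   # unstable pole, feedback gain, timestep
    x = 0.1                     # initial disturbance
    for step in range(3000):
        u = -k * x if step < 500 else 0.0   # controller "fails" at step 500
        x += (a * x + u) * dt
        if step == 499:
            print(f"while controlled: x = {x:.1e}")   # roughly 4e-06
    print(f"after failure:    x = {x:.1e}")           # roughly 3e+05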



