
> I think this is misunderstanding what happens with large software systems. What happens is that people have a certain level of tolerance for misbehavior, so the system gets optimized to keep the misbehavior at that threshold. Then every time a component improves to reduce its misbehavior, it allows them to trade off somewhere else, usually by increasing the complexity of something (i.e. adding a new feature), because they'd rather have the new feature which introduces new misbehavior than the net reduction in misbehavior.

I think this is misunderstanding the difference between a safety-critical system which is designed to be as simple as possible, such as an aircraft autopilot maintaining altitude or following an ILS landing approach, and a safety-critical system which cannot be simple and is difficult even to design tractably, such as an AI system designed to handle a vehicle in a variety of normal road conditions without a human fallback.

> That doesn't really play out the same way for safety-critical systems, because people highly value safety and it's not especially difficult to measure it statistically

The benchmark maximum acceptable fatality rate, across all kinds of traffic-related fatality, is a little over 1 per hundred million miles, based on that of human drivers. It's pretty damned difficult to measure the safety performance of a vehicle type statistically when you're dealing with those orders of magnitude...
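
A back-of-the-envelope sketch of why those statistics are so hard (the Poisson assumption and the ~1-per-100-million-miles baseline are mine, for illustration only):

    # Minimal sketch: how many fatality-free miles a fleet would need to log
    # just to match the human baseline at ~95% confidence, assuming fatalities
    # are Poisson-distributed at roughly 1 per 100 million miles.
    import math

    human_rate = 1 / 100e6  # fatalities per mile (approximate baseline)

    # "Rule of three": with zero observed fatalities, the 95% upper bound on
    # the fleet's rate only drops below the baseline after ~3/rate miles.
    miles_needed = -math.log(0.05) / human_rate
    print(f"~{miles_needed / 1e6:.0f} million fatality-free miles needed")
    # ~300 million miles -- and far more to show it is meaningfully safer.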

> It's not just a matter of encountering a new situation with a subtle difference. The difference has to cause the system to misbehave, and the misbehavior has to be dangerous, and the danger has to be actually present that time.

Well yes, the system will handle a significant proportion of unforeseen scenarios safely, or at least in a manner not unsafe enough to be fatal (much like most bad human driving goes unpunished). Trouble is, there are a lot of unforeseen scenarios over a few tens of millions of miles, and a large proportion of them involve some danger to occupants or other road users in the event of incorrect [in]action. To be safer than the average driver, it has to be capable of handling all the unforeseen scenarios encountered over tens of millions of road miles without a fatality.
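
To put rough numbers on that (every figure below is an assumption for illustration, not data):

    # Sketch: if an unforeseen, potentially dangerous scenario turns up once
    # every 10,000 miles (assumed), how reliably must each one be handled to
    # stay within the human budget of ~1 fatality per 100 million miles?
    encounter_rate = 1 / 10_000      # unforeseen scenarios per mile (assumed)
    miles = 100e6                    # the human-baseline exposure window
    encounters = encounter_rate * miles
    fatal_budget = 1                 # fatalities allowed over those miles
    print(f"{encounters:.0f} encounters; required per-encounter fatal-failure "
          f"rate <= {fatal_budget / encounters:.0e}")
    # 10,000 encounters -> the system may fatally mishandle at most ~1 in 10,000.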

> And if it was really that common then why aren't their safety records worse than they actually are?

They really haven't driven enough to produce adequate statistics to judge that, and they invariably drive with a human fallback (occasionally a remote one). Still, the available data suggest that, even with safety drivers and conservative disengagement protocols, purportedly fully autonomous vehicles are roughly an order of magnitude behind human drivers for deaths per mile. Tesla's fatality rate is also higher than that of other cars in the same price bracket (although there are obviously factors other than Autopilot at play here).

> The rate for autonomous vehicles is also very low, and the average person is still average.

You say this, but our best estimate of the rate for autonomous vehicles isn't low relative to human drivers, despite the safety driver rectifying most software errors. And if a disproportionate number of rare incidents are caused by "below average" drivers, then basic statistics implies that an autonomous driving system which actually achieved the same gross accident rate as human drivers would still be considerably less reliable at the wheel than the median driver.
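
A toy illustration of that last point (the lognormal spread of per-driver rates is an assumption, chosen only because it is right-skewed):

    # Sketch: when a minority of drivers cause most incidents, the population
    # mean sits above the median, so matching the *average* accident rate
    # still leaves you riskier than the typical (median) driver.
    import numpy as np

    rng = np.random.default_rng(0)
    rates = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)  # arbitrary units

    print(f"fleet-average rate: {rates.mean():.2f}")      # ~1.65
    print(f"median-driver rate: {np.median(rates):.2f}")  # ~1.00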

> You're assuming they're the same problem rather than merely the same result

From the point of view of a driver, my car killing me by accelerating into a lane divider is the same problem. The fact that there are multiple discrete ways for my car to accelerate into lane dividers, and that fixing one does not affect the others (it may even increase the chances of them occurring), supports my argument, not yours. And even this instance, which unlike the others was an adversarial scenario, involved something as common and as easily handled by human drivers as a white patch on the road.



