> Until you remember that to obtain statistically significant evidence that the latest version number of Tesla (or any other firm's) software is safer than the average driver entails hundreds of millions of miles of driving.

It is possible for a newer version to contain a new bug that would increase the accident rate significantly, but given the existence of realtime collision data, that seems like the sort of thing that would be caught and corrected rather quickly, before it would dramatically affect the long-term average. So you have a probability of being the unlucky first person to encounter a new bug, but unless the probability of that is significantly higher than the overall probability of being in a collision for some other reason, that's just background noise.
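
To put rough numbers on the "background noise" point (a sketch with assumed, illustrative figures; both the baseline and the hypothetical bug rate are assumptions, not Tesla data):

    # Back-of-envelope comparison using assumed figures for illustration only.
    # US police-reported crashes run very roughly 1 per ~500,000 miles driven.
    baseline_crash_rate = 1 / 500_000     # per mile (assumed baseline)
    new_bug_crash_rate  = 1 / 10_000_000  # hypothetical extra risk from a fresh regression

    relative_increase = new_bug_crash_rate / baseline_crash_rate
    print(f"{relative_increase:.1%}")     # 5.0% -> a small bump on top of the existing risk

Unless the regression adds risk on the same order as the baseline itself, it disappears into that baseline.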

Moreover, it isn't an unreasonable expectation that newer versions should be safer than older versions in general, so using the risk data for the older versions would typically be overestimating the risk.

> And for that matter that the "average driver" accident rates are skewed upwards by the number of incidents involving people who you'd never, ever volunteer to be driven by [in their state of intoxication].

That doesn't help you much when the intoxicated driver is driving a vehicle that hits you rather than the vehicle you're a passenger in. You presumably would prefer that vehicle to be self-driving rather than operated by the aforementioned drunk driver.

> In the mean time, the human heuristic that a car which tries to kill you by accelerating at lane dividers isn't safer than your own average level of driving skill in many circumstances is probably better than trusting exponential curves and the Elon Musk reality distortion field.

The somewhat ironic thing about stories like this is that whenever they discover something like this, it automatically becomes the focus of engineering time, both because a specific problematic behavior has now been identified and because not fixing it is bad PR. But then your heuristic is stale as soon as they fix it, which is likely to happen long before any kind of true driverless operation is actually available.




> It is possible for a newer version to contain a new bug that would increase the accident rate significantly, but given the existence of realtime collision data, that seems like the sort of thing that would be caught and corrected rather quickly, before it would dramatically affect the long-term average.

This assumes they correctly diagnose the fault and know how to fix it, and are able to fix it without any adverse side effects on superficially similar situations requiring a different course of action. This, and the assumption of a monotonic decrease in bugs and other undesired behaviour, seem like assumptions which are inconsistent with real-world development of complex software aimed at handling a near-infinite variety of possible scenarios. Any driver is constantly going to encounter situations which are subtly different from those the car has been trained to handle, so the probability of being the first to encounter a new bug doesn't strike me as particularly low. The gross human accident rate per million miles driven is very low (and a driver who is experienced, responsible and not intoxicated has good reason to believe their own probability of causing an accident is substantially lower).

> That doesn't help you much when the intoxicated driver is driving a vehicle that hits you rather than the vehicle you're a passenger in. You presumably would prefer that vehicle to be self-driving rather than operated by the aforementioned drunk driver.

I don't get to choose what vehicles other people use. I do get to choose whether to pay more attention to a car's actual erratic behaviour than a statistical claim that various previous iterations of the software have had fewer accidents than a set of humans whose accidents are heavily skewed towards people with less regard for road safety than me.

> The somewhat ironic thing about stories like this is that whenever they discover something like this, it automatically becomes the focus of engineering time, both because a specific problematic behavior has now been identified and because not fixing it is bad PR.

This argument works in theory, but videos of Teslas accelerating at lane dividers are neither a new phenomenon nor one which is reported to have been fixed. I'm sure plenty of engineer time has been devoted to studying them (despite Tesla's actual PR strategy being to deny the problem and deflect blame onto the driver rather than announce fixes) but the fixes aren't trivial or easily generalised and approaches to fixing them are bound to produce side effects of their own.


> This assumes they correctly diagnose the fault and know how to fix it, and are able to fix it without any adverse side effects on superficially similar situations requiring a different course of action.

We're talking about a regression that makes things worse than they were before. The worst case is that they have to put it back the way it was.

> This, and the assumption of a monotonic decrease in bugs and other undesired behaviour, seem like assumptions which are inconsistent with real-world development of complex software aimed at handling a near-infinite variety of possible scenarios.

I think this is misunderstanding what happens with large software systems. What happens is that people have a certain level of tolerance for misbehavior, so the system gets optimized to keep the misbehavior at that threshold. Then every time a component improves to reduce its misbehavior, it allows them to trade off somewhere else, usually by increasing the complexity of something (e.g. adding a new feature), because they'd rather have the new feature which introduces new misbehavior than the net reduction in misbehavior.

That doesn't really play out the same way for safety-critical systems, because people highly value safety and it's not especially difficult to measure it statistically, which puts pressure on the companies to compete to have the best safety record and therefore not trade the reductions in misbehavior for additional complexity as much.

> Any driver is constantly going to encounter situations which are subtly different from those the car has been trained to handle, so the probability of being the first to encounter a new bug doesn't strike me as particularly low.

It's not just a matter of encountering a new situation with a subtle difference. The difference has to cause the system to misbehave, and the misbehavior has to be dangerous, and the danger has to be actually present that time.

And if it was really that common then why aren't their safety records worse than they actually are?

> The gross human accident rate per million miles driven is very low (and a driver who is experienced, responsible and not intoxicated has good reason to believe their own probability of causing an accident is substantially lower)

The rate for autonomous vehicles is also very low, and the average person is still average.

> I do get to choose whether to pay more attention to a car's actual erratic behaviour than a statistical claim that various previous iterations of the software have had fewer accidents than a set of humans whose accidents are heavily skewed towards people with less regard for road safety than me.

So who is forcing you to buy a car with this, or to use that feature even if you do? Not everything is a dichotomy between mandatory and prohibited. You can drive yourself while the drunk lets the software drive, both at the same time.

Though it wouldn't be all that surprising if computers one day beat even the best drivers, the same way they can beat even the best chess players.

> This argument works in theory, but videos of Teslas accelerating at lane dividers are neither a new phenomenon nor one which is reported to have been fixed.

You're assuming they're the same problem rather than merely the same result.

And in this case it's purposely adversarial behavior. There are tons of things you can do to cause an accident if you're trying to do it on purpose, regardless of who or what is driving. The fact that software can be programmed to handle these types of situations is exactly its advantage. If you push a sofa off an overpass onto a highway full of fast-moving traffic, there may be a way for the humans to react to prevent that from turning into a multi-car pile-up, but they probably won't. And they still won't even if you do it once a year for a lifetime, because every time it's different humans without much opportunity to learn from the mistakes of those who came before.


> I think this is misunderstanding what happens with large software systems. What happens is that people have a certain level of tolerance for misbehavior, so the system gets optimized to keep the misbehavior at that threshold. Then every time a component improves to reduce its misbehavior, it allows them to trade off somewhere else, usually by increasing the complexity of something (e.g. adding a new feature), because they'd rather have the new feature which introduces new misbehavior than the net reduction in misbehavior.

I think this is misunderstanding the difference between a safety-critical system which is designed to be as simple as possible, such as an airliner's system for maintaining altitude or following an ILS-signalled landing approach, and a safety-critical system which cannot be simple and is difficult to even design to be tractable, such as an AI system designed to handle a vehicle in a variety of normal road conditions without human fallback.

> That doesn't really play out the same way for safety-critical systems, because people highly value safety and it's not especially difficult to measure it statistically

The benchmark maximum acceptable fatality rate for all kinds of traffic-related fatality is a little over 1 per hundred million miles, based on that of human drivers. Pretty damned difficult to measure the safety performance of a vehicle type statistically when you're dealing with those orders of magnitude...
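
As a rough sketch of why (assuming the oft-quoted US figure of about 1.1 fatalities per 100 million vehicle miles, and using the "rule of three", which bounds an unobserved rate at roughly 3/n after zero events in n trials):

    # How many fatality-free miles before the 95% upper confidence bound on an
    # autonomous fleet's fatality rate drops below the assumed human baseline?
    human_rate = 1.1e-8            # fatalities per mile (~1.1 per 100M miles, assumed)

    # Rule of three: zero events observed over n miles => 95% upper bound ~ 3 / n
    miles_needed = 3 / human_rate
    print(f"{miles_needed:,.0f}")  # ~272,727,273 miles

    # Any fatality during the trial widens the interval and pushes this further out.

Which is the "hundreds of millions of miles" from the top of the thread, and that's the best case of a completely clean record.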

> It's not just a matter of encountering a new situation with a subtle difference. The difference has to cause the system to misbehave, and the misbehavior has to be dangerous, and the danger has to be actually present that time.

Well yes, the system will handle a significant proportion of unforeseen scenarios safely, or at least in a manner not unsafe enough to be fatal (much like most bad human driving goes unpunished). Trouble is, there are a lot of unforeseen scenarios over a few tens of millions of miles, and a large proportion of these involve some danger to occupants or other road users in the event of incorrect [in]action. It's got to be capable of handling all the unforeseen scenarios encountered in tens of millions of road miles without fatalities to be safer than the average driver.

> And if it was really that common then why aren't their safety records worse than they actually are?

They really haven't driven enough to produce adequate statistics to judge that, and invariably drive with a human fallback (occasionally a remote one). Still, the available data would suggest that with safety drivers and conservative disengagement protocols, purportedly fully autonomous vehicles are roughly an order of magnitude behind human drivers for deaths per mile. Tesla's fatality rate is also higher than that of other cars in the same price bracket (although there are obviously factors other than Autopilot at play here).

> The rate for autonomous vehicles is also very low, and the average person is still average.

You say this, but our best estimate of the rate for autonomous vehicles isn't low relative to human drivers, despite the safety driver rectifying most software errors. And if a disproportionate number of rare incidents are caused by "below average" drivers, then basic statistics implies that an autonomous driving system which actually achieved the same gross accident rate as human drivers would still be considerably less reliable at the wheel than the median driver.
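
To make the skew argument concrete, a toy model (the 10% share and the 5x multiplier are invented purely for illustration, not real data):

    # Suppose 10% of drivers (impaired/reckless) crash at 5x the rate of the rest.
    bad_share, bad_multiplier = 0.10, 5.0
    typical_rate = 1.0          # arbitrary units, say crashes per million miles

    mean_rate   = (1 - bad_share) * typical_rate + bad_share * bad_multiplier * typical_rate
    median_rate = typical_rate  # the median driver sits in the careful 90%

    print(mean_rate, median_rate)  # 1.4 1.0

    # A system that merely matched the 1.4 "average driver" benchmark would still
    # be ~40% worse than the median driver in this toy model.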

> You're assuming they're the same problem rather than merely the same result

From the point of view of a driver, my car killing me by accelerating into lane dividers is the same problem. The fact that there are multiple discrete ways for my car to accelerate into lane dividers, and that fixing one does not affect the others (and may even increase the chances of them occurring), supports my argument, not yours. And even this instance, which unlike the others was an adversarial scenario, involved something as common and easily handled by human drivers as a white patch on a road.



