I think what the poster above is getting at is that the level of safety (measured as the total number of human lives saved) you get by turning a calculation (napkin or otherwise) into a definitive technological choice is probably suboptimal. Typically we would want a safe process to include feedback, or "self-improving cycles". That is, no single point of measurement or calculation is used; instead, we provision for the future safety evolution of the system and monitor safety conditions through regular measurements. So what is considered safe is not any standard in itself, because as conditions and technologies change a standard can become outdated quickly, but rather the process of redefining that standard, so that we have confidence our product is not only safe now but can be kept safe in the long run (since we want to optimize the total number of lives saved, not a weekly or monthly death toll).

A calculation that leads you to underdesign a product's safety and leaves no room for improving it, through mechanical or electronic updates, clearly cannot be considered safe in that regard, regardless of economies of scale or even short-term utilitarian goals (which would be expressed as: people spending money on a Tesla would be safer in the short run than using no automated driving at all while waiting for a better product).

This is an important difference, and there is a societal choice to make here: do we (as a society) want to buy now and potentially have regrets later (when the safety of the product degrades with time, leaving a record of deaths behind it), or do we want to proactively impose a notion of safety on cars that is more than just being good enough at an arbitrary point in time, so that we have more confidence in the long-term viability of that (societal) investment? As you can guess, I gravitate towards the latter, but of course it's a gradient with several choices in between, because pushing that thinking to an extreme would lead to stagnation, which would not do anything to improve safety, as you noted.



