
Not doubting this, but it seems to cut both ways. I can just as easily justify an overreaction by claiming to have averted some worse outcome. It seems to be a general problem of counterfactuals.



This is why it's often good resource management to wait until something breaks before committing resources to fix it. Especially true in software systems.

One might think that constant firefighting is a waste of resources, and that we'd be better off solving problems before they happen. That's true if and only if you know for sure that the breakage really is going to happen AND that it's worth fixing. At least in my experience, it's more often the case that people overestimate the risk of calamity and waste resources fixing things that aren't actually going to break catastrophically. Or they fix things we don't actually need, and we only figure that out when the thing finally breaks and we realize the cost of fixing or replacing it outweighs whatever value it was providing.

The engineer in me hates saying this, but sometimes things don't have to be beautifully designed and perfectly built to handle the worst. Duct tape and superglue often really is good enough.

Of course, this doesn't apply to problems that are truly existential risks. If the potential systemic breakage is so bad that it irreparably collapses the system, then active preparedness can certainly be justified.


This is the no-brainer choice for anything that can be immediately replaced/ordered. Most of us aren’t keeping a stash of computer monitors in case of failure.

On firefighting… huge swaths of burned land can’t be reordered on Amazon and delivered the next day. People quip “just replant the trees”, but of course that doesn’t rebuild an ecosystem, we might not even replant the right trees, and the things that lived there are now dead.

On personal scales, waiting for your car to break before fixing it isn’t a good strategy either, nor would you wait for your gas pipes to leak, or see whether lightning actually strikes your home before preparing for it.

Basically I feel “don't fix until it breaks” is a good strategy for day-to-day, small-scale decisions, but problematic for most stuff beyond that.


Sources of resilience operate on three different timescales. The first is foresight: the ability to use feedforward to predict potential problems and avoid them. The second is coping: the ability to stop a bad thing from getting worse. The third is recovery: the ability to return to a normal state once disaster has struck.

You need all three.


>This is the no-brainer choice for anything that can be immediately replaced/ordered.

Well, at least until COVID-19 hit; then you realized that 'immediate replace/reorder' doesn't actually exist any longer, and your forklift parts are going to take two months to show up on a delayed boat, and nobody in the US has any replacements.


This is why I think most of the 'absolutes' that programmers, software architects, managers etc. talk about are not so.

For example, you must never declare 'magic numbers' in code. Or you must always obey S.O.L.I.D., or do 100% TDD. There will be people who believe in these dogmatically, to the point that they won't employ anyone who says different (it becomes an interview question).
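(To make the 'magic numbers' rule concrete, here is a tiny hypothetical Python sketch of my own, not from any real codebase: the bare literal the rule objects to, and the named constant it asks for.)

    # A "magic number" in the dogmatic sense: an unexplained literal.
    def price_with_tax(price):
        return price * 1.0825  # what does 1.0825 mean?

    # The rule asks for a named constant instead, so the intent is readable.
    SALES_TAX_RATE = 0.0825  # hypothetical local rate, named for clarity

    def price_with_tax_named(price):
        return price * (1 + SALES_TAX_RATE)

    print(price_with_tax(100.0), price_with_tax_named(100.0))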

I am not arguing that these are wrong!

I am arguing that they are not evidence driven (they cannot be; software is too complex, it is not a narrow experiment on a lab mouse). So they must be culture/preference/worldview driven.

When there is no evidence-driven approach to 99% of your decisions on software, it becomes an art. And that is fine.

That said, it might be possible to show evidence that an approach is good for your code base and your team, since that is a more limited scope, rather than "in general".

What isn't fine is the number of overly confident global assertions we hear from software people about how to build software.


> There will be people who believe in these dogmatically, to the point that they won't employ anyone who says different (it becomes an interview question).

Though when you’re lucky enough to be in a tight labor market it sure is a convenient filter for companies you don’t want to join!


Yes, it is a no-brainer filter too. They have already rejected me, so I don’t have to think about accepting them ;-)

Seriously though, I hate shibboleth-driven interviews.


I think it would depend on the context, which often depends on what the real risk is. Building 5000 nuclear missiles? Overreaction. Overbuilding flood control systems such that the region has not experienced major flooding in 100 years? Justified preparation. The tell for what is justified and what isn't is what you can remove from the system without seeing any ill effect, like a Jenga tower. We've already decommissioned thousands of nukes and the sky didn't fall, so that goes to show all that preparation was useless. Take away flood control systems OTOH and that would probably result in thousands of lives lost before long, given the odds of a bad storm in the area. Likewise with pandemic preparations (mentioned in the intro): what are the odds of a pandemic? High, so the preparations are justified.


>The tell for what is justified and what isn't is what you can remove from the system without seeing any ill effect, like a Jenga tower. We've already decommissioned thousands of nukes and the sky didn't fall, so that goes to show all that preparation was useless.

Bad example. It absolutely wasn't useless at the time to build those thousands of nukes. The whole concept of mutual assured destruction breaks down if the other side has 20x more nukes.


Wasn't it unclear for most of the Cold War how many nukes were even in play? As far as I understand, the U.S. overbuilt nukes early on, assuming the Soviets had a lot more of them at a time when they had hardly any at all. Then the Soviets had to play catch-up with the Americans, as you state, but once again it was all for nothing in the end for those nukes, which were built, sat idle in silos, then were decommissioned without seeing any use at all. If you have nukes for ten targets, that would probably be enough to make a nation nonfunctional. Once the enemy launches their nukes, you launch yours and the world ends in 7 minutes. I can't imagine any nation would rise from the ashes after having its 10 largest population centers annihilated. Personally I think the U.S. MAD plans of the 1950s-60s are absolutely horrific. "Mr. Secretary, I hope you don't have any friends or relations in Albania, because we're just going to have to wipe it out." The Russians were not thinking along the same terms as the Americans in terms of destruction.


Given the disregard for human life that the Russians have typically displayed in war, including the current conflict, what would you propose as a deterrent instead of MAD?

Second question. Let's say you want to negotiate a treaty in the case where neither side trusts the other, and where neither side really has any enforcement power over the other. What mechanism would you propose, to which both parties could agree, and to which both parties could be pretty sure the other side would respect?

MAD is indeed horrific. The authors of the policy thought so as well. Everyone would very much appreciate a better solution. So far, none has been proposed.


I mean, it’s not some unique problem of counterfactuals, right? It doesn’t seem like counterfactuals have some unique epistemological status. You can use reason to propose and criticize counterfactuals the same as any other kind of explanation.



