Preparedness Paradox (wikipedia.org)
236 points by thunderbong on Aug 15, 2022 | 124 comments



In the modern world, a good fraction of potential problems can be estimated using good computer models. Examples include building design parameters for withstanding earthquakes and hurricanes, river flow models for projecting potential flooding, aircraft design models that discover problems before planes start crashing, climate models that show the results of burning fossil fuels for decades, etc.

Yes, many people don't trust models because they worry something might have been left out, but there are all kinds of ways of running real-world tests on models to get around those issues. For example, climate models accurately predicted the effects of the Pinatubo eruption in 1991, so there was no reason to distrust their predictions by the late 1990s.


The problem is that lawmakers are rarely statisticians, and are sometimes even the very people who don't trust the models because they don't understand them. Someone needs to write up a Bayesian form of government and then take over a country.


Or they are incentivized to not trust the models...


This is why command economies like China work today, when they did not in the past. With massive amounts of data and advances in theory, a centrally planned economy (and government) can be dramatically more efficient than a democracy.


The problem with command economies is that the incentives are broken, not just that central planning is hard. The established elite uses political power to keep their position.

The USSR kept it running for half a century. Give them a while.


I think that's true for every system of both economics and governance. Capitalism, or at least the early (Adam Smith era) form of it, is merely an easy-to-implement approximation of alignment, where the incentives usually go in the right direction, but even that has to have significant pressure from beyond its natural feedback mechanism to get us things like 8-hour days, weekends, and health-and-safety laws that say it is not OK to employ small children to jump in and out of bone-crushingly heavy industrial machinery while it is in operation in order to clear blockages.


I want to be in charge of the priors.


On the other hand, models said that Japan could never experience an earthquake as bad as the 2011 earthquake right up until 2011 proved them wrong (https://physicstoday.scitation.org/doi/10.1063/PT.3.1361)


The other side of this is "what's the alternative?"

You can't prepare for an arbitrarily strong earthquake, so the only lesson we can take is that our models needed correcting, and we'll work to that next time.


No one has been able to make a climate model that is able to predict present conditions from historical data.


Y2K. People act like it was a nothing because we fixed everything, but it wouldn't have been a nothing if we hadn't fixed everything!


We’ll see in 2038 I guess.


Nope. We just like to pretend that we did something the last two years.


I guess Robert Challender of Nevada received a DMV bill for $378,426.25 because of cosmic rays?

* https://catless.ncl.ac.uk/Risks/20/84#subj10


Not doubting this, but it seems to cut both ways. I can just as easily justify an overreaction by claiming to have averted some worse outcome. It seems to be a general problem of counterfactuals.


This is why it's often good resource management to wait until something breaks before committing resources to fix it. Especially true in software systems.

One might think that constant firefighting is a waste of resources, and we'd be better off solving problems before they happen. That's true if and only if you know for sure that the problem and eventual breakage is really going to happen AND that it's worth fixing. At least in my experience, it's more often true that people overestimate the risk of calamity and waste resources fixing things that aren't actually going to break catastrophically. Or fix things that we don't actually need, but only figure out that we don't need them when they finally break and we realize that the cost of fixing or replacing it outweighs whatever value it was providing.

The engineer in me hates saying this, but sometimes things don't have to be beautifully designed and perfectly built to handle the worst. Duct tape and superglue often really is good enough.

Of course, this doesn't apply to problems that are truly existential risks. If the potential systemic breakage is so bad that it irreparably collapses the system, then active preparedness can certainly be justified.


This is the no-brainer choice for anything that can be immediately replaced/ordered. Most of us aren’t keeping a stash of computer monitors in case of failure.

On firefighting…huge swaths of burned down land can’t be reordered on Amazon and delivered next day. People quip “just replant the trees” but of course that doesn’t rebuild an ecosystem, we might not even replant the right trees, and the things that lived there are now dead.

On personal scales, waiting for your car to break before fixing it isn't a good strategy either; nor would you wait for your gas pipes to leak, or see whether lightning actually hits your home before preparing for it.

Basically I feel "don't fix until it breaks" is a good strategy for day-to-day small-scale decisions, but problematic for most stuff beyond that.


Sources of resilience operate on three different timescales. The first is foresight. The ability to use feedforward to predict potential problems and avoid them. The second is coping. The ability to stop a bad thing from getting worse. The third is recovery. The ability to recover to a normal state once disaster has struck.

You need all three.


>This is the no-brainer choice for anything that can be immediately replaced/ordered.

Well, at least until C-19 hit then you realize that 'immediate replace/reorder' doesn't actually exist any longer, and now your forklift parts are actually going to take 2 months to show up on a delayed boat and nobody in the US has any replacements.


This is why I think most of the 'absolutes' that programmers, software architects, managers etc. talk about are not so.

For example, you must never declare 'magic numbers' in code. Or you must always obey S.O.L.I.D. or get 100% TDD. There will be people who believe in these dogmatically, to the point they won't employ anyone who says different (it becomes an interview question).

I am not arguing that these are wrong!

I am arguing that they are not evidence-driven (they cannot be; software is too complex, it is not a narrow experiment on a lab mouse). So they must be culture/preference/worldview driven.

When there is no evidence driven approach to 99% of your decisions on software it becomes: an art. And that is fine.

That said it might be possible to show evidence that an approach is good for your code base, for your team, as that is a more limited scope, rather than "in general".

What isn't fine is the number of overly confident global assertions we hear from software people about how to build software.


> There will be people who believe in these dogmatically, to the point they won't employ anyone who says different (it becomes an interview question).

Though when you’re lucky enough to be in a tight labor market it sure is a convenient filter for companies you don’t want to join!


Yes it is a no brain filter too. They have already rejected me so I don’t have to think about accepting them ;-)

Seriously though I hate shibboleth driven interviews.


I think it would depend on the context, which often depends on what the real risk is. Building 5000 nuclear missiles? Overreaction. Overbuilding flood control systems such that the region has not experienced major flooding in 100 years? Justified preparation. The tell for what is justified and what isn't is what you can remove from the system without seeing any ill effect, like a Jenga tower. We've already decommissioned thousands of nukes and the sky didn't fall, so that goes to show all that preparation was useless. Take away flood control systems OTOH and that would probably result in thousands of lives lost before long, given the odds of a bad storm in the area. Likewise with pandemic preparations (mentioned in the intro); what are the odds of a pandemic? High, so the preparations are justified.


>The tell for what is justified and what isn't is what you can remove from the system without seeing any ill effect, like a Jenga tower. We've already decommissioned thousands of nukes and the sky didn't fall, so that goes to show all that preparation was useless.

Bad example. It absolutely wasn't useless at the time to build those thousands of nukes. The whole concept of mutual assured destruction breaks down if the other side has 20x more nukes.


Wasn't it unclear for most of the Cold War how many nukes were even in play? As far as I understand it, the U.S. overbuilt nukes early on, assuming the Soviets had a lot more of them at a time when they had hardly any at all. Then the Soviets had to play catch-up with the Americans, as you state, but once again it was all for nothing in the end: those nukes were built, sat idle in silos, then were decommissioned without seeing any use at all. If you have nukes for ten targets, that would probably be enough to make a nation nonfunctional. Once the enemy launches their nukes, you launch yours and the world ends in 7 minutes. I can't imagine any nation rising from the ashes after having its 10 largest population centers annihilated. Personally I think the U.S. MAD plans of the 1950s-60s are absolutely horrific. "Mr. Secretary, I hope you don't have any friends or relations in Albania, because we're just going to have to wipe it out." The Russians were not thinking in the same terms of destruction as the Americans.


Given the disregard for human life that the Russians have typically displayed in war, including the current conflict, what would you propose as a deterrent instead of MAD?

Second question. Let's say you want to negotiate a treaty in the case where neither side trusts the other, and where neither side really has any enforcement power over the other. What mechanism would you propose, to which both parties could agree, and to which both parties could be pretty sure the other side would respect?

MAD is indeed horrific. The authors of the policy thought so as well. Everyone would very much appreciate a better solution. Until now, none has been proposed.


I mean, it’s not some unique problem of counterfactuals, right? It doesn’t seem like counterfactuals have some unique epistemological status. You can use reason to propose and criticize counterfactuals the same as any other kinds of explanations.


I’m having this struggle with projects at work. I’ve had to push back on all sorts of requests from our project manager and designer so that our team has the bandwidth to focus on some very important stability and security concerns for our next release. They want all sorts of additional fancy bells and whistles (that don’t add much user value or functionality) that we just can’t focus on right now, because it would come at the expense of making sure the feature is actually stable and secure.

I’m almost positive there will be some amount of blowback when the thing releases and there are no stability or security problems…which was only because I made sure we spent the necessary time on them.


Yep, the curse of doing things right. You can't prove it was needed.

You need to demonstrate that you are fixing genuine problems, or you will eventually be replaced by someone who delivers faster, even if there are subsequent bugs.

One way to do this is to negotiate with the business in what needs doing, using risk. If you think there is a risk of a security or stability issue then you should be able to assess that risk. The business can then choose to accept the risk and add some features, or fix the risk. It is essential that the owner of the system officially accepts the risks presented. You cannot own the risks.

This lets the business prioritise the work according to its risk appetite. And if the shit hits the fan, you are not only covered but your reputation will increase.


While this works with rational actors, the experience I have had in the industry is often the opposite. In fact, the company I work for now is probably the only company I've worked for in the last decade that actually evaluates risk correctly. The average corporate drone overseeing the engineering org is very typically the least rational actor in the entire org.

Given the opportunity, most start-up and mid-tier businesses will prioritize speed over safety. Despite my many attempts to explain this trade-off using various methods such as engineer-speak, business-speak, or some combination of the two, the need for money and the need to constantly impress investors trumps all. I have quite literally told people that the total cost of a half-fix will be more than double the cost in engineering hours of implementing a correct fix, and by and large the half-fix will be chosen because it "gets the feature out to users quicker". It's the most asinine thing I've heard, even though I fully understand the need to deliver on time and on budget.

In the end your ass is never covered. It will be your fault whether you suggested doing it and they said no, or they said yes. Your team will end up working the long hours to implement the obvious security and safety changes. The math for the other side is simple: if the cost of taking on the risk is less than the cost of implementing the fix, it will never get done. Companies use pager duty for free labor for a reason. It's the industry's most effective enabler of poor practices.

Sure, something as simple as "we should really hash our passwords" might be so glaringly obvious that even the most dense business person would understand. But when you wander into the land of ambiguity is when you really get burned. When the company is spending $XX,XXX/mo. on cloud storage because the ticket specifically said not to worry about lifecycle, it's going to be you in the office explaining why this wasn't fixed. Rarely will any business person take "it's your fault" as the answer. They'll happily assign you as many 60-hour weeks as you need to fix the problem, and in a large enough corporate-tier screw-up you may be the sacrificial lamb for the investors to feel like "the problem was solved".

Call me cynical but this is an unwinnable battle. Unfortunately, until software bugs start literally killing people, the desire to actually allow engineers to do their job will be low.


And that is as it should be (except the blaming for issues you raised).

The business should be able to choose what the priorities are. The business does not exist to produce beautiful code.

If a business wilfully disregards security or stability risks that they were informed of, and they get bitten, then they will almost certainly end up paying more in resources and engineering time to fix it. But that's the trade-off they chose to make.

If a business plays the blame game here, it's simply time to find another job. They are not going to be a good place to work at all.


As an addendum, I've been on both sides of this argument. When I became the chief technical architect for a software company, I had to tell my colleagues to build a huge pile of crap to hit important commercial milestones and keep customers happy.

I pointed out that the total cost of doing this badly, then unwinding it and doing it properly would be at least double the total cost of just doing it right.

That didn't matter. We had to hit those goals. So that's what we did. It was expensive, buggy and kept us in business. My colleagues finally understood I had turned to the dark side :)


Yet if you acquiesced to those bells and whistles and there were stability or security problems it would be much worse for you personally.

It sounds like you guys don't trust each other - they don't trust your ability to assess risk and you don't trust their ability to properly value features vs those risks. As a tech lead in those environments sometimes it helps to act like a lawyer, communicate the risks, let them make the decisions but cover your ass and get everything in writing.


Heroes get rewarded: allow something critical to break in a non-catastrophic manner and then get recognition for fixing it promptly (ideally with some high-profile drama).

There is usually far less reward for preventative measures that avoided breakage in the first place.


Ideally, the fact that you're working on internal maintenance shouldn't even be known to external customers.

My rule of thumb is that 1/3 of engineering time needs to be spent on maintenance.


You should add an additional rule: another 1/3 of the time will be needed for maintenance of the 1/3 spent on new features.

1/3 = adding new features.

1/3 = maintaining the security posture of the application and keeping it up to date.

1/3 = spent figuring out why the new features blew up in the field after passing all testing.


I've seen this so much regarding covid-19 in Australia, especially at the start. Everyone I know complained bitterly at the restrictions we had in Sydney, even when they weren't that strict, saying "but only 5 people have died" or whatever. I'd say "Yeah, because what we're doing is working. Look at Italy, or the UK, or France, or the USA".

Interestingly, there was a clear divide between people (like me) who had family & friends overseas and worried about them, and people (like my in-laws) who didn't.

Eventually we had to stop discussing it because my in-laws refused to consider anything happening outside of Australia as relevant input.


Australia is on track to have a per capita death rate similar to Norway, which had far fewer restrictions. If it hits that point, will you still think the heavy-handed response was worth it in the end?

The Australian response in general was over the top:

* Postcode based restrictions targeting poor areas and not always related to actual case numbers

* Locking healthy people in their homes under 24hr police guard, all deliveries to their house inspected (people are shipping in covid apparently?), alcohol purchases limited.

* Banning outdoor playgrounds, exercise and confining people in their homes instead of allowing them fresh air.

* Flying PolAir choppers hundreds of km to fine people camping in remote wilderness.

* Protesting made illegal.

* Nearly all laws made by decree and never actually facing a democratically elected parliament to debate.

* Making it illegal for a citizen to return home from India under threat of jailtime.

* Essentially banning citizens from returning home by drastically limiting flight numbers.

* Forcing all returnees to quarantine in 5 star hotels not fit for purpose while foreign celebrities and sport stars could choose to go to B&B's in the countryside.

It was a dark time for Australia in my opinion, yet somehow many seem to agree with it. Locking out our own citizens was one of the most popular policies ever; it consistently polled 90+% for over a year. I find something like that terrifying, and it speaks volumes about the mindset of the average Australian, who loves to harp on about mateship.


The vast majority of deaths here in Australia, since the end of lockdowns and closed borders, have been amongst the unvaccinated, the elderly, and the immunocompromised (particularly the first of those three). We're not seeing young, normally healthy people dying in their thousands, as happened in much of the world at the height of the pandemic. Our response worked, and we've avoided widespread suffering, and many Australians apparently don't appreciate that, I guess due to the preparedness paradox!


> as happened in much of the world at the height of the pandemic

The median age of death in Australia is ~83yo. It's not far off from comparable European countries, which are all around 80yo too. (I don't consider the US comparable for a number of reasons; western Europe is generally the baseline we draw for most health metrics.)

The UK has a median age of 83yo also, despite their horrific death toll and lax attitude compared to Australia. [1][2][3]

So the onus here is on you to provide evidence to back up the claim that healthy young people were "dying in the thousands".

[1] https://www.ons.gov.uk/aboutus/transparencyandgovernance/fre...

[2] https://www.ons.gov.uk/aboutus/transparencyandgovernance/fre...

> Meanwhile, deaths among those aged 44 or younger made up under 2% of the total

[3] https://www.theguardian.com/world/2022/jan/16/what-do-we-kno...



Well, why not "look at Sweden", then?


I don't understand why it's called a paradox. It's just people having trouble understanding counterfactuals. Getting better at systems thinking is a great way to get better at avoiding this. At work I've learned to point out "we wouldn't need to spend time on this if we invested the time to implement X", so the product folks are more aware of the counterfactuals when it comes time to justify the investment.


> It's just people having trouble understanding counterfactuals.

That's one issue, but another issue is how accurately we can estimate the counterfactual outcomes. In the case you described, where some up-front investment can reduce costs later on, the accuracy of the estimate of the counterfactual is usually fairly good. But when we talk about society-wide or planet-wide outcomes, our accuracy is much worse. Even in many cases where it seems fairly obvious that an up front intervention mitigated significant harm, we really don't know that with a very high level of confidence. There are just too many uncontrolled and unmeasured variables.


> It's just people having trouble understanding counterfactuals.

You've just described most paradoxes. From the definition of "paradox":

> a seemingly absurd or self-contradictory statement or proposition that when investigated or explained may prove to be well founded or true.


How odd, I've never come across that definition of paradox. I've always understood it to be purely self-contradictory, like: This sentence is false. If I take it to be false, it's true; if I take it to be true, it's false. The proper understanding is that it actually has no semantic meaning, but it certainly doesn't prove to be well-founded or true.

Using "paradox" for something like this concept though is along the lines of also using it for the phenomenon of people appearing to vote against their self-interest. They keep doing it, we don't understand why - it might be that they're stupid, it might be that we don't understand enough of their perspective, but it just doesn't strike me as a paradox. Not unless every phenomenon we don't understand is also a paradox. Are software bugs paradoxes?


Yeah, this is something that has always bugged me a tiny bit. I was more familiar with the idea of a paradox as something like your definition: containing an actual contradiction. But it seems to be used instead to describe any initially counterintuitive situation.

It is tempting to attribute this to a technical/non-technical difference (similar to fallacy, which in non-technical discussion has been expanded to basically include almost any bad argument). But somehow the Birthday "Paradox" has managed to stick in probability.


Paradox isn't synonymous with contradiction. Some paradoxes are, or contain, logical contradictions (i.e. they effectively say both X and not X are true) but the term is much broader.

Some of the earliest paradoxes are Zeno's, and they were referred to by that term at the time. For example the paradox that an object that moves towards a point must first cover half the distance, and then half the remaining distance, then half of the remainder, etc. Since this is an infinite number of steps, Zeno playfully argued that motion is impossible. There's no logical contradiction there, just a way of pointing out something counterintuitive about reality and maths.


That's fair, the thing I'm looking for isn't quite a contradiction.

What I like about Zeno's motion based paradoxes is they have this aspect of "here's a reasonable model of motion, and here's the ridiculous result you get from it." There's clearly something wrong in the model, but working it out takes a while, you need someone to come around and invent series first.
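
For what it's worth, the resolution the thread is gesturing at is just the convergent geometric series (the standard textbook sum, nothing specific to Zeno's original wording):

$$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{n=1}^{\infty} \frac{1}{2^{n}} = 1$$

so the infinitely many steps cover a finite total distance, and at constant speed they take a finite total time as well.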

After a little reading, I think I just like falsidical paradoxes and don't like veridical paradoxes.


I always heard it called the birthday problem: https://en.wikipedia.org/wiki/Birthday_problem

[weirdly, someone at work discovered a birthday overlap today and I just re-googled/wiki'd this]
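
If anyone wants to see why it earns the "paradox" label, the arithmetic is tiny; here's a rough C sketch of the textbook no-collision product (my own illustration, not anything from the linked article):

    #include <stdio.h>

    int main(void) {
        /* Probability that among n people at least two share a birthday,
           assuming 365 equally likely birthdays and ignoring leap years. */
        double p_all_distinct = 1.0;
        for (int n = 1; n <= 30; n++) {
            p_all_distinct *= (365.0 - (n - 1)) / 365.0;
            printf("n = %2d: P(shared birthday) = %.3f\n", n, 1.0 - p_all_distinct);
        }
        return 0;
    }

The probability crosses 50% at just n = 23, which is the counterintuitive bit.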


Huh, what are the odds that the same mathematical concept will come up in conversation twice on one day? We have invented the Synchronicity paradox, er, problem.


Baader–Meinhof phenomenon or frequency illusion.

Sadly nobody called it the Baader-Meinhof paradox.


Your notion of paradox is more precisely known as an antinomy:

https://en.wikipedia.org/wiki/Antinomy

Yes, all antinomies are paradoxes but not all paradoxes are antinomies.


An antinomy that isn't a paradox would be paradoxical indeed.


>How odd, I've never come across that definition of paradox. I've always understood it to be purely self-contradictory, like: This sentence is false.

That's just one kind of paradox in one domain (say, logic). There are well known named paradoxes of several different types, belonging to several different domains...


The reverse may also be true: that the "preparedness" truly was unnecessary. No one will ever know.


I guess that gets me closer to understanding it, thanks. Consider an example where the potential outcome is truly unknowable: if we don't prepare, it might happen; if we do prepare and nothing happens, it might never have happened anyway. So in that sense, the Y2K bug isn't a good example, but perhaps preparing for catastrophic low-probability events like "AI paper-clip doom" is.


I saw this playing out in the California Super-storm stories that went out late last week. Some of the headlines made it sound like a mega-storm that would bring 10 feet of rain was just offshore. Only reading the article revealed that such a storm could happen sometime in the next 50-500 years.

No real indication of what anyone should DO with such information.


>No real indication of what anyone should DO with such information.

Do you build infrastructure? If no, the fact that 10 feet of rain can fall is mostly useless, unless you're buying a house downstream of a dam to keep in the family.

The situation is widely reported because it's a great fear based headline and gets the clicks. With that said, it's something every civil engineer that could build a project in CA should know about, because their decisions now could lead to the death of a very large number of people at some point in the future. That marginal wetland that could be reclaimed for housing shouldn't be, unless you like the news that comes out of Houston every once in a while.


>I don't understand why it's called a paradox. It's just people having trouble understanding counterfactuals.

So? Most paradoxes can be described as "people having trouble understanding X".

The Liar's paradox is "people having trouble understanding meta-statements" (at least according to Russell's theory).

Zeno's Achilles paradox is people not understanding convergent infinite series.

The Potato paradox is people not understanding algebra.

The Friendship paradox is people not understanding statistics.

And so on...


It seems that tunesmith has stumbled upon the paradox paradox.


Agreed, I think it should only be called a paradox if it met the pattern “the more effort you spend on preparedness, the less prepared you become…”

Otherwise it just seems like a relationship between two variables…


A few months ago, I wrote a review of the book A Libertarian Walks Into a Bear. The book describes the Free Town Project, a kind of offshoot of the Free State Project in New Hampshire, in which people moved to a particular town in order to try to reduce the role of local government in their lives. The book notes that the town then had significant difficulty coordinating on wildlife control issues, as there were lots of bears in the nearby woods and the residents had trouble agreeing on what to do to keep them away from people.

While the issues were somewhat complex and not solely the result of the Free Town Project, it seemed clear that the lack of governmental coordination and some residents' bear-attracting behaviors made the bears' presence a bigger problem than it had been before.

One thing I thought several times while reading the book was that the preparedness paradox was a big part of the challenge (although I didn't remember that it was called that!). Specifically, it seemed like quite a few of the people involved sincerely thought that wildlife management or wildlife control wasn't "a thing" because they had only ever lived in places where it was already being handled well. So they didn't perceive any need to continue actively addressing it in their new environment, because it seemed like such a hypothetical or fanciful risk.

Since then, I've thought that the question of understanding or evaluating what is a real risk that one needs to make a real effort to deal with gets extremely clouded by all of the things that people and institutions are already doing in the name of risk mitigation. We've seen this most dramatically with measles vaccines (where people felt like measles was an incredibly remote risk, because they had never seen it occur at all in their environments, because other people had successfully mitigated it by vaccination and hygiene programs in earlier generations!). But I imagine that this comes up over and over in modern life: how do people get a clear sense of what is dangerous (and how dangerous it is) when they already live in settings where whatever degree of danger exists is already being dealt with well, so most people rarely or never witness its consequences?


“Y2K was a hoax”, is an example of this bias.


We'll be able to retry the same scenario but without preparation in 16 years.


Why without preparation? We all know the epoch integer will overflow on 32-bit systems in 2038.
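
For anyone who hasn't looked at the mechanics, here's a minimal C sketch of where the 2038 limit comes from (it assumes the host has a 64-bit time_t so the rollover moment can actually be printed):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        /* A signed 32-bit time_t counts seconds since 1970-01-01 00:00:00 UTC
           and tops out at 2^31 - 1 = 2147483647. */
        time_t last = (time_t)INT32_MAX;
        printf("last 32-bit second: %s", asctime(gmtime(&last)));
        /* One second later a 32-bit counter wraps to -2147483648, which a
           signed interpretation reads as 1901-12-13 20:45:52 UTC. */
        return 0;
    }

It prints Tue Jan 19 03:14:07 2038; how much actually breaks then depends on how much 32-bit time_t is still baked into embedded systems and file formats.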


Because, it seems, Y2K was a hoax (see ggp), so why should we prepare next time?

Plus, no one in a position to decide will understand the significance of such a weird date.


The systems that would have suffered the most are the old COBOL systems that used to run the world. They were mostly fixed for Y2K.

Taiwan and Japan have their own versions of the Y2K problem in 2011 and 2025 respectively, due to era names. Nothing big happened for Taiwan, and I can't see big problems coming up in 2025 or 2038.


Then why didn't South Korea and Italy suffer Y2K problems, despite investing little to nothing in Y2K remediation?


I'd love to see a source for that.

This article from 1999 I found suggests that South Korea was worried about North Korea's preparedness for Y2K, https://www.deseret.com/1999/12/17/19480898/s-korea-worried-.... That seems to suggest that South Korea itself would have been making sure its own systems were secure.

Is it simply possible that most of their systems were newer than those in other countries, so updates weren't necessary?


Y2K wasn't a hoax, but it was exploited and hyped.

There was definitely the possibility of really bad impacts on critical infrastructure. If we had all behaved like Italy then I think it could have been quite bad.

The majority of y2k work I saw on the ground was companies using it as an excuse to upgrade all their kit. I did some assessments and was asked more than once to emphasise the risk a bit more.


And also Russia, a country with a nuclear arsenal, did nothing for Y2K.


Little reliance on Y2K impacted platforms?


Like what platforms?


I invested little to nothing in tiger remediation, and lo and behold, no tigers!


Imagine if you did invest. You would be called a sucker.


That really depends if you live in Iowa or Sumatra.


A problem is the media hyping things too much. If not for media hype, maybe this paradox would not be so prevalent or such a problem. People's expectations are in part formed by the media.


I see this all the time in software development. No media hype involved.

I worked with a senior engineer who had a brilliant knack for finding design flaws in review (usually security or performance issues) and would put in heroic efforts to fix them before they went to production. Someone privately called him out as an obstructionist - "He's constantly worried about BadThing happening, but it never does! He's just wasting time.". I politely corrected them - "Did you ever consider that BadThing never happens BECAUSE he's constantly worried about it?"


Related: https://web.mit.edu/nelsonr/www/Repenning=Sterman_CMR_su01_.... "Nobody Ever Gets Credit for Fixing Problems that Never Happened" by Repenning and Sterman.


I'm getting a 404, is there an archive link elsewhere?

Edit: It's been fixed.


Nobody got credit for the site being up so the sysadmin quit


Fixed the link.


No doubt this existed before the media. Think of when you were a kid, and your parents were always making you pick up your things, saying people would trip on them. But you knew how dumb they were, because nobody ever actually tripped on your things...what you didn't realize is that was largely because your parents made you pick them up.


This also manifests in for instance how we treat testing in software engineering. Folks don't get as much credit for writing tests because it's impossible to count the set of SEVs that didn't happen. On the other hand, you get outsized credit for the heroics of fixing them.


Testing is a loss of time. It absorbs about 50% of the workforce, and projects that don't have it don't necessarily suffer.

Also, ask an engineer whether the tests are complete, and he'll always tell you that we haven't tested anything yet. You need a cutoff at some point.


I think you're demonstrating exactly the fallacy that I identified.

I know personally I've caught massive issues in my own unit testing of my own code - so I know for a fact it's not a dead loss of time. I'm also not sure why you think it takes 50% of the workforce - that's never been my experience.

The trick is knowing what to test, how much to test it and how long to spend.


I make 600k ARR with software that has 5% test coverage (i.e. only the clever parts are tested).

And the one risk I faced was that I should have used React, because recruitment on this product will fail me in the long term.

Tests are only good for some situations.


You can't A/B test a lot of things without a time machine so you need to be good at assessing risks and tradeoffs.


Glad the article mentions Y2K. The massive hype and hysteria resulted in all the systems of significance being patched.


My dad uses Y2K as an example of why climate change is overblown.


Well, there was certainly a similar panicked "sky is falling and society will end as we know it" vibe back then among some people, not unlike some of the more rabid climate change activists' rhetoric I have seen.

I’ll readily admit that I take some of the more dire climate change predictions with a grain of salt because of my Y2K experience. It was a damn nice time to make some money off the panic though.


It's kinda hilarious in hindsight. Almost everything could still be done analog back in 2000. If we had the Y2K problem _now_, the sky might actually fall.


How many Y2K patches did you install in 1999?


Writing a few, not installing. I was still in hardware development, with a bit of software exposure. Using a shortcut to represent and calculate dates was the norm at the time.
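
To make the shortcut concrete, here's a minimal illustrative C sketch of the usual two-digit-year comparison (made up for illustration, not code from any system I worked on):

    #include <stdio.h>

    /* Store only the last two digits of the year to save space, then
       compare them directly. Fine within 1900-1999, wrong across 2000. */
    static int is_expired(int expiry_yy, int current_yy) {
        return current_yy > expiry_yy;
    }

    int main(void) {
        /* A card that expired in 1999 ("99"), checked in 2000 ("00"):
           0 > 99 is false, so the expired card is accepted. */
        printf("expired 99, checked 00 -> expired? %d\n", is_expired(99, 0));

        /* A card valid until 2002 ("02"), checked in 1999 ("99"):
           99 > 2 is true, so a still-valid card is rejected. */
        printf("expires 02, checked 99 -> expired? %d\n", is_expired(2, 99));
        return 0;
    }

The fixes were mostly windowing (treating small two-digit years as 20xx) or widening the fields to four digits.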


https://en.wikipedia.org/wiki/Roselyne_Bachelot

Excerpt from Wikipedia: In 2009, Roselyne Bachelot, then the French minister for health, [ordered 94 million vaccines from Sanofi Pasteur, GlaxoSmithKline, Novartis and Baxter International for the French Government at a cost of 869 million euros (and an option on 34 million additional vaccines in 2010) to fight against the H1N1 influenza virus; however, less than 10% of the French population (about 6 million people) had been vaccinated by the end of the winter. She later canceled over half the flu vaccines ordered to combat the virus, in an effort to head off criticism after reserving too many shots.]

This lady was dragged through the dirt for so long over this. The episode was politically exploited: she was accused of falling for pharmaceutical lobbying and of being paid on the side for ordering that many vaccines. Only in 2020 was she praised for her actions.

Excerpt from the French Wikipedia: [Retrospectively, her action was judged appropriate during the COVID-19 crisis, when France ran out of masks.]


The best strategy for countering the preparedness paradox is to prepare, while simultaneously telling everyone else not to prepare/over-react. Then you get the benefit of preparedness AND the proof that it was required. Win-Win(-lose)!


There's a book kind of premised on (the logical opposite) of this...

"The End Is Near and It's Going to Be Awesome: How Going Broke Will Leave America Richer, Happier, and More Secure" [0]

[0] https://www.amazon.com/End-Near-Its-Going-Awesome/dp/0062220...


So that's why no one will buy my tiger-repelling rock!

(It's so effective at keeping the tigers away that no one around here is concerned about tigers.)


People called me crazy for doing the rain dance to prevent volcanos, but just imagine if I stopped!

I kid.

Point is, not every preparation is correlated with actual prevention.


See also the (oft-submitted to HN) Tale of Two Programmers:

https://c00kiemon5ter.github.io/code/philosophy/2011/10/30/T...


I generally don't prepare for anything because the opportunity cost of being prepared for something is almost necessarily against responding to the events I am not prepared for - and those are the ones you really have to worry about.

The secret I find is to always be ready for the things you aren't prepared for.


This makes me wonder: how different would the COVID pandemic death toll have been if governments hadn't changed anything? No travel bans, no lockdowns, etc.

I suspect many people would still voluntarily use masks, self-isolate, protect their elderly and take other precautions.


Very different.

It would have spread very rapidly, overwhelming health systems utterly. Do you remember the mask shortage early on in the pandemic? Do you remember the oxygen shortage recently? Have you heard the news about nurses and doctors quitting because of burnout? Imagine all of those dialed up to eleven, all at the same time. Along with shortages of cleaners, orderlies, and basic hospital supplies.

Nearly all the people whose lives have been saved by treatment in intensive care units would be dead, and many more besides: accident victims, cancer patients, etc., etc.

The sickness could have spread rapidly enough that essential services were entirely out of action for long periods of time. No water. No power. No air traffic control. No road repairs. No trains. No food transport. All of these at the same time, for weeks.


Both Japan and Sweden were very hesitant to impose legally compelled rules compared to most other developed countries. People behaved as you described. Though one can debate whether they would have behaved even more so under a compelling order.

In hindsight, I suspect the biggest factor was not whether it was compelled, but whether people could afford it. (Plenty of payments to stay home or keep workers home were still made in Japan and Sweden.) If your rent depends on providing black market haircuts, you'll still perform them despite the ban. And if you're allowed to do haircuts, but the government will instead pay you to stay home to avoid the epidemic disease going around, maybe you'll just stay home.


If you could get accurate data on the different strategies that countries used and their results you could extrapolate with huge error margins but it would give you a general idea. However, many countries did not accurately report: testing numbers, results, outcomes, well practically everything!


Looks like we'll get a chance to find out with Monkeypox!


Probably would have done better: https://www.covidchartsquiz.com/


I didn’t know this had a name but I was actually thinking about this idea while on holiday.

After putting on sun-cream and insect repellent, I'd later be wondering whether I had really needed to: did I not get bitten by mosquitoes because there weren't any in that area at that time of day, or was it because of the insect repellent? Did I not get burnt because there had been enough cloud to block damaging ultraviolet radiation, or was it the sun-cream? At the start of the holiday, I was bad at estimating the risk (due to lack of experience), so I called it wrong and ended up getting bitten and slightly sunburnt.

I compared these experiences with those of acquaintances who had been vaccinated against Covid-19. When they, and others they know personally, finally got infected, they concluded it wasn't that bad and that the health services and the government had been over-reacting. I tried to persuade them that comparing Ireland against other countries with lower vaccination uptake showed that the decreased severity of the disease on a macro scale was most likely due to the vaccines doing their job.


Regarding the pandemic, in my filter bubble people said, "There's no glory in prevention."


This assumes exact knowledge of an alternate reality. It is not a paradox in any way.


When the trains are running smoothly no one celebrates the train engineer.


Another variation is "Why are you dieting? You're not fat!"


Most people dieting are fat. Where is the paradox?


The idea is that you don't need to diet if not fat.

But in reality you're not fat because you're dieting.


This is explored in the Book of Jonah, thousands of years ago.


This is why I don't wear a helmet on my bicycle


A core logical fallacy made by anti-vaxxers.


Incoherent.

How can there be a levee paradox?

You can see the water it holds back.

More people build because there are fewer floods.

No idea what they are talking about with Fukushima.

The Millennium Bug is a good candidate, but that's a debate in itself.


Good point about the levee issue. Apparently there's a little wikiwar going on with that one already.

I couldn't follow their logic with Fukushima either. The wording was a little strange.

The Year 2000 scenario and covid scenarios are great examples IMO. The problem is that any great example is intrinsically going to be controversial, and that seems to be the paradox itself.


For a real world example, you can look to the hole in the ozone layer. This conservative commentator and roughly 42k twitter users agree that we "suddenly just stopped talking about it", when in reality governments implemented bans on CFCs that mostly solved the problem.

https://twitter.com/mattwalshblog/status/1549713211188027394


These examples can be made up arbitrarily, for any situation. You can always say that if it weren't for x, y would be even worse.

"If people didn't carry guns, there would be more violence."

"If we didn't start climate talks , climate change would be even worse."

etc...


> How can there be a levee paradox.

> You can see the water it holds back.

> More people build because there are less floods.

And then, because of that, when the levee breaks, more damage is done than if the levee hadn't been there in the first place. The paradox is that trying to be safe can cause more harm.


This has also been described as risk compensation

https://en.wikipedia.org/wiki/Risk_compensation



