This right there is a big red flag. The whole bendy business is bad enough, but here you're actively training people to ignore the wolf cries.
Don't allow false failures in tests. The entire test suite needs to be binary: either everything works, or it fails.
We're using PHP (so runtime warnings instead of compiler issues), but we have a zero-error policy: if anything raises a warning or error, we refuse to merge (or ticket a fix, since a bunch of stuff is legacy and we're still paying it down).
Starting in 5.4, and far more prominently in 5.6, PHP began breaking backwards compatibility, and failing to respect notices could easily lead to errors when bumping up to version 7.
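The same zero-warning gate can be sketched outside PHP too. Here's a minimal Python analogue (the names `run_strict` and `noisy` are hypothetical, for illustration only), promoting every warning to a hard failure so the suite stays binary:

```python
import warnings

# Promote every warning to an exception: either everything is clean,
# or the run fails -- no "soft" failures to normalize.
warnings.filterwarnings("error")

def run_strict(fn, *args):
    """Run fn; any warning it emits becomes a hard failure."""
    try:
        return True, fn(*args)
    except Warning as caught:
        return False, caught

def noisy():
    # Emits a warning that would normally scroll by unnoticed.
    warnings.warn("deprecated API", DeprecationWarning)
    return 42

ok, result = run_strict(noisy)
# With the "error" filter active, ok is False and result is the warning.
```

A CI job would simply exit non-zero when `ok` is false, which is the whole point: the warning can't be waved through.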
In fact, maybe NASA should have similarly ‘unbendable’ (cough) software checklists. The temperature is 39°F? Red color says no launch today.
This kind of content is what I come to HN for!
What's better, it's not just about disabilities. You've got kids to take care of? Here goes one spoon. You are not a native speaker and a bit slow in getting what others say? Here goes another one. A sick spouse at home? The third and the last one is gone and that's when things become interesting!
It is a very different thing from having to take care of a child, not being a native speaker, or having a sick spouse, which describe additional difficulties layered onto an otherwise functional human being. Spoon theory explicitly applies to the context in which fundamentally functional actions like self-care take significantly more resources than expected.
I have some concern that this is actually a dilution of the original meaning, one that removes its usefulness to the chronically ill community. That community has to deal with the stigma of being potentially severely limited by an "invisible" illness, and spoon theory was an attempt to describe the "invisible" illness's effect on a person's life (thereby aiding in destigmatization). By diluting the original meaning, we may be contributing to the continued stigmatization of "invisible" illnesses, as spoon theory now comes to mean something almost opposite to its original meaning.
On the other hand, do we really want to build this wall by saying there are chronically ill people, there are healthy people and these two crowds are governed by completely different rules, have absolutely orthogonal problems and no parallels can be drawn between them to avoid the "dilution"?
Or we'd rather say that disabled people's problems are the same as healthy people's problems, just worse and affect your life in more aspects? Wouldn't that remove the stigma and help us understand each other better?
Disabled people's problems are not the same as healthy people's problems. Spoon theory was a way to describe exactly how it's not the same.
Maybe this helps to build understanding between the two groups initially, but maintaining a hard and impassable empathic barrier between them will eventually lead people to say, "Why should I care? Every time I try to relate, they say I'm not one of them."
In this specific case, why is it such a bad thing to expand a metaphor for costs to build that shared empathy? Then we can all relate to each other better.
Building empathy is the skill of identifying with situations that are not one's personal situations. Arguably, "expanding" a metaphor to suit one's personal situation is the opposite of fostering empathy.
EDIT: Regarding "Why should I care, every time I try to relate they say I'm not one of them?": spoon theory was never meant to be something to relate to. It was a way to abstract a very real, broadly minimized problem into a finite situation, in order to explain a phenomenon that a healthy person explicitly does not need to deal with and cannot relate to. It would be similar to me explaining a technical problem with a metaphor; I'm not asking you to relate to the problem, I'm asking you to understand it.
As for why should you care? Because this person is in a lot of pain and is dealing with a lot of difficulties that I'm not dealing with, and in fact, upwards of 40% (more maybe) of people in my country (USA) have a chronic/incurable condition. The scale of the problem means that developing a compassionate, understanding view will greatly benefit a significant portion of people one may meet.
Do we have any reason to think that health status / disability is discrete rather than continuous?
I'm reminded of Ludwig Guttmann, who founded the Paralympic Games and revolutionised the care of people with spinal cord injuries. Prior to Guttmann, the received wisdom was that paraplegics would inevitably die within a few years from a variety of complications; the standard treatment was simply palliative. Guttmann realised that paraplegics could live long, active and meaningful lives with the right programme of physical and psychological rehabilitation. His approach was tough bordering on brutal, but it saved and transformed thousands of lives.
I worry that spoon theory nudges us back towards old attitudes to chronic illness and disability that define people in terms of their limits rather than their abilities. We can't pretend that everyone has the same opportunities and capabilities, but it doesn't help anyone to think of ability as a finite resource that is continually depleted. We all have the opportunity to develop and grow.
Spoon theory is more of a metaphor, trying to describe the penalty in time, energy, and capacity that a disability applies to your day-to-day life, to people who have enough of each to handle the mundane.
One problem I have is that spoons theory says, "You have 20 spoons per day, I have 5." While the end result is the same, it's more that getting dressed in the morning requires 1 spoon for me and 4 spoons for someone with a disability. Similarly, other mundane activities require four times the amount of time and energy.
Also you can borrow spoons from the next day, either by skipping sleep or just outright pushing through an activity well beyond your disability's limit. However, then you have less time and energy the next day.
It's definitely in line with normalizing deviance. Most people who can hold down a job and an independent life with a disability are operating at their limits. They have no spare spoons for emergencies or unexpected events. The people around them normalize that, and then become surprised when "one small thing" blows everything out of whack for two days. That person had no more spoons to spare. They borrowed from the next day, and now their fibro has them bedridden, the stress triggered a manic episode, etc.
Planners think of the speed of a road as intrinsic to its design: if people are going too fast on a road, you need to change it by narrowing it or putting in bumps or something.
The other side of this is to think of the speed of a road as based on what's around it: if there are a lot of houses on a road, people should go slower so they don't hit anyone, so you put in speed limits.
But the problem with limits is that since the road feels faster than the speed limit, people just go faster than the limit; and since changing the road is a lot more expensive than just putting up speed limits, that tool is used far less frequently.
While it is a no-brainer to say every vehicle should travel far enough behind the one in front to stop before hitting it, spacing vehicles out like this results in longer travel times, as you are effectively reducing road capacity by having each vehicle take up more space on the road. Instead, it's (politically) easier to have each person take on the individual risk of a collision.
However, with autonomous vehicles, it should be politically easier to enforce this constraint, because liability will shift from individuals to (presumably) the manufacturer, and the manufacturer isn't going to stick its neck out so individuals can save time on their commute.
This is for two reasons:
1) slower traffic requires a smaller safety gap between each vehicle.
2) the dominating factor in traffic jams is often people responding to the brake lights of those in front of them. People overcompensate, and a "braking bubble" (a soliton: https://en.wikipedia.org/wiki/Soliton ) forms in the traffic, which causes it to get even more spaced out.
Keeping traffic flowing is more important to throughput than keeping it flowing fast. Constant braking and speeding up on a busy road means the flow rate is constantly disrupted, which causes the traffic to tend towards clumping and congestion.
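The gap/throughput tradeoff above can be put into rough numbers. A back-of-the-envelope sketch (the constants are illustrative assumptions, not traffic-engineering data): if every driver keeps a full stopping-distance gap, flow past a point actually peaks at a moderate speed, because the quadratic braking distance eats the gain from going faster.

```python
def throughput(v, car_len=5.0, reaction=1.0, decel=5.0):
    """Vehicles per second past a fixed point, assuming each vehicle
    keeps a full stopping-distance gap: reaction distance (v * t)
    plus braking distance (v^2 / 2a), on top of its own length."""
    gap = car_len + v * reaction + v ** 2 / (2 * decel)
    return v / gap

# Flow in vehicles/hour at various speeds (m/s).
flows = {v: round(throughput(v) * 3600) for v in (5, 7, 10, 20, 30)}
# Under these assumptions the peak sits near 7 m/s (~25 km/h);
# 30 m/s moves fewer cars per hour past the same point.
```

This is only a sketch; real drivers keep shorter-than-stopping-distance gaps, which is exactly the individual risk-taking the comments above describe.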
You're not "in traffic"; you "are traffic".
Throughput at a fast speed with minimal spacing between vehicles is higher than throughput at a slower speed with the same minimal spacing. I see this in every urban area: people sacrifice safety for speed.
The author is underplaying the problem here. There were tests that showed burns through the O-rings, and the reports rationalized the danger away, not through normalization of deviance but through deceptive language.
It's a lot more like having an audit showing that no users were observed writing a password on a sheet and putting it in their wallet, and since existing password sheets stored in wallets don't match an idiosyncratic definition of "written down", they pass the audit.
That's not to say that normalization of deviance didn't happen. Obviously both it and a more direct type of corruption happened. But I get the sense the author here is trying to cram everything into the former to make a tractable problem out of a messy political situation.
I’m so used to seeing this, but I still don’t get it. Seriously, we spend $X/week on some nice-but-unnecessary luxury, but won’t spend $X/16 once for some really useful thing.
I suspect that it has to do with how corporate budgets are designed, but … maybe they could be designed better?
It's rarely quite so concrete, but because money is so countable, and so explicitly subject to control, it is seen most keenly. Most companies have some sort of "financial controller" role, and it all flows downhill from there.
It's where the false positive / false negative phenomenon comes in, too. The person doing the controlling has an incentive to reject as many requests as possible, because they get criticised for any retroactively identified as "waste" - but they don't get to see, and can't count, all the time wasted dealing with the process and opportunities lost as a result.
I once worked for a small company that would let you buy anything under £100 on the company card so long as you sent in an explanation by the end of the month, preferably identifying a client it could be billed to. This worked very well, because when you put "£100k engineer time, £10k custom electronics, £10 misc stationery" on the same invoice, no sane person is going to question the stationery.
I went cold reading that.
I assumed the explosion took the whole shuttle out instantly.
Now contrast this with the wiki entry "NASA managers also disregarded warnings from engineers about the dangers of launching posed by the low temperatures of that morning, and failed to adequately report these technical concerns to their superiors."
Here's another quote, from the beginning of the article: "... I think it’s too easy to think of it as just a random-chance disaster or just space/materials engineering problem that only has lessons relevant to that field. And that’s not really the most important lesson to learn from the Challenger disaster!" [my emphasis.]
Now this is supposed to sound like an engineer: "But then one day you’ve missed a bunch of launch windows and it’s 28F and the overnight temperatures were 18F but you did a quick check of the designs and specs and you probably have enough safety margin to launch, so you say GO."
But in reality the engineers never said that. The managers made the call in opposition to engineering.
If you want an example of normalization of deviance in engineering, you need the engineers to actually say "GO" and for there to be a disaster. You can't have the managers "dismissing the engineers' concerns" and then turn around and suggest that engineering normalized deviance. That's simply the wrong lesson here.
You just made that up. No-one else is reading it that way, and for a good reason: it isn't written that way.
Also (following on from what pjc50 wrote above), most (if not all) of the managers referred to in your wiki quote were also engineers. So not only did the author not make the claim you attribute to him, the claim could arguably be justified even if he had made it.
Your criticism of the article fails on at least four grounds: 1) the managers in question were engineers, as is often the case in large, highly technical projects; 2) engineers can make mistakes, and did so here; 3) engineers can disagree (especially when some of them make a mistake), and did so here; 4) the article does not actually make the claim you say it does.
The primary issue here is the choice of example (Challenger disaster) to explain "Normalization of Deviance".
It's like if you were an SSE working on a product that had some big security issues.
You tell your manager not to launch the product because there is a high risk of a very bad security violation.
The manager under pressure to get it done decides to ignore your warnings and launch the product.
There is a huge security violation just after the product launch and it causes a scene.
Why reach for some pet theory (Normalization of Deviance) to explain this?
This is just the ongoing tension between the desires of higher management and the reality that those on the ground know.
I think the reason why I am so insistent on this is because I am worried about the wrong lesson being learned.
Politics and Power play far more of a role than we like to admit.
This hierarchical structure is also very good at deflecting and distributing blame.
Is Normalization of Deviance just another excuse to explain the commonality of bad management?
I encourage you to read both reports, available from NASA:
The term 'normalization of deviance' was coined precisely because accident investigators have found a recurring pattern. The recognition of a recurring pattern is more helpful in accident prevention than simply attributing each incident to "just the ongoing tension between the desires of higher management and the reality that those on the ground know."
Here's another example of it, not involving engineers, and where explicit management pressure was not an issue: