So let's take those disasters and list the lessons that you would have learned from them. That's the way to constructively approach an article like this; out-of-hand dismissal is just dumb and unproductive.
FWIW I've seen the leaders of software teams all the way up to the CTO run around like headless chickens during (often self-inflicted) crises. I think the biggest lesson from the Titanic is that you're never invulnerable, even when you have been designed to be invulnerable.
"None of these are exhaustive and all of them are open to interpretation." Good, so let's improve on them.
One general takeaway: managing risk is hard, especially when working with a limited budget (which is almost always the case). Just the exercise of assessing and estimating likelihood and impact is already very valuable, but plenty of organizations have never done any of that. They are simply utterly blind to the risks their org is exposed to.
Case in point: a company that made in-car boxes that could be upgraded OTA. And nobody thought to verify that the vehicle wasn't in motion...
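To make the point concrete: the missing safeguard is a trivial precondition check. A minimal sketch in Python, where read_vehicle_speed_kph() and apply_update() are purely hypothetical stand-ins for whatever the real in-car box exposes:

```python
import time

MAX_SPEED_KPH = 0.0           # only allow the update when the vehicle is stationary
STABLE_READINGS_REQUIRED = 5  # demand several consecutive "stopped" readings

def vehicle_is_parked(read_vehicle_speed_kph) -> bool:
    """Return True only if the vehicle repeatedly reports zero speed."""
    for _ in range(STABLE_READINGS_REQUIRED):
        if read_vehicle_speed_kph() > MAX_SPEED_KPH:
            return False
        time.sleep(1)
    return True

def try_ota_update(read_vehicle_speed_kph, apply_update) -> bool:
    """Apply the OTA update only when it looks safe; otherwise defer it."""
    if not vehicle_is_parked(read_vehicle_speed_kph):
        return False  # reschedule rather than flash firmware at highway speed
    apply_update()
    return True
```

In practice you'd also want to check ignition state, battery level and so on, but even a naive speed gate like this would have caught the scenario above.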
There are two useful lessons from the Titanic that can apply to software:
1) Marketing that you are super duper and special is meaningless if you've actually built something terrible (the Titanic was not even remotely as unsinkable as claimed, with "water tight" compartments that weren't actually watertight)
2) When people below you tell you "hey, we are in danger", listen to them. Don't do obviously dangerous things while making zero effort to mitigate the danger. The danger of Atlantic icebergs was well understood, and the Titanic was warned multiple times! Yet the captain still had inadequate monitoring, and did not slow down to give the ship more time to react to any threat.
The one hangup with "listen to people warning you" is that such warnings produce enough false positives to create a boy-who-cried-wolf effect for some managers.
Yes, that's true. So the hard part is to know who is alarmist and who actually has a point. In the case of NASA the ignoring seemed to be pretty wilful. By the time multiple engineers warn you that this is not a good idea and you push on anyway, I think you are out of excuses. Single warnings not backed up by data can probably be ignored.