The beauty of software is that it can be replaced or repaired quickly. If a power plant gets utterly destroyed, it takes months at a minimum to build a new one. If a piece of software gets corrupted, you load from backups or buy a new copy. Downtime is minimal even for the worst-designed system.
That's not to say disasters can't happen, but they will be limited in time and scope so long as there's enough money to throw at the problem, and if the amount of money-throwing ever becomes too much to swallow, then we might finally see some widespread solid security practices.
Honestly, simply keeping systems updated would mitigate most of the potential "devastating" attacks. The reason the recent ransomware attacks got as far as they did is a lack of funding and will: it's largely cheaper for organizations to let themselves get pwned than it is for them to protect themselves.
> If a piece of software gets corrupted, you load from backups or buy a new copy
Software is cheap, as you say. Data isn't. The piece's author indicated that the influenza-of-the-month was launched via tax software, which generally isn't interoperable. If there is no reasonable replacement, what do you do?
Heterogeneous systems are more expensive to operate, require more expertise, and cause more compatibility problems than monocultures. But they also don't all die at once due to the same bug.
Also, just because devices are on the same network does not mean they're homogenous. A Windows vulnerability won't take out a Linux server or an IoT webcam. For everything to truly "die at once due to the same bug" you'd need a network-layer attack. Compromising the internet protocols themselves, even if possible, would cut off the attacker as well, so that leaves denial-of-service attacks, which are common. But services like Cloudflare already defend effectively against such attacks, and even compromising an entire major cloud platform (as in the author's AWS hypothetical) would simply result in the cloud provider, as well as other interested parties, pouring all possible effort into fixing the problem as quickly as possible.
Imagine that your monoculture crop could develop immunity to a new disease within hours, days or weeks, with immunity developing faster for more serious diseases. Then all you need is a big enough field to absorb any potential losses.
Well, yes. Just like, after a house fire, you rebuild and try to carry on. It is still catastrophic.
> Also, just because devices are on the same network does not mean they're homogenous.
Of course not. The point is that there are many pressures towards running homogeneous systems: easier to hire for, easier to manage, fewer support systems to run, fewer interop problems, bigger vendor discounts, and so on.
These pressures are hard to resist, but we have to get better at running heterogeneous systems, because without them your entire farm can burn down before you notice.
Nothing you say is wrong, but "it's just software" doesn't mean these aren't catastrophes.
The author seems to be assuming we could enter a world where insecure technology is taken offline all at once by some random hacker(s) and remains in that state long enough to completely destroy economies and institutions. Frankly, the system just isn't that brittle; if it were, it would have failed long ago.
It's like looking at Hurricane Katrina, which, while catastrophic, was never an existential threat to the US, and saying, "now imagine if Hurricane Katrina happened everywhere, every day!" without considering whether such a thing is possible or likely in the first place.
As for heterogeneous systems, what do you think the cloud is? Sure, it may look homogeneous to users, but that's all abstraction over what is a VERY heterogeneous system under the hood. Having independent, redundant subsystems is ideal for reliability, but at the end of the day I don't see the need for all the frightful abstractions. The farm isn't going to burn down without anyone noticing; there are simply too many powerful interested parties and too much built-in resiliency.
I'm all for building in more resiliency and redundancy to prevent the catastrophes you mention, but the author takes a couple of major security incidents and spins them into apocalyptic techno-panic.
The thing you fail to grasp is the resilience of physical records. Imagine Germany trying to take down England's records during World War I, one hundred years ago. Destroying merely 3-5% of all relevant documents would have required coordinated strikes by hundreds if not thousands of arsonists, all of whom would need to be part of an even bigger spy network. That's for England proper; you might be able to scale it up to Scotland and Wales, but it would be completely unimaginable to pull this off across the whole British Empire.
Today, it might be harder to take down one hospital, but once you have gone that far, scaling it up to the national level should be pretty feasible.
> Imagine that your monoculture crop could develop immunity to a new disease within hours, days or weeks
You've never opened a priority bug with one of the big software companies, have you?
Not to mention that the insecure institutions are insecure in different ways. Whatever method is used to take down one hospital is unlikely to work on the next, although standardized national healthcare systems like the NHS might be more vulnerable to such things.
Opening a priority bug is an entirely different animal from responding to an active attack. If something were able to shut down AWS, Google, a bunch of hospitals, or any other critical service, it would get immediate attention and reaction. Humans become incredibly productive once shit hits the fan.
If people were willing and able to keep proper digital records, the Cloud would not exist. Actually, most of the technology stacks in use today do not make any sense until you consider the fact that a very large segment of the market wants all the goodies the IT fairy godmother can provide, but is too damned stingy to pay for even 10% of the cost.
Your characterization of banks is correct, but irrelevant. In many ways they are the perfect IT customer: deep pockets, an internal culture that values attention to detail and rational risk assessment, appreciation of external expertise, etc. Most organizations are nothing like this.
Healthcare IT, in particular, is the stuff of nightmares. A culture of bikeshedding; excessive regulation of what systems ought to do combined with borderline criminal negligence of the implementation details; reliance on obsolete OSes that cannot be updated anymore; needlessly large attack surfaces... do I need to say more?
Except it isn't, because all these services are fluff. If Facebook went down for a month, by the time it came back everyone would have forgotten about it, and might even have forgotten social networking as a concept. If Uber were offline for a day, everyone would shrug, download Lyft, and one ride later would have forgotten Uber ever existed.
Of course that's a Y2K-type disaster scenario, but it's a sad fact that there are actors seeking the capability to militarize hacking on a grand scale, and others who seriously and desperately want to subvert the dominant paradigm and revert to older ones (Aleksandr Dugin in Russia being a good example, though fortunately not a very influential one). In a way, the US has yet to fully recover from 9/11, for example: the country has been in a defensive military posture ever since, suffered the economic equivalent of a heart attack about seven years later, and has now undertaken a course of international isolationism and domestic policy that seems perverse by historical standards.
I don't disagree with you on the general resilience of software and networked information structures, indeed I'd say they're the best hope for our society and (contrary to the arguments of this writer) that maybe we should be hurrying to make our political and economic structures work more like the internet does, so that when they're damaged society can route around the damage rather than being paralyzed by it. What if, for example, we were able to do away with legislatures and the corruption they engender, and find a way to manage our legal codes like Wikipedia or Github projects at their best?
But while we grope towards more responsive political and social structures, we have to deal with the reality of high informational interconnectedness coupled with extremely rigid, asymmetry-maximizing power and control structures that are mostly hierarchical in nature. The vast majority of our organizations, whether governmental, institutional, or corporate, are hierarchical and pyramidal, with a very small number of executive actors exerting operational control that then propagates downward through the organization. Even if a firm relies on an internal culture or decision model that's more cellular or distributed in nature, those legal structures of control still matter. It's an open question how well society can function if there's an attack on those critical structural elements.
With the advent of machine learning, is it so hard to conceive of a program that can seek out the critical actors within a corporation based not on things like their PII or job descriptions, but simply on the volume, frequency, and centrality of their information traffic patterns? It might not go after the CEO or upper management at all; it might go after the paralegal of the smartest lawyer in the legal department, the person in the logistics department who hasn't called in sick in 10 years, and the manager of the company canteen, whose steadiness and reliability are critical to organizational health precisely because they've become taken for granted by everyone who interacts with them. Right now cyberattacks appear (to my uneducated eye) to be launched and evaluated in terms of scale and intensity, but it's only a matter of time before they evolve a preference for criticality and simultaneity.
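To make the traffic-analysis idea concrete: the "centrality" an attacker would look for is a standard graph measure. Here's a minimal sketch in Python using Brandes' betweenness-centrality algorithm on a toy, entirely invented communication graph (the node names and edges are hypothetical, just to illustrate that the broker between two groups scores highest, not any title in the org chart):

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm: betweenness centrality for an
    unweighted, undirected graph given as {node: [neighbors]}."""
    bc = dict.fromkeys(graph, 0.0)
    for s in graph:
        # BFS from s, tracking shortest-path counts and predecessors
        stack, queue = [], deque([s])
        pred = {v: [] for v in graph}
        sigma = dict.fromkeys(graph, 0.0); sigma[s] = 1.0
        dist = dict.fromkeys(graph, -1); dist[s] = 0
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # accumulate pair dependencies in reverse BFS order
        delta = dict.fromkeys(graph, 0.0)
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Hypothetical "who talks to whom" graph: a legal clique,
# a logistics pair, and one paralegal bridging them.
graph = {
    "partner":   ["lawyer", "associate", "ceo", "paralegal"],
    "lawyer":    ["partner", "associate", "ceo", "paralegal"],
    "associate": ["partner", "lawyer", "ceo", "paralegal"],
    "ceo":       ["partner", "lawyer", "associate", "paralegal"],
    "paralegal": ["partner", "lawyer", "associate", "ceo", "clerk"],
    "clerk":     ["paralegal", "driver_a", "driver_b"],
    "driver_a":  ["clerk"],
    "driver_b":  ["clerk"],
}

bc = betweenness(graph)
# The paralegal ranks first: all traffic between the two
# groups flows through them, not through the CEO.
for name in sorted(bc, key=bc.get, reverse=True):
    print(f"{name:10s} {bc[name]:5.1f}")
```

Nothing here needs job titles or PII as input; the ranking falls out of the traffic pattern alone, which is exactly what makes the scenario above plausible.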