The Stuxnet worm may be the most sophisticated software ever written (quora.com)
1493 points by graposaymaname on May 18, 2018 | 481 comments

I'd argue that Google Search is much more sophisticated than Stuxnet. Windows is much more sophisticated. Linux is more sophisticated than Stuxnet. The list goes on.

We tend to ignore the sophistication of things we are familiar with, and hype those that surprise. But that's not a fair measure of anything.

In my view, the sophistication is implied by the breadth of expertise required to put the whole thing together. Google Search and the OS landscape are for sure broad and sophisticated. However, their development was accomplished by computer scientists.

In order for Stuxnet to be effective, it was necessary to employ expertise in:

- Uranium enrichment methods and processes

- Capital equipment control systems and their development environments

- Theory of operation of centrifuge machines

- Corporate espionage of some sort

- Organizational management skills that can pull all that together

- A deep understanding of the operating systems referenced above

But do those things really contribute to the sophistication of the software? For example, imagine some code written with no understanding of uranium enrichment:

    const int CENTRIFUGE_RPM = 500;
And then some other code written with a deep understanding of uranium enrichment:

    const int CENTRIFUGE_RPM = 1203;
Can you really say that the second bit of code is more "complex"? Same goes for stolen driver signing keys and some of the other things mentioned in the post.

Other large software projects like operating systems or Google search involve much more complex software concepts which I think is the primary thing that should be measured when discussing the sophistication of software.

>Can you really say that the second bit of code is more "complex"?


Complexity in the sense discussed is related to the domain knowledge (including CS knowledge) required for the program to be written and work well.

Else even a trivial BS program could be very complex, just sprinkle it with gotos and unnecessarily convoluted code...

This is such a powerful distinction that I feel it should help us rethink language paradigms. Complexity is not (just) the complications one can impose by construct or the involutions required of one's algorithms; it's the overall real-world system your code addresses.

Simple programs which are coded simply may address complex phenomena to complex ends--perhaps that's even the ideal?

You might enjoy Fred Brooks's essay "No Silver Bullet", where he distinguishes between "Accidental Complexity" (basically, complexity created by software engineers when implementing a solution) and "Essential Complexity" (complexity that arises because software is written to solve some real world problem and the world is complex).

Most people perceive complexity as things they don't understand. In that case, complexity will be relative.

> Most people perceive complexity as things they don't understand.

I don't think this is true. For example, as a math teacher, I couldn't do a very good job predicting how easy or difficult students would find particular problems. But I could easily predict which problems would be easier and which would be more difficult. I could do that even though I personally understood all the problems.

I don't think difficulty is complexity. For example, the bitcoin mining protocol complexity is the same but the difficulty goes up or down.

I'll attribute difficulty to the energy required to resolve a system. For example, pulling a weight: the complexity of the action is the same, but the difficulty depends on the weight being pulled.
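The Bitcoin example can be made concrete with a toy proof-of-work sketch (a hypothetical stand-in, not Bitcoin's actual SHA-256 scheme; `toy_hash` and `mine` are invented names): the code is identical at every difficulty level; only the target number changes.

```c
#include <stdint.h>

/* Toy "hash": a simple integer mixer standing in for SHA-256. */
static uint32_t toy_hash(uint32_t nonce) {
    uint32_t h = (nonce ^ 0x9e3779b9u) * 2654435761u;  /* multiplicative mix */
    h ^= h >> 16;
    return h;
}

/* Find a nonce whose hash falls below `target`.  Lowering `target`
   raises the difficulty (more trials on average), yet the code --
   its complexity -- is exactly the same. */
uint32_t mine(uint32_t target) {
    uint32_t nonce = 0;
    while (toy_hash(nonce) >= target)
        nonce++;
    return nonce;
}
```

The point of the sketch: `mine` is the same few lines whether the target makes it run for microseconds or for hours.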

Complexity is difficulty of understanding. In the context of mathematical problems, that is the relevant kind of difficulty.

It seems you’re vastly misusing the words and their contexts here.

Sure, I suppose you'd just need a good definition for complexity. Notions like computational complexity have clear definitions, while what I think you're describing might not. Or maybe it would require some thinking and be valid in some limited regimes of "real world" effects, as you call it.

Something something about simple rules being able to describe complex behaviour. Example: you can describe a flock of birds in motion around an object with 2 or 3 rules.

Complex rules yield stupid results. Example: tax codes in most countries.

Must be a quote but I wasn't able to find a source for it.

The problem with simple rules is the volume of computation. Theoretically you could write a tax code using quantum mechanics, but good luck calculating your tax each year (or before the heat death of the universe).

When systems get too complex to simulate from first principles, we have to resort to inductive reasoning--observe the system and then create rules as we see a need.

Yes the resulting rule set is a mess, like our tax code. But the physical system that the U.S. federal tax code (for example) covers--the United States of America--is mind-bogglingly complex.

We have trouble computationally simulating more than a certain number of neurons... there are billions of neurons in each human brain, and there are hundreds of millions of human brains interacting in the U.S. This does not even get into other physical phenomena like surface water or mineral distribution.

The results are stupid because we are too stupid to understand and analyze the system we're trying to describe and manage.

That something something is actually Agent Based Modeling / Simulation.

Back when I was in academia I used to develop ABMs to represent the behaviour of complex systems with a simple set of rules of agent action and interaction.

The game of Life is the quintessential example of that.
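As a concrete sketch of that idea (simple rules, complex behaviour), here is a minimal Game of Life step on a small wrapping grid; the entire rule set is the single expression at the end.

```c
#define W 8
#define H 8

/* One Game of Life generation on a wrapping (toroidal) grid.
   The complete rule set: a live cell survives with 2 or 3 live
   neighbours; a dead cell becomes live with exactly 3. */
void life_step(int in[H][W], int out[H][W]) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int n = 0;  /* count the 8 neighbours, wrapping at edges */
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    if (dx == 0 && dy == 0) continue;
                    n += in[(y + dy + H) % H][(x + dx + W) % W];
                }
            out[y][x] = (n == 3) || (in[y][x] && n == 2);
        }
}
```

A three-cell "blinker" flips between horizontal and vertical forever under these two rules, and gliders, oscillators and self-reproducing patterns all fall out of the same one-line expression.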

Stringing together independent pieces doesn't produce a significant rise in complexity.

For instance, the payload which specifically looks for uranium centrifuge hardware is independent of the worm which carries the payload. They can be developed separately, by different people, and combined.

That specific worm could carry a payload to attack anything.

Or, that specific payload could be carried by any worm.

There is next to no coupling between "worm" and "payload".
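That separation can be sketched as a narrow interface (purely illustrative, with invented names; nothing here resembles Stuxnet's real code): the carrier is parameterised by a payload it knows nothing about.

```c
/* Illustrative sketch only: the "worm" is parameterised by an
   arbitrary payload; neither side depends on the other's internals. */
typedef int (*payload_fn)(const char *target);

static int centrifuge_payload(const char *target) {
    (void)target;   /* a real payload would act on the target here */
    return 1;       /* report that it fired */
}

static int inert_payload(const char *target) {
    (void)target;
    return 0;       /* does nothing */
}

/* Carrier: visits hosts and runs whatever payload it was given. */
int deliver(payload_fn payload, const char *const hosts[], int n) {
    int fired = 0;
    for (int i = 0; i < n; i++)
        fired += payload(hosts[i]);   /* same carrier, any payload */
    return fired;
}
```

Swapping `centrifuge_payload` for `inert_payload` changes nothing in `deliver`, which is the "next to no coupling" point.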

Agreed. In computer science the smallest lever can move the largest mass and smaller levers are not inherently less sophisticated.

agreed. upvoted.

> const int CENTRIFUGE_RPM = 1203;

As the linked article points out, it wasn't just raising the speed; it was raising it in a subtle enough way to ruin the process while other experts routinely monitored the system.

Most importantly it worked very successfully. With Windows, Google search, or most of the others mentioned here, they have had a huge number of problems. The word used was "sophisticated." I think that also implies some level of near-flawlessness in the end result.

... except it was discovered and widely publicized.

After it had dismantled a whole country's nuclear weapons program...

7 years after deployment

There is no telling. It could still be out there.

It was found, and now everyone is using that code base to iterate on new weapons.

99.99% of the time, Windows works just fine for me.

Stuxnet only needed to work once.

It worked multiple times. And it needed to propagate, undetected, for months until it made its way into the nuclear facility.

It didn’t just work once.

I think vkou might have been talking about the precision of Stuxnet.

The complexity or quality of a piece of software does not necessarily say anything about the complexity of the problem it solves.

so true. upvoted

The sophistication of a piece of code is not merely an attribute of its complexity.

Else a program with tons of accidental complexity (a badly written program by an intern) would be considered just as good as a program with huge essential complexity (a 10-line program that takes tons of domain and/or programming knowledge to write)...

youre right. upvoted

There was a way to make your point politely and be taken seriously. This was not the way.

You mean the parent was being sarcastic? If so, it went over my head.

I think the number of zero-days included in Stuxnet is an important factor in making it sophisticated and complex.

The second piece of code is not more complex, but it is (presumably) a lot more sophisticated.

The fact that I had to prefix that with "(presumably)"—i.e. I can't actually tell using my own expertise—is evidence of that.

Have you written motor control software before? If you haven't, that might be why you can't tell. Whenever hardware is involved, with perhaps the occasional exception of GPUs and workstation CPUs, I've noticed people's intuitions get a lot less reliable -- it's sort of like looking up the programming abstraction tower: lexical closures with higher-order functions to compute derivatives can seem awfully sophisticated to someone who's never seen anything like it.

Of course, if the sophistication is more about what they needed to know in order to break the things (and make that code change), then, taking this subsystem by itself, that's either far less than or roughly the same as what they'd need to know to build and operate their own centrifuges. Far less, if they only needed to focus on the one part of the process (motor control) that would cause problems -- which might amount to a brief consultant call with their own nuclear physicists and engineers; I don't know, nuclear science details seem as mysterious to me as high-level language details might to impoverished programmers. About the same, if they knew everything the Iranians knew about the systems (did we ever find out whether they got all the blueprints and so forth and built replicas for end-to-end testing?) plus a bit extra on how and where to make things break without easily being detected.

Anyway how sophisticated can they really be when they didn't even use source control? (Old joke... https://news.ycombinator.com/item?id=4052597)

Uh, that’s one small but important component of Stuxnet. The complexity is in the delivery mechanism, and the way it disguised itself, and the way it actually broke the centrifuges.

upvoted thanks

From https://en.wikipedia.org/wiki/Sophistication

> Sophistication has come to mean a few things, but its original definition was "to denature, or simplify". Today it is common as a measure of refinement

So no, it can in many cases even be the precise opposite of complexity.

It actually originally comes from "sophistry", which was an ancient Greek discipline of wisdom and excellence. I would generally associate the word with a high level of complexity that has been expertly reduced and refined to an elegant quality.

The sophists, as you say, were ancient Greek teachers.

But sophistry now means something rather different: using subtle, specious reasoning to deceive.

Typically, different words refer to different things. Most often, words considered synonyms actually refer to slightly different things.

>Can you really say that the second bit of code is more "complex"?

Yes. Take fastinvsqrt() for example. Cleve Moler learned about this trick from code written by William Kahan and K.C. Ng at Berkeley around 1986.

  float fastInvSqrt(float x) {
    int i = *(int*)&x;
    i = 0x5f3759df - (i >> 1);
    float y = *(float*)&i;
    return y * (1.5F - 0.5F * x * y * y);
  }
Simple instructions, VERY complex code. Not as complex as this one, though, which took almost 20 years to come about:

  float fastInvSqrt(float x) {
    int i = *(int*)&x;
    i = 0x5f375a86 - (i >> 1);
    float y = *(float*)&i;
    return y * (1.5F - 0.5F * x * y * y);
  }
Chris Lomont says "The new constant 0x5f375a86 appears to perform slightly better than the original one. Since both are approximations, either works well in practice. I would like to find the original author if possible, and see if the method was derived or just guessed and tested."

A model aircraft can be simple, but understanding the principles behind its design can be hard. IMHO, these two pieces of code are extremely simple in terms of logic, instructions and computations. But they are sophisticated, and the second is even more sophisticated than the first.

Root of the debate: words are not well-defined.

> But do those things really contribute to the sophistication of the software?

> Can you really say that the second bit of code is more "complex"?

I don't think you should equate complexity with sophistication.

I, personally, would differentiate between complex and sophisticated.

That is just one line of code, sure. But I can't imagine what it took to get that line of code there, and everything that comes with that. How many people were involved, PhD's, years of experience in a range of fields, and not just years of experience in any field but experience in fields like espionage.

My uneducated brain would still put "most sophisticated software ever written" in the hyperbole box, but even then I'm hesitating.

yeah. in order to agree with that "most sophisticated software" claim i think he'd need to compare it to some other candidates for that title.

Hell, sure yes. The complexity is in the data. At the end of the day, it is all 0s and 1s. It is the pattern/effect that matters.

Wouldn't the people who know the physical things just write requirements for those farther on down the chain?

The threat analysts say, we need to destroy Iran's ability to make nuclear weapons. The nuclear weapons specialists say, the part where we can best do that is by somehow breaking their centrifuges. The centrifuge technician they call up says, "well, x RPMs will really ruin those things. And it would be hard to tell if they did it like this..." Then the software guys make the code that ruins the centrifuge, and the red team incorporates it into their fancy worm, with specs on what exactly to look for.

Ultimately, it was kind of a failure in that anyone found out about it. Maybe there were better programs, and because they were better we never heard about them at all. But still it's pretty amazing :)

The key part is that you have to bring all of those all together. In hindsight it might be straightforward but if you had a blank slate, how would you approach the problem of "stop Iran from refining Uranium"?

To me the most surprising result would be if it cost less than bombing the nuclear facility. At $100K per bomb, Stuxnet looks affordable, plus all the expertise and other attack vectors you get from piecing it together.

$100K for a bomb? I have no idea what bombs really cost, but if we go with that number, they could have dropped a lot of bombs for that price. One junior engineer working for a year costs that much. We know that expertise in a lot of fields existed, that implies a number of engineers.

I'm going to guess a bomb is cheaper. Of course a bomb has a lot of other disadvantages which is why it wasn't used.

One particularly expensive component of Stuxnet is deniability. Although the commonly accepted theory for Stuxnet's invention is "a state actor", specifically the United States, there's no proof of that at all. And conjecture without proof poses no threat to the US government.

If the government were to, on the other hand, bomb Iranian nuclear facilities, one small mistake in the plan could ruin their chances of deniability, bringing down international condemnation on the US.

>and deep understanding of the operating systems referenced above

I think this understates it; it required a deeper understanding of the vulnerabilities of those operating systems than anyone else in the world had, including the creators of the operating systems.

Well, in the case of Windows, I recall that maintaining backward compatibility with a variety of applications required a knowledge of the resource demands of each of those applications, with those applications each operating in a different domain. Similarly, creating memory allocators is something of a "black art": it's a matter of writing a generically good allocator, but one which doesn't generate fragmented memory in "normal usage patterns", and then you have to learn what those normal usage patterns are, which involves understanding however many applications.

So the question of "sophistication" is both subtle and difficult to call.

Edit: And the production of an algorithm that's a conglomeration of ad-hoc processes might qualify as another sort of sophistication; see "the hardest program I ever wrote":

http://journal.stuffwithstuff.com/2015/09/08/the-hardest-pro...

Most of the bullet points you've listed can be summed up as "business logic." I'm sure the Stuxnet programmers worked with physicists and industrial controls specialists.

Developing software for, say, jet engines requires sophisticated knowledge of jet engines, which is probably about equally complex. But it's manageable because programmers work with engineers who are subject matter experts.

Or just like: how do you mess up a centrifuge controlled by SCADA without them knowing? Just change the speed and report another speed, done.
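That "change the speed, report another speed" idea is simple to sketch (entirely hypothetical structure; real PLC code looks nothing like this; the 1064 Hz and 1410 Hz figures are from public reporting on Stuxnet):

```c
/* Hypothetical sketch of "change the speed, report another speed".
   The monitoring layer only ever sees reported_hz. */
typedef struct {
    double actual_hz;     /* frequency the drive is really commanded to */
    double reported_hz;   /* frequency shown to the operators */
} DriveState;

void sabotage_set_frequency(DriveState *d, double hz) {
    d->actual_hz = hz;        /* the damaging value, e.g. 1410 Hz */
    d->reported_hz = 1064.0;  /* always echo a plausible nominal value */
}
```

The hard part, of course, is everything around this: getting such a lie into the control loop, keeping it consistent across every readout, and doing so undetected.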

You don't need to know classical mechanics to use a bike, or know about internal combustion engines to use a car.

Then think about the expertise to put together a self driving car... From sensors to ML...

The breadth of the expertise is: writing the worm, plus the domain knowledge of a nuclear engineer. Period. You could argue that the control software of those centrifuges is as sophisticated as the worm, since it requires knowledge in two separate domains: writing software and nuclear engineering. Same goes for any ERP software, which requires the contribution of software experts and domain experts.

I think Windows and Stuxnet are sophisticated in different ways.

Windows has to cover a huge area and a lot of "known" unknowns and be able to recover (somewhat) reliably. Stuff breaks, you get weird error messages, that driver for your Wi-Fi never really worked right, but at the end of the day you have a computer that works pretty well, and that's quite remarkable. The same is of course true of Linux and other operating systems.

Stuxnet is a hyper-specialized piece of software (malware) that cannot fail or it loses its purpose. The authors clearly knew they had to have multiple fallbacks for every step of the process, but I find it very impressive that it reached its end goal successfully and without being discovered. A lot of software (including malware) breaks because of regular software bugs, environments that differ from the expected, interference by the user; the list goes on. For Stuxnet to have avoided all of those, that is quite sophisticated.

I agree.

It's the most sophisticated piece of malware, that's for sure (at least counting the ones we know of).

But calling it the most sophisticated piece of software is too big of a stretch.

That said, other answers to this question include what we would traditionally consider contestants (like the Linux kernel); it just happens that the submitter decided to submit this specific answer. I don't know whether this was the top answer before it exploded here, but it sure is now.

> It's the most sophisticated piece of malware, that's for sure (at least counting the ones we know of).

Isn't Stuxnet a part of a family of similar nation state malware that would also include Flame and Duqu?

They are a family, as in, all of them were almost certainly created by the same group.

Symantec said that Duqu is "near identical" to Stuxnet. As for Flame, Kaspersky[0] initially said that it bears no resemblance to Stuxnet, and then later on discovered that they even shared a zero-day in their early versions.

From my understanding, I don't necessarily consider them different software; more a single piece of software plus forks by the same group for different purposes and with different zero days.

Stuxnet just happened to be the one that got to be the most popular one, for a number of reasons (most destructive, attacking the most sensitive targets, the one that got out of control and spread outside of Iran uncontrollably, first to be discovered...), so I refer to Stuxnet as the original one and Flame and Duqu as more of forks than completely different pieces of software.

Which one is more sophisticated between the three would be the same as if we tried figuring out which Linux-based OS is the most sophisticated, except that in this scenario, we only have 3 Linux distros (maybe four with Gauss) and they've all been created by the same group. There's really no point in trying to compare their sophistication.

[0] Before people bash on me for using Kaspersky as a source, Kaspersky, Iranian CERT and a university in Bucharest were the ones that initially discovered Flame, and Kaspersky's the group that published the first detailed analysis on Flame.

Off topic, but your defense for referencing Kaspersky makes me wonder why people would see a problem with it? I'm not familiar with the field and don't know who's who.

Only if you're on "Team USA". Looking on from outside, it seems to me pretty obvious that a Russian security company might provide useful insights on US malware operations that a large US security company would be less inclined or would not immediately report on.

Otherwise it's just your basic mudslinging; Both Kaspersky and US security companies are likely to do their governments favours, in particular by selectively not reporting things, both willingly and under pressure. If you're a US citizen working for a US security company and you'd stumble upon a US malware operation that appears to be doing something benign, such as preventing nuclear whatnots, you might be disinclined to report on it for fear of ruining a US malware mission--and even look past the fact that they're using such a risky, dangerous type of software to do it (being a worm/virus, remember that Stuxnet also disrupted and got into places that weren't targets).

Back when Stuxnet was active, I closely followed the story, and the existence of the (airgap-hopping) virus was discovered long before people got any solid ideas about its purpose. When finally the first reports came that the special control software checked for machines running on a frequency that was only used in either some Finnish industrial plant or these Iranian refineries[0], the first reports on this did not come from a US security company.

[0] This part is a bit vague sorry. I wish I had sourced/fact-checked this part of the story better, years ago. There was so much going on.

They're a Russian company and semi-recently Trump banned their software from government agencies.

People theorize that they're controlled by the Russian government, but I've never come across any evidence that they're anything other than a top-tier security company.

They have done some fairly bold moves in the past though, like cleverly calling out other AV companies that were copying their detections [0], and kind of embarrassing the NSA [1] when an NSA employee took their malware/cyber weapons home to a PC running Kaspersky AV, which detected the malware and sent it back to Kaspersky's servers for analysis.


[1] https://www.bleepingcomputer.com/news/security/nsa-employee-...

In Kaspersky's defense, they have started making their source code auditable for certain customers. Kaspersky is well aware of how they are perceived as a company, and they are aware that if anyone ever traces any of their activities back to the KGB, it's game over for them. I can't pretend I trust Kaspersky 100%, but I can see why others might.



"I've received feedback from people who were just focusing on the question why other anti-virus companies would detect a clean file we uploaded. And I can only repeat as I did in the blog: This could have happened to us as well," Kalkuhl explained."

Well, he clearly says the test was to expose the "negative effect of cheap static on-demand tests", not that others copied from them, because this seems to be routine and they do the same.

> They're a Russian company and semi-recently Trump banned their software from government agencies.

I know it's popular to bash Trump, but it was the DHS that banned the software, not Trump:

In a binding directive, acting homeland security secretary Elaine Duke ordered that federal civilian agencies identify Kaspersky Lab software on their networks. After 90 days, unless otherwise directed, they must remove the software, on the grounds that the company has connections to the Russian government and its software poses a security risk.

Which came after the GSA removed them from the list of approved vendors:

The directive comes months after the federal General Services Administration, the agency in charge of government purchasing, removed Kaspersky from its list of approved vendors. In doing so, the GSA suggested a vulnerability exists with Kaspersky that could give the Kremlin backdoor access to the systems the company protects.


I say this without having seen the code base for either, but I'd be surprised if Stuxnet's code base was anywhere near as large or with as many moving pieces. Still, it's incredible to imagine the knowledge base that needed to go into Stuxnet to get things off the ground.

Google Search was originally written by two guys in graduate school and has been refined and rewritten many times since then. I'm sure the code base is complicated and undoubtedly some of the greatest minds in software engineering and computer science have used it. The same goes for Linux, which was written by one guy and grew from there.

On the other hand, Stuxnet isn't something that a few brilliant graduate students could have put together. To even get this thing off the ground, you need people with backgrounds in nuclear physics and/or chemistry, operating systems specialists, people with knowledge of industrial equipment, networking experts, an espionage network and competent management to pull it all together. Plus, you need to keep the whole project secret. Oh, and funding. Lots of funding.

I'd call that sophistication in that you can't even think about starting to tackle this problem if you're just two guys in a garage.

I think of it like this: if everyone who reads these comments on HN got together, we could engineer a very good OS.

I doubt we could come close to solving the problem of "stopping Iran nuke production without killing anyone or starting a war"

Without any kind of metric for “sophisticated” it’s all subjective anyhow. I like Stuxnet as an example - it’s devious and a true hacker approach, albeit as blackhat as they come.

i think for something to be sophisticated we are looking at how complex it is. this worm does nothing new in that regard (taking advantage of 0days, hiding, covering tracks etc.) it is no more sophisticated than a regular worm. quora is a fucking joke.

Personally, I think the Stuxnet worm is comparable to a Rube Goldberg contraption.

So you could ask: is an RG machine more sophisticated than, say, a computer? Maybe not on a strictly technical level, but again, without a metric, it's all about how we feel about it.

Anyways, I thought it was a great writeup that explains at least one aspect of what sophisticated software is, in a language most anyone could understand.

I think for something to be sophisticated we are looking at the metric that differentiates fine wines and cheeses from plebeian non-fine wine and cheeses.

If we can just capture that essence, we will wield the power of sophistication in our hands.

Oh, in that case it's just placebo, price, and primed expectations.

You mean how much money is charged?

Big != sophisticated. I’m not denying that there’s a ton of effort and features that go into windows, but operating systems are well known, and I’m sure most of the code powering windows is not all that sophisticated outside of some core components.

It’s sort of like comparing a skyscraper to an iPhone. Sure the skyscraper requires a lot more manual labor, but the iPhone is more sophisticated. It took ~80 years from when the Empire State Building was built to when the iPhone was built. The iPhone is more sophisticated but it’d still take more time and resources to make another Empire State Building.

Sorry if that’s a poor analogy- it’s the best I got right now.

You could debate this all day long for various values of "sophisticated." I think the author just meant some variation on "amazingly devious."

Let's not forget software used for extremely complicated and risky operations like the Mars rovers or the Rosetta mission; developers did some quite amazing things there with very limited hardware resources...

To me the primary difference is that the software you mention performs its tasks in the open with cooperative users.

Stuxnet installed itself without cooperation, hid itself perfectly and still completed its objective flawlessly against a hostile user base.

Windows certainly has more undiscovered Windows/Driver exploits in it than Stuxnet ever had!

There's sophistication of the domain, and of the code. They are independent.

For example - it might take years of research to develop a formula for calculating something, but the final code can be very simple one-liner.

This comparison is ridiculous. Searching the internet can be imperfect, it can be lossy, and there aren't any real consequences. We are talking about an ad platform after all. If you search for kittens and you get back 345 results or 101 results or 2345 results, does it really matter? No, it has no consequence to anyone.

I agree on the sophistication part, but I think you are missing out on the resources used in developing this.

The responsible party(ies) did not have access to the resources, manpower, or infrastructure available at Google, or even at an enterprise scale.

MariaDB and PostgreSQL. Amazing software that is open source for us to dive in and play with. FoundationDB was a recent treat.

>I'd argue that Google Search is much more sophisticated than Stuxnet. Windows is much more sophisticated.

Not to be rude, but it really doesn't sound like you read the article.

Especially when you claim that Windows is more sophisticated. Stuxnet had to get past all of Windows security, and did so by using not just one or two or three never-before-known flaws, but a bunch of them.

The code base for the International Space Station is probably also VERY complex

Probably not. Complex things break a lot. You want life-critical code to be as simple as possible, with proven correctness if possible. I bet you'll not find a single recursion in ISS flight code, and you'll not find anything without a known upper bound on its run time.
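That style is real: published flight-software guidelines such as NASA/JPL's "Power of Ten" rules ban recursion and require a fixed upper bound on every loop. A minimal sketch of the idiom (hypothetical names):

```c
/* Hypothetical sketch of bounded-execution style: no recursion, and
   every loop has a compile-time upper bound, so the worst-case run
   time of the routine is provable. */
#define MAX_SIGNALS 1024

int find_signal(const int ids[], int count, int target) {
    if (count > MAX_SIGNALS)
        count = MAX_SIGNALS;            /* enforce the static bound */
    for (int i = 0; i < count; i++)     /* at most MAX_SIGNALS iterations */
        if (ids[i] == target)
            return i;
    return -1;                          /* not found */
}
```

Deliberately boring code: the whole point is that its worst case can be stated and checked before launch.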


It is possible that code itself is not that complex, but the interaction between all modules certainly has a high level of complexity.

"In the International Space Station’s U.S. segment alone, more than 1.5 million lines of flight software code run on 44 computers communicating via 100 data networks transferring 400,000 signals (e.g. pressure or temperature measurements, valve positions, etc.)."

Several years later, and even with the code, researchers are not able to compile a complete list of what Stuxnet definitely does.

What I see here is that the word "sophistication" is misunderstood by a lot of people.

Stuxnet took control of multiple layers of complex production environments. There are numerous "0day" kits in the code.

It's not like a search engine or most other organized software projects, because there are logistical dependencies of the worm itself on those exploits. If it was a US-Israel effort (I think it almost definitely was, but who cares), then consider how much discipline and effort it takes to keep TWO government groups of hackers coordinated enough to keep those exploits fresh, whilst simultaneously building a dependable worm.

Another thing, a lot of the actual machinery and shit isn't very well known, and this is worth mentioning because it's not like you can go spin up an emulator for this shit to test out your massively devastating two-country worm on.

Stuxnet of course made the best of this by using lots of different exploits in different situations, giving itself the biggest attack surface it could; that's the low-hanging fruit anyway.

I think stuxnet doesn't impress people because maybe they think it's just a bunch of bugs in old shitty software, but it's so much more than that. It's bugs in software that only a few hundred or maybe a few thousand people have ever seen, much less pentested, on machinery that's rare and sometimes even unique to the location, the infrastructure of the place is based on rough intel at best, and oh by the way, your spy hackers need to coordinate with this other group on the other side of the planet.

Start brainstorming how you'd pull it off, and I think it'll become more impressive as you do.

Personally, I think it's the most incredible display of skill and prowess in malware thus far. The years I've spent disassembling, reversing, tracing, filtering, researching... A lifetime of hacking doesn't even knock the dust off of a project like that.

That is an utterly braindead assertion.

I wrote the quoted article about Stuxnet. And I've helped write multiple operating systems.

Your argument is not an argument. It's just a random assertion, made with no technical knowledge of either Stuxnet or how to write an operating system.

Stuxnet specifically took advantage of Windows's lack of sophistication in order to replicate.

Stuxnet changed history. Any "game of chicken" style equilibrium is broken if the probability that a nuclear actor's command and control will function drops below 100%. If there is even a 1% chance that when a Big Red Button is pushed the missiles fail to launch, the game becomes unwinnable. Simulations of imperfect information in dynamic brinkmanship, where both players are known to have advanced cyber capabilities, result in a single dreaded endgame: general nuclear exchange.

Thermonuclear Cyberwar


We have moved into uncharted domains. And herein lie demons. Past Rules of Engagement universally agreed upon regarding the use of kinetic weapons no longer apply. For wiser heads to prevail in the current global climate, the voice for peace must become the loudest one.

Rules of engagement for cyberspace operations: a view from the USA


After the first couple of sentences I came to the opposite conclusion: perhaps the greatest nuclear powers have all had their launch systems compromised already, and if people started pushing buttons nothing would happen. Comforting thought.

While a nice thought, that is only true if:

a) at least two of these secret security teams exist, funded by different political superpowers

b) the one team behind this worm was funded by a group insisting on global denuclearization.

Imagine if what you say is true for every nuclear power, except one (likely the one that is behind this worm).

Have we heard of any other enrichment facilities that have been targeted? There are still a ton out there[0].

[0] https://en.wikipedia.org/wiki/Enriched_uranium#Global_enrich...

> the probability a nuclear actor's command and control drops below 100%.

It's never been at 100% anyway. Read the book "Command and Control" if you are not convinced.

Since the probability is below 100% IRL, this is trivially seen to be false. Evidence: Here is a hand. It is not irradiated...

Yes, it has the potential to break the MAD equilibrium. It's very ironic to me that offensive technologies do not threaten world peace as much as defensive technologies.

MAD is a local maximum of peace, but only if we define "peace" to include "tense standoff". It appears to be the best we can do in the presence of overwhelmingly powerful offensive capabilities.

Imagine the different "peace" if instead we had overwhelmingly powerful defensive capabilities.

But everyone would have to have those, or else the first country to develop them would have an insurmountable advantage. A country that has both nuclear weapons and the ability to block all attacks including nuclear would rule the world, or at least dominate it without opposition.

Or, everyone would have to have a strong sense that nuclear launches weren't possible.

I'm thinking along the lines of grey-hat anarchists constantly attacking everyone's nuclear capabilities. "If nobody is super, everybody is super."

Hence the local maximum.

...as if “world peace” is a thing.

>It's very ironic to me that offensive technologies do not threaten world peace as much as defensive technologies.

ummm... what?

> If there is even a 1% chance that when a Big Red Button is pushed the missiles fail to launch the game becomes unwinnable.

Why does that 1% make such a big difference?

Very interesting comments, especially the conclusion that the voice for peace must be the loudest. Personally, I generally agree. However, what do you think of the current US stance of beefing up its defenses and probably its nuclear arsenal while others are not allowed such weapons? Won't it make them want these weapons even more?

That's all very well but theory and maths aren't too relevant when the man behind the button is an illiterate cretin.

The launch is a two-person process, and while in theory the second person is only there to verify the identity of the President, this has never been put to the test, and there is truly no way to tell what Mattis would do in a situation like this.

Removal from position due to medical incapacity is not only a real transfer of power mechanism but also fairly easy for the Vice President to do, assuming that the Secret Service agrees.

Remember, the Secret Service works for the US Treasury.

The problem is not everyone on the world stage is a rational game theory nerd that links to papers like we want them to be. Also 60 years of treating every other country as a box in a threat model to be manipulated against other boxes has been a completely unmitigated disaster, so maybe we should stop that.

> 60 years of treating every other country as a box in a threat model to be manipulated against other boxes has been a completely unmitigated disaster

Has it? The last 60 years have been relatively peaceful, by historical standards.

You'd want to ask all those who lost their lives in any of the numerous conflicts. Just because the big clash didn't happen (it nearly did, three times in 1983), that doesn't mean there weren't any conflicts. E.g., the biggest air war in history, in Laos, was even conducted in secrecy; African politics is just a single mess; etc.

Please consider orders of magnitude - we really do live in peaceful times.

Example: there were far fewer Cold War deaths than deaths from traffic accidents, let alone real wars.

WW2 killed 3% of all humanity: https://en.wikipedia.org/wiki/World_War_II_casualties

The end of the Han dynasty killed a large percentage of all Chinese people.

For perspective: https://en.wikipedia.org/wiki/List_of_wars_by_death_toll


Hm – Consider this piece (you may or may not appreciate the political perspective): https://www.globalresearch.ca/us-has-killed-more-than-20-mil...

In brief, it concludes that the US alone (there have been lots of conflicts without any manifest involvement of the US) has been involved since WWII in conflicts causing "the deaths of between 20 and 30 million people".

[Edit] You may also consider the ongoing war in Syria, which is not only already exceeding WWII in duration, but whose 500,000+ dead (the last consensual figure was 470,000, issued by the Syrian Center for Policy Research in 2016) also overshadow the total death count of the US in the WWII Pacific "theatre" (161,000 dead, including 111,914 in battle and 49,000 non-battle).

we ought to ask people whose family members have died of cancer lately how they feel about our so-called advanced medical science as well

The world also became a lot more connected, so we're exposed a lot more to wars and such abroad, and we've matured morally speaking and can see that all wars are unnecessary.

I thought I read that people said nearly the identical thing shortly before the start of World War I: that world economies were so intertwined and the peace so lasting.

Only in a western welfare bubble. Moral objection to war is something you can afford once your other security needs are taken care of.

> Only in a western welfare bubble. Moral objection to war is something you can afford once your other security needs are taken care of.

That's actually a false hypothesis of people in the 'western bubble' who haven't experienced war. Research I've seen shows that people in places that have experienced it, such as in Syria, are much more opposed to it.

And that bears out in Western experiences: Who created the UN, with the stated purpose to prevent another war? Who created (the ancestor institutions that became) the EU to prevent another European war? Who enforced the Geneva conventions and prosecuted war criminals after WWII?

The answer is, the people who had lived WWI and WWII. They knew far more of war than anyone today in the West, and they thought it was the worst scourge of humanity which must be prevented from happening again at almost any cost. Who are we to disagree?

Do note the same countries still wage war (and trade in realpolitik). What you're saying is not mutually exclusive with what I'm saying, it's on a continuum.

The western world has developed institutions that give it security. Once those parts of the world ravaged by war do the same, they too will enjoy peace and stability.

Well, we're undoing these institutions with an ever-growing fervor, as we appease the insecurities of a few rich and powerful people by handing them yet another election and yet another few units of currency on top of their almost unimaginably huge mountain of existing wealth.

Right, but it's difficult to establish those institutions when your country is ravaged by war and corruption...

For all the evidence we have, MAD has worked. Still here. Still no WWIII. At the beginning of the twentieth century it was looking like we'd have another world war every twenty years or so for the rest of time.

As best we can tell, nukes actually did end large-scale war. I would call that at least a partially mitigated disaster.

You're not wrong, but there were multiple incidents during the cold war where the US and Soviet Union came very, very, very close to a nuclear exchange, and it was only dumb luck (and sometimes the heroic actions of individuals who were in the right place at the right time) that saved us. We were lucky, and luck is not a plan.

The unwillingness of people to start a nuclear exchange is exactly the plan. In the examples you speak of, common soldiers and technicians refused to launch the missiles. It's noteworthy that the idea of a full-scale thermonuclear exchange is so horrifying that even soldiers who assume they are already dead and have orders to launch still refuse. I think that speaks a lot about the inherent goodness of humanity, but that's a different story.

I know it came very close, but the assumption was that it would come to the brink yet nobody would want to go through with it and begin open aggression. Seems dubious and risky, but once again it worked. And let's not discount the negative risk--a conventional world war with the Soviet Union, China, the US and Britain would have been so terrible.

With such a small data set we should still be dubious, but I don't know if we can consign it merely to luck. I think it was a good plan. Terrible publicity, though. Nightmarish. But so is war.

I think that weapons range, combined with either precision targeting or large numbers, gets that job done. Picture a world without nukes, but with lots of highly accurate ICBMs and SLBMs. The attacker can expect to be attacked, with the destruction being stuff like the Kremlin or Whitehouse or parliament building.

In the days of WWII, an attacker could rightly feel confident that there could not be an immediate response that strikes anything of importance. The attacker might even believe that such a response could not be possible ever in the future. Poland could be invaded without any realistic worry that Berlin would be attacked that same day, and a bit of optimism turns that into Berlin being safe.

Maybe so, I have no idea. I think they'd just hardened bunkers and go to war anyway. But really it doesn't matter; the genie is out of the bottle, so MAD is really the only plan available. Can't un-invent the bomb.

Actually, the bomb and its design and construction require so much tacit knowledge that we might be on the verge of uninventing it already.




I don't see how that argument is justified. Maybe the Marshall Plan and the European Community saved Europe from war. Maybe globalization and general civilization-wide wealth creation made it so that the elites' and commoners' businesses are less profitable in large-scale war. (Note that even today, wars happen in poor countries, not rich countries.) Maybe democratic evolution made the "send peasants to war" model obsolete, and maybe modern communication made it harder to sell the fascist lies that motivated WWII. Nuclear weapons didn't prevent Vietnam or Korea or the Middle East wars.

While my humble opinion is that MAD was effective, let's be careful not to infer causation from a sequence of events (the rooster crows and then the sun rises). And the events of 'MAD' and 'peace' are not in sequence: WWII ended in 1945. MAD wasn't an idea until the 1960s and not implemented in a treaty until the 1972 Anti-Ballistic Missile (ABM) Treaty, AFAICT.[0]

It makes more sense if you remember that nuclear weapons and delivery technology didn't reach the 'assured destruction' stage for awhile. Remember that in the Korean War, in the 1950s, General MacArthur was pushing to use nuclear weapons (IIRC); it wasn't as taboo then. Finally, remember that MAD applied only to the Soviet Union and U.S. (or the Warsaw Pact and NATO), while major international wars ended worldwide, for the most part. Remember that WWI and WWII were fought between future NATO members; the later peace between them wasn't due to MAD.

> At the beginning of the twentieth century it was looking like we'd have another world war every twenty years or so for the rest of time.

The victors of WWII were very concerned about that, and began planning to prevent it before the war ended. That resulted in the UN, the institutions that became the EU, a rejection of nationalism (as a significant cause of war), the spread of democracy and universal human rights as a peace-making policy (democracies generally don't start wars with each other), and U.S. leadership in the international order to maintain those things and to provide stability. My understanding is that those are the reasons for the relative but extraordinary peace. Here's a Churchill speech about it in Zurich in 1946 (the speech focuses on the future EU; remember he also was one of the architects of the United Nations):


(I'll also note that they seemed to have worked so well that now people take the peace for granted and are tossing aside the things that make it happen.)

[0] The best credible source I can find quickly. If you hit a paywall, access it via a search engine: https://www.britannica.com/topic/nuclear-strategy#ref1224926

EDIT: Added a detail

I partly agree with you but a less charitable interpretation is that we had a bipolar hegemony that prevented full-scale wars. Many regions experienced significant violence, often caused or abetted by Western powers that turned out to be less committed to democracy and human rights when they got in the way of power politics. (Edit - or other powers that barely bothered paying lip service to human rights)

Again, I broadly agree with you and would definitely prefer to see a continuation of the past 60 years over whatever's on the horizon, but let's not get too rose-tinted about Western benevolence.

I agree. The reason I didn't go into the detail you did was that I just had to draw a line on the length of the comment, for my sake and for the sake of the reader. I'm glad you added your comment.

Your timeline is wrong. Bernard Brodie came up with nuclear deterrence in 1946. The Soviets would have read his work or understood the implications. There was a reason they raced to get the bomb.

> Bernard Brodie came up with nuclear deterrence in 1946. The Soviets would have read his work or understood the implications. There was a reason they raced to get the bomb.

Mutually Assured Destruction is not the same as Brodie's nuclear deterrence AFAIK (which admittedly isn't much). In 1946, neither side could come close to assuring destruction of the other; the Soviets didn't have any atomic bombs until 1949, the U.S. didn't have the H-bomb until 1954, and of course neither had ICBMs. The best production rocket was probably the V-2.

I haven't actually read Brodie, I'm just going on my memory from my strategic studies class. But my recollection is that he more or less fully fleshed out nuclear warfare theory in 1946. The tech wasn't there, but the logic of the weapons was.

MAD is less a strategy than a reality. As long as each side has weapons that can't be credibly destroyed in a first strike, you have MAD, whether theorists explicitly call for it or not.

Though of course submarines make this easier to achieve in practice.

I might be wrong about this though, perhaps there were significant differences between Brodie's 1946 theory and later MAD developments.

Thanks. A couple things I don't think are accurate, based on my limited knowledge:

> As long as each side has weapons that can't be credibly destroyed in a first strike, you have MAD

With the significant qualifier that you need enough weapons to survive to completely destroy the enemy.

> MAD is less a strategy than a reality

I'm pretty sure that's incorrect. It was and is a specific strategy and implementing it was the reason for the ABM treaty and others - defensive weapons would make destruction less "assured". See the source I linked above.

The thing with defense is that even without the ABM treaty, we don't have an effective defense. MIRVs will always be cheaper than single shot ABMs, and their reliability is too low to rely on in the event of a second strike. That's what I meant by it being a reality.

People do of course adopt it as a strategy as well. But if effective defense tech existed I don't think the strategy would hold. The US abandoned the ABM treaty even without such technology.

As for your first point, it's true that MAD didn't really conclusively come into force until submarines. But there were efforts before then to maintain a second strike capability, such as keeping a certain percentage of bombers in the air, preparing them for fast takeoff, etc

Maybe not 100% assured second strike, but the basic idea was the same

Do we really have enough data to draw conclusions? A nuclear war could break out in ten years time and be orders of magnitude worse than WWII. That would invalidate MAD in an instant.

Nuclear weapons mean that your safety depends on the pragmatism and sanity of leaders and governments - not only of your own country, but of your enemies'.

The patterns of history suggest we are heading for nuclear war. Power (manifested as interest) has been present in every conflict - no exception. Every nation eventually gets the war it is trying to avoid - nuclear war too. Decision-makers delude themselves that the course they are on will not lead to annihilation, but it always does. World leaders are deluding themselves now. Read more at: http://www.ghostsofhistory.wordpress.com/

The scary thing about this is: Stuxnet is one of the "most sophisticated" pieces of malware we have discovered up until now.

Who knows what kinds of software are still out there quietly doing their thing in the shadows.

Not scared of what “we” are doing to “them”, but rather what “they” are doing to “us”... our power plants, dams, electrical grids, gas pipelines, traffic light networks, air traffic control systems...

The above comment makes no mention of who is doing what to whom. I think the idea of “us” doing things worse than I could imagine is just as scary as the other way round. You are indirectly part of the responsible bodies and may face the backlash without having been able not only to do, but even to know, anything about it. Imagine for example being a Russian citizen whose quality of life is diminished due to political/economic sanctions as a result of your country's espionage activity being revealed.

> Imagine for example being a Russian citizen whose quality of life is diminished due to political/economic sanctions as a result of your country's espionage activity being revealed.

I know quite a few Russians. Almost all act defensive over how people treat Russia as a politically homogeneous (evil) unit, when it's mainly a few oligarchs at the top. To the point of defending the political explanations espoused by state TV, which of course is heavily biased towards the narrative said oligarchs want the Russian people to believe.

Remind you of anything? I for one have stopped trusting Dutch news for "being honest with itself".

Don't forget cars, even in their current state they still have modems connected to the celluar networks.

What year did this start?

When we split the world into the "us" vs "them" then it causes issues like this.

Your comment doesn't specify the us or them so it can apply to virtually any group.

"The best ninjas are the ones you've never seen"

"The best liar you know is not the best liar you know."

We're quickly heading toward the age of the first Virtual WMD. The implications can be as wide as your imagination, but possibly worse than existing WMDs.

I'm having a hard time imagining a virtual WMD that is worse than the instant obliteration of millions of people.

You know what's worse than the instant obliteration of millions of people? The slow obliteration and starving of millions of people.

Imagine Venezuela, but much much worse.

Picture a society that doesn't know how to create institutions, conduct trade, and collaborate with the people around them without the aid of computers.

Now, I don't know if disabling their computers would result in an incredibly dysfunctional society that would starve, but it's not unthinkable. If it did, the suffering could be far beyond the instant obliteration of millions of people.

Who stands to benefit by destabilizing the western world to such a degree? Clearly some big players like Russia and China, as well as some smaller players can benefit from destabilizing the western world a little bit. But if they destroy it to the point where millions of people are suffering, they'll bring suffering on themselves as well. It seems to me that they're probably motivated to level the playing field and gain dominance more than completely destroying or starving people on a large scale.

To be clear, I wasn't making the case for a motive or even suggesting this was a plausible scenario. My point was death from a nuclear weapon is not the worst form of death.

Modern history is littered with examples of millions of people starving or being slaughtered because societies collapsed economically or politically.

> Who stands to benefit by destabilizing the western world to such a degree?

Who's limiting the conversation to the western world? Let's think beyond ourselves for a second. Wouldn't it be just as tragic if cyber attacks were used to destabilize other places in the world? Imagine an African country that has become entirely dependent on some sort of mobile money transferring platform. Maybe their neighbor launches an attack on that platform to destabilize the country for whatever nefarious reasons.

There are thousands of brilliant but poor people in the world...

> You know what's worse than the instant obliteration of millions of people? The slow obliteration and starving of millions of people

Yeah, I'm going to have to sort of disagree with you there. Once you are dead, you are dead. If you are starving, things can still change and you can still have free agency.

To be clear, I'm not comparing death to starvation.

I'm comparing death by obliteration to death by starvation.

Well it's about statistics rather than what an individual might possibly be able to accomplish.

Sure, maybe you'll find a way to survive a famine, but on average most will die because the math just doesn't add up. Not enough food for everyone. And it ends up killing far more than bombs and bullets, even nukes. Disease and famine are far worse than WMD when the numbers are in.

Well, I don't think my dead self would mind being dead, what with being dead and all. So I don't really see how any kind of suffering is better than death. In a way death doesn't really hurt you, since you stop living.

> So I don't really see how any kind of suffering is better than death.

So maybe we should round up all the poor people and gas them to put them out of their misery?

You joke, but from a certain perspective and within certain parameters, there are people who would find this acceptable, and even preferred. I recall the movie Solace (which wasn't great, but I digress) where the premise is that a serial killer has psychic powers that allow him to see others' futures. When he detects a future that is particularly horrible (disease, injury, etc) with no hope of survival, he makes sure their last moments are wildly happy and kills them painlessly with no warning.

If we were to regard the life of the average poverty stricken human as being _terrible_, then killing them painlessly and suddenly becomes less abhorrent. Of course, then we need to define criteria for whose lives are of sufficiently bad quality where sudden death is a superior option.

Some nihilists might say all humans satisfy that criteria. Even if you're a wealthy and generally happy person, you will become old and die. If you were suddenly dead at this second with no warning, you would not care - the only downside would be those who remained alive, who presumably would care. Let's recurse until no one cares.

And the universe moves along just the same.

I am not advocating for this at all, but I felt your comment justified some sort of explanation. And I have been thinking about these existential questions quite a bit recently.

Actually, in a capitalist country it might be easier to survive such an attack. If there is demand for a product or service, people and businesses will find a way to meet that demand. Millions of people working independently to satisfy their local market demand. It would probably hurt centralized socialist or communist countries more since it severs their control, surveillance, and communication mechanisms.

I agree that markets tend to buffer the effects significantly.

The problem is in times of crisis, the appreciation of market dynamics and rule of law tend to wane. Even if those things are intact, the flow of goods and services can be undermined by well-intentioned but misguided politicians.

My point was simple. Despite the systems of trade, a catastrophic shock in trade or production systems could literally kill millions in a way that is more brutal and horrific than instant obliteration.

I’m intrigued - how does that play out in your head? There’s a disaster causing social collapse but a free market for food remains. Demand outstrips supply so it becomes too expensive for many to buy. What do people do before they can go back to the land and sow their own food? What about areas with a lack of suitable available land (as referenced in a sister post by the potato famine)?

I honestly didn't invest too much time playing out scenarios out in my head, rather I was mentally recalling events in modern history where we've simply allowed millions of people to starve. From a BBC article:

The scarcity, Mukherjee writes, was caused by large-scale exports of food from India for use in the war theatres and consumption in Britain - India exported more than 70,000 tonnes of rice between January and July 1943, even as the famine set in. This would have kept nearly 400,000 people alive for a full year. Mr Churchill turned down fervent pleas to export food to India citing a shortage of ships - this when shiploads of Australian wheat, for example, would pass by India to be stored for future consumption in Europe. As imports dropped, prices shot up and hoarders made a killing.


I guess if I was to assume a scenario that could lead to the starvation of millions, I'd imagine a poorer country making the mistake of relying too much on some sort of electronic platform to trade and save their money. Let's say this country/region also relied too much on exporting some agricultural commodity that was being affected by a change in climate.

A catastrophic attack on their banking platform could theoretically destroy the local population's confidence in the trading currency as well as scare away foreign lenders. It may create incentives where it's more advantageous to hoard food and sell it on the international markets rather than distribute it to local customers who can't pay.

Free markets tend to create the most value in the long run. In some situations, hoarding can create incentives to distribute to underserved areas. In scenarios where the underserved areas do not have a means of payment (monetary, barter, indentured servitude, etc.), free markets and hoarding can simply be horrifyingly cruel.

What are your thoughts?

Should there be some catastrophic collapse in society, I would far prefer that the government requisitioned food and rationed it out. While it’s definitely open to abuse, I think it would do a better job in the short term of keeping people alive. A free market response to a national emergency sounds dreadful to me

I don't disagree. Most times, I would prefer the decisions of how people get the things they need are made by a network of people with incentives to provide and profit rather than central planning. However, if the situation is dire and the incentives create a deadlock, I think thought-out, extraordinary measures to help people are warranted.

> My point was simple. Despite the systems of trade, a catastrophic shock in trade or production systems could literally kill millions in a way that is more brutal and horrific than instant obliteration.

Systems of trade have made the market economy of the US more vulnerable to many kinds of "catastrophic shock[s] in trade or production systems." IIRC, there are only a few days of slack in the US food supply chain. That's down from a month or two during the Cold War (when I think there were mandates for reserves).

Supply will be lower than what’s currently available. Prices could go up. If so, lower income households may not survive.

So what happened during the Irish Potato Famine?

The British.

Laissez-faire in practice doesn’t seem so desirable

Capitalist countries still centralize their control, surveillance, and communication into few hands with little diversity. The market forces you describe only apply to the early days of capitalism. Most capitalist societies are long past that, at a stage where the strong early players have re-written the rules and formed quasi-state monopolies.

Just look at how many communications companies the US has. And the government had to step in and break that up because there used to be just one. Even now they are quietly conglomerating back together, and there are no significantly different options available. It's still very vulnerable to an attack or flaw due to lack of diversity.

Not sure how much that really buffers anything. In a lot of mature markets, the fact that the whip-hand is held by another corporation doesn't make much practical difference - what happens if someone finds an attack that bricks every Caterpillar tractor and hauler?

It doesn't matter if C&C is corporate or state, they break the same way.

In 20 years, a virtual WMD may well instantly obliterate millions of people.

I'm already unsure of what the most possible damage someone could do with over-the-air automobile firmware updates is today, just to take one example. What would it be like if someone put out a virus that at 11:32:42am on March 3rd, 2036 causes every GM, Ford, and Tesla self-driving car to lock all the doors, floor the accelerator, and let the chips fall where they may?

Consider not just the immediate impact of the crashes, but the fact that you just completely obliterated emergency services (they couldn't hope to serve but a tiny fraction of the victims), choked every major road and most of the minor roads with wreckage, wrought a catastrophe so large that while I don't predict what the effects would be, we're talking something more defining for a generation that would handily compete with both World Wars combined for psychological effect, with the Great Depression tossed in for good measure... it would be astonishing.

I'm not even sure we couldn't get close to that in 2018, to be honest. What if by some horrors the Stuxnet authors were set the task of making this happen? How close could they get?

The problem all virus authors have is escaping detection. 2036 is too far out for them to count on not being detected, or on cars being the same. Release it today, and even if you infect all cars and remain undetected, GM's and Ford's normal update cycles are likely to change things such that by accident your virus cannot spread. You can expect to get a handful of cars to accelerate out of control - and odds are the door locks don't work on them, so you failed to lock the doors.

Infecting cars is hard for other reasons. Radios tend to be easy to update (they can sell you new features - maps if nothing else). All other controllers tend to be more locked down, such that a virus likely couldn't actually spread to anything that can take control.

Maybe, but who knows how GM will change over the next 20 years. GM only has guesses.

"2036 is too far out for them to count on not being detected, and on cars being the same."

Sorry, I conflated two things here. I meant someone in 2036 setting a logic bomb for something like a month in advance in their time, and as a separate question, how close one could get to such a virus today. As we keep wiring up our cars to networks (not necessarily "the Internet", but networks), it's only going to get easier.

One of the problems I think will happen with cars, only accelerated by self-driving cars and the high probability that people will largely lease them rather than own them, is that the governments of the world are going to see a big pot of real-time surveillance data and real-time person-control mechanisms and won't be able to keep their hands off, mandating that cars get very connected and that cars have backdoors for authorities to take over and redirect them, etc. My 2036 scenario may not even involve a brilliant virus designer, but just one person with Python scripting skills and a bit too much access to the government control system.

It's not even that hard to imagine such a disaster happening accidentally. I'm sure, no sarcasm, that protections will be put into place, but there always has to be a developer back-door mechanism of some sort, and there may not be enough controls added, or they may not be added competently enough.

(And in terms of the protections of the cars themselves, remember that Stuxnet included the use of not one, but two code signing certificates that the Stuxnet authors clearly did not have true authority to use. If there's a way from the Internet to the control mechanism, even if it requires signed code, there's no guarantee a particularly capable and motivated enemy won't penetrate the protections.)

My scenario in 2036 may not even be a brilliant virus designer, but just one person with Python scripting skills and a bit too much access to the government control system.

After the LocationSmart vulnerability, that seems very plausible. (If you haven't seen it: https://www.robertxiao.ca/hacking/locationsmart/)

I think you would need to think along the lines of a global economy/technology/infrastructure collapse (no power production/utilities, (global) transport, financial crisis), with millions of first-world people being thrown into third-world conditions (no access to water, food, medicine) due to large cities depending on technology. Also see: https://archive.org/details/james-burke-connections_s01e01

Virtual weapons are worse when it comes to proliferation. And they are worse when it comes to identifying attackers. Both of these could make them more likely to be used.

Software which say opened the throttle and disabled the brakes on millions of vehicles simultaneously would be in the ballpark for total destruction in a short time. With self-driving cars, the total destruction can be optimized, hunting down pedestrians and hitting vulnerable infrastructure.

Blackouts. Do what Stuxnet did to the control rooms of a large number of power plants, spinning up the machines too hard, and coordinate the attack so it triggers in a large number of places at once.

If you can pull this off on a continental scale, you're looking at potentially months to restore power everywhere.


Actually for power plants it isn't quite so simple. Because turbines and turbogenerators at nominal speed are quite close to the limit of what the material can support, they have multiple independent fail-safes. For example, if you tried to spin a turbine above safe speed, quick acting valves redirect the steam elsewhere in a fraction of a second.

However, there are absolutely things you could do in a power plant that greatly accelerate wear. It might be possible to accelerate wear enough to achieve failure of some parts before the next maintenance happens.

"Just shutting a plant off" on the other hand is not too difficult; for most plants and upstream systems "off" is the safe state, so all systems are designed to fail into that state, if they really have to.

Without power, logistics & supply lines stop working, no more groceries after a few days, riots and plunderings in the streets a week later...

Anything that doesn't require refrigeration would still work though, trucks run on diesel after all.

Which makes me concerned about the future; if the transport network becomes electric, a power outage will cripple things even more. Unless we build self-contained, internet-disconnected charging stations maybe, but that's not going to be done at any kind of significant scale.

I think you're missing the point.

You can maybe drive a truck, but there is a lot more to a logistics pipeline than actually driving the truck. Without power, the whole scheduling automation etc. has to be done manually, and it's by now simply impossible to do every automated job manually again.

Your logistics pipelines could maybe run at a few percent efficiency if every step is manual, but by then your trucks will be raided on the streets.

Just a guess, but: Gas stations use pumps that run on electricity.

They are still pumps: a physical system. Give me a couple hours and I can get gasoline out of any gas station without needing electricity. Of course I will destroy large parts of the machine in the process. I'll get my gas, and so will anyone else who turns the crank. The pump will need to be replaced to use it normally afterwards.

The bigger worry is that gasoline in tanks is good for at most a month. The refinery is much harder to start/operate without power. They have their own backup power on site (I assume), so this might or might not be a real worry. If it is, I'll just brew some ethanol.

Maybe you would, but would people everywhere? Also, the tanks are underground, encased in concrete, with a submersible pump, and there are no power tools.

Might be possible to get to them. But that's solving one problem. If we're talking continent scale blackout it's also unclear what that gets you.

Typical backup power is measured in hours or days. Black-start time for a blackout of that scale is potentially significantly longer than that.

Long before petrol in refineries runs out our logistics system that distributes food to people has failed. The water system probably has failed. Now if you're in the countryside with a full pantry and a stream next to you that might not matter too much. You'll sit this one out. But if you're in the middle of New York?

I said I could, not that I would.

My first thought is that when this happens, society will get itself going again in a few weeks. I wouldn't want to be the looter who robbed a gas station during the trouble. By the time it was obvious that society wasn't getting back together, the gas in their tanks would be bad, so I wouldn't want it. I'd be more interested in robbing the hardware store to get shovels and other supplies for gardening so that I can live long term. Hopefully my neighbors are helping as well; division of labor is helpful.

This assumes I survive. Anyone who is this interested in destroying society is probably going to use other means as well to kill people at the same time.

This was the subject of the novel Blackout by Marc Elsberg (https://en.wikipedia.org/wiki/Blackout_(Elsberg_novel))

This article is relevant:


It highlights the shipping "chokepoints" where disruption causes potential food crisis for where the ship had intended to deliver its payload. If the infrastructure which manages these pathways is attacked, the security of these regions is in jeopardy.

> I'm having a hard time imagining a virtual WMD that is worse than the instant obliteration of millions of people.

Incidentally, my impression has always been that, at least with the comparatively low-yield atomic weapons that have actually been used, it's not the instant obliteration that's the biggest problem, but rather the lingering effects of fallout and radiation sickness.

Presumably the person you replied to is including the potential takeover of nuclear weapons by terrorist hackers, etc.

I would consider that to be an 'existing WMD' rather than some new 'virtual WMD'

If you can shut down enough utilities (electricity, mobile telecommunications, television & radio station, water treatment plants, access to water...) at the same time on a wide enough area, it would be devastating.

Imagine a delayed killswitch in Intel's ME and AMD's PSP.

Think of controllers for say a dam, or an autopilot system in a jet.

How would either of these be worse than a nuke going off in Hong Kong or NYC?

If you took out a dam in Montana, the following chain of dam failures would cut the United States in half all the way to the Gulf of Mexico and destroy US agricultural output. The US produces 40-50% of the world's soybean and corn supply. Long term, you're probably talking billions of deaths due to food shortages.

Dams are dangerous beasts. The KMT breached some dikes in the 1930s for the purpose of environmental warfare, and in the process killed half a million peasants and displaced millions more.

Which you consider worse than the instant obliteration of millions of people?

I'm not sure it would be. I don't think we've considered all that can happen with a sophisticated worm.

The problem with leveraging nukes is MAD. With worms, you can do a lot of damage without even knowing who did it. Think the Anthrax attacks in 2001 x 1000.

With worms, you can do a bunch of damage over a long period of time without getting discovered. What's the US going to do, start a nuclear war over it? No, see MAD above.

I mean if a worm could figure out how to stop shipping (say simultaneously disable control / start systems of vehicles or gas pumps), people will start to starve after a few days, then probably total chaos will happen leading to a bunch of deaths. That's just a single scenario.

How about if a worm took control of all the air traffic towers simultaneously and changed the information so that controllers would start crashing planes everywhere?

I know nuclear war has been played out on tv and in movies for the last 70 years or so, but I don't see an all out nuclear war between two states lobbing hundreds of warheads at each other ever happening. At least not intentionally. Any type of nuclear detonation would either be accidental, or very isolated.

Mosul dam in Iraq was in serious trouble, and some argued that it might collapse after the second Iraq war. If it's ever breached, the disaster could kill as many as 1.5 million people living in the city below and displace a further 5 million. It's not beyond the realms of possibility that a Stuxnet aimed at the dam's control systems could kill more people than an attack with nuclear or chemical weapons.

A significant amount of how our society approaches technology issues almost seems like everyone has agreed that they WANT a gigantic catastrophe. Like they want to see an action-movie-scale real life supervillain to emerge who uses technology to severely harm people. I won't be surprised when one nutcase has prison doors flying open, planes falling from the sky, ATMs nationwide spilling into the streets, stock market prices spinning randomly, electrical grids frying themselves and everything attached, all at the same time.

Yep. Our (global) civilization carries within it the seeds of its own destruction.


Corollary: It is vital (no pun intended) that we learn to live in harmony with Nature or we will destroy ourselves.

You're 100% right, but you should use the word cyber, not virtual.

I know hackers hate the word cyber because grandma uses it, but it's the right word for it. The stand-in "computer based" almost works, but it doesn't cover things like hacking radios.

Eh, I'm not so sure. One of the biggest appeals of our current WMDs is that they can take out the enemy's WMDs in addition to infrastructure. An attack that paralyzes an entire nation's computer network and sends self-driving cars crashing into substations doesn't mean anything to the group of guys in a bunker/submarine with the keys to the 50-year-old rocket powered by vacuum tubes.

Even more interesting and along the same lines of thought, Stuxnet was probably considered a failure in the eyes of its creators. The fact that we're discussing it, analyzing it, and patching its exploits is probably the exact opposite of what its creators wanted for it, even if at a point it did have the desired effects.

But now, everyone's wiser, so the game just got more complex.

No, I'm sure whoever created it doesn't consider it a failure.

Its mission was to destroy some expensive industrial centrifuges and set back Iran's nuclear program. And it destroyed some centrifuges precisely as it was designed to. At that point discovery is inevitable, but whatevs because "mission accomplished".

> Its mission was to destroy some expensive industrial centrifuges and set back Iran's nuclear program. And it destroyed some centrifuges precisely as it was designed to. At that point discovery is inevitable, but whatevs because "mission accomplished".

I think it might be considered a partial success, but mostly failure. It did successfully set back Iran's nuclear program and destroy some centrifuges, but it spread too widely so it was probably detected much more quickly than desired.

Also, if it had been discovered only at the nuclear fuel plant, Iran might have kept quiet about it out of embarrassment, allowing it to be deployed elsewhere. Instead it was picked up by a major AV vendor and dissected very publicly.

^ Exactly

If Stuxnet was as successful as I'm sure its creators wanted it to be, we wouldn't be discussing it.

Not necessarily. It was bound to be found at some point. The article mentions it took at least a year before anyone knew about it.

And perhaps there is an even more technical worm out there still hidden and stuxnet was merely a first draft.

Perhaps, but perhaps the creators don't know of any more holes. Or perhaps they knew about more, but the focus on security this created resulted in not just the holes Stuxnet used being closed, but the others as well. Or perhaps the creators know about more, but their targets have added layers of security, so they can't actually get their next worm where they want it.

A lot of unknowns. Those whose job it is to secure systems have their own tricks.

Kinda makes you wonder why they didn't build a way to detect when it had reached its target so that they could have the remnants on other machines removed.

"Russia has hacked into many of our government entities and domestic companies in the energy, nuclear, commercial facilities, water, aviation and critical manufacturing sectors"


The same was also reported by MI5, Europol and of course within Ukraine.

And, no doubt, we (USA/Western democracies) have hacked theirs.


Forbes, definitely an objective and impartial source about Russia.

Is there a reason to believe it's not? I'm not familiar with any particular Russian bias from Forbes.

Forbes doesn't have a Russian bias. It has a business model bias. In short, it milks its good name while paying a low per click rate.

This combination is very, very attractive to propaganda operations. People who see themselves as writers cannot make an actual living there, but writing for a publication their parents recognize feeds their ego.

This can work either overtly or more subtly. They can simply reward useful perspectives with clicks, or they can offer additional steady money work through other channels. Either way they have an entirely deniable useful idiot.

Forbes is particularly full of this.


I happen to realise all social media apps are kind of malware.

I've been arguing about this for the last three days, mostly around the point that "complexity" is not strictly the same thing as "sophistication" when it comes to software. Noobs will conflate the two, but experienced programmers will agree that -- just to illustrate my point -- code which solves a complex problem in a very clever way while also being very clean and easy to maintain is strictly more sophisticated than other code solving a similar problem that simply has a higher degree of complexity. There is a subtle difference when it comes to software, and this subtlety needs to be considered in this question. Now, I think Stuxnet is a fantastic answer to this question, for a number of reasons:

1) The legal, ethical, technical challenges of creating the software.

2) The ability of the software to remain hidden in (sophisticated) environments rich with (sophisticated) organizations looking for exactly this kind of thing.

3) The stealth of the entire research, design, development, and deployment phases of the project.

4) The highly specialized nature of the target.

5) The scale of the entities involved.

6) All of this sophistication and we can't even see the source code (decompilation doesn't count).

This is frankly some impressively sophisticated software. Also, incidentally, the Quora poster's company looks like a fun place to work (with good programmers on the team). Some of his other answers are thoughtful and interesting to read, too, if you get the chance.

On complexity vs sophistication: During the cold war, a US company noticed the USSR had stolen the plans for a natural gas pipeline system, but not the software.

In response, the US introduced an integer overflow bug that was uptime dependent, and took something like 6 months to hit. The bug simultaneously cranked up the pumps and closed all the valves in the network.

It was known that the Soviet economy would crash in under a year without the ability to cheaply move natural gas, so they couldn’t test long enough to find this.

A year or so later, the DoD’s seismographs detected the largest non-nuclear explosion in human history.

The main impact wasn’t the explosion or the short-term economic damage. The main impact was that the USSR stopped trusting stolen software, which set them way further back, economically and militarily.

Arguably, that ~one line of code was infinitely more sophisticated than stuxnet.

Wow I have never heard this story before! I had to look it up, for anyone else interested here's a quick wiki about it https://en.wikipedia.org/wiki/Siberian_pipeline_sabotage

This didn’t actually happen, but it makes a good story that is often repeated.


Also, this story seems to be taking on a life of its own. You have some details that were not in previous rounds. Integer overflow based on uptime was not in the original unverified story.

Do you have a source for this? I can only find reports from "At the Abyss", which are uncorroborated.

somewhat described in the book Victory, by Peter Schweizer

How do you even define sophistication in the context of computer software? Let me try by defining the opposite. What would an unsophisticated computer program be? I would say a brute-force algorithm: it is unsophisticated because it uses simple logic and an iterative approach to achieve its aim in an inefficient manner. So a sophisticated program is: concise (non-iterative), efficient (doesn't use too many steps to arrive at the destination), and uses non-simplistic modeling of the given problem. Anyone have anything better :)?

Do you have a link to more info? Would love to read more but can't seem to find anything.

> On complexity vs sophistication: During the cold war, a US company noticed the USSR had stolen the plans for a natural gas pipeline system, but not the software.

I think this account might be a bit wrong. The one I read said that the CIA acquired a "shopping list" of Western technology that the USSR wanted to acquire. It included the pipeline control software, so they arranged for a trojaned version to become available to the Soviet shoppers.

Apparently this was a pretty common Soviet activity. Their electronics technology was behind the West's, in general. IIRC, many US semiconductor designs had little cartoons on the dies to taunt the Soviet reverse engineers.

I want to second this. The things I wrote throughout my career that I'm most proud of aren't very complex.

I find it very satisfying to understand a problem so well, up to the point you can find a simple and elegant solution to it. It makes the solution easier to reason about with other team members, and easier for the team to maintain it later. I see this as making your domain expertise available as a framework for the other team members.

This is my idea of sophistication in the software development world.

Simplicity is always a sign of a quality solution. I’m not sure why anybody would ever conflate complexity with quality or sophistication. I wrote some very complex code in college, but it wasn’t very high quality.

To date, my most complex piece of software in terms of how it was implemented was a simple calculator written in my first programming course. About a week before I started it, I had learned about regular expressions (the programming construct, not the formal language construct).

The basic implementation was a calculator that could add, sub, mul, div, pow, and sqrt. Bonus points were awarded for adding additional functionality including lettered variables. It started with a relatively clean shunting-yard implementation, but my use of regular expressions quickly fixed that.

My calculator worked under most circumstances, but that thing is an eyesore. I'll never get rid of that source code. I like to go back and look at a few key pieces of software I've written through the years.

Yes, in my experience, solving a problem / feature well involves three steps:

1. Get the code to work
2. Clean up the code
3. Simplify the code

1 is self explanatory. 2 involves removing any logical redundancy, separating and cleaning the logic into methods, etc. 3 involves simplifying logic and logical mechanisms.

Most developers only do step 1 and maybe step 2. Step 3 is where beauty comes in.

If I could organize my thoughts around this concept, it would probably make a pretty good article. I don't know who the source is but a good quote goes something like this, "real genius isn't solving the complex, it's solving the complex in a simple way."

“Any fool can make something complicated. It takes a genius to make it simple.”

I made a poster with this for my office wall.

This fits perfectly with my own experience. The 3rd pass is always the one that makes the difference in terms of code that just feels nice/clean.

Indeed. I think that the mark of sophistication is not complexity but irreducible complexity. Sophisticated solutions are those that have been reduced to their simplest form.

I'm wondering why you think this is true. Using Stuxnet as an example, would it not be better if they used every feature within reason that would add to the likelihood of success? Why does reductive elegance make an application of technology superior to an inelegant solution? Does final effectiveness not overrule the developer's aesthetics?

I'm asking in honesty, not using the question to merely attack your opinion. I recognize there are things I have probably not considered.

Referring back to my natural gas pipeline example, just install the nasty centrifuge control software at the factory.

It is easier and more likely to succeed. Also, Stuxnet has now been repurposed by multiple governments and criminal organizations. So its creators built a symmetric capability and gave it to their adversaries, when they could have used asymmetric capabilities that are more expensive to reproduce.

[edit: Also, it is still well within the Iranians’ capabilities to build a bomb, and our recent foreign policy greatly increases the chances that hard liners will take over and restart the program.

Contrast this to the outcome of the broader cold war strategy, which was a regime change to a relatively US-friendly government.]

Irreducible complexity sounds like an incredibly high bar.

If you are a state actor, it's not difficult to gather a couple of programmers from the "antivirus" and security field and build something that is hard to detect.

It would be impressive if it was the work of a teenager but it's not.

The SCADA stuff was novel and interesting though and goes above and beyond your average malware both in what it did and the idea to do it in the first place.

A really creative hack, so to speak. (or destructive? anti-destructive? shrug)

And just to reiterate. They built a rootkit that stealthily sabotaged an effort to build nuclear weapons in a way that just made it look like the people who were trying to do it were just incompetent...

If that's not cool, I don't know what is...

You also somehow need access to the industrial-grade target platform in some way, shape, or form, which is not something a teenager has lying around in their room, nor do they have extensive knowledge about it to this extent.

You can buy the Simatic PLCs they targeted on ebay for a few hundred bucks.

Industrial automation stuff like this is a common part of the "technical high school" curricula where I live. (of course only on the "press this button and then that happens" level)

> not something a teenager has lying around

Ha, generally true probably, but in my case as a teen I got my hands on a bunch of old industrial PLCs and built all kinds of interesting things in my room. :-) Ladder logic is how I got started programming; I didn't even have my own computer back then.

Yeah, you understand exactly what I was trying to convey when I wrote the article. Thanks.

If this short read piqued your interest in Stuxnet, I can recommend the book "Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon".

It explains in great detail how Stuxnet worked and, which I found the most exciting, how it was discovered and reverse engineered.

I read this book a while ago.

Whilst I enjoyed the multiple viewpoints it provides (some claim that Stuxnet was actually quite sloppily written, depending on numerous factors), it happened to be one of those books which wrote 100 pages worth of information in 400 pages instead and dragged every little point on. YMMV.

If you read this book purely for its informational value, I agree with your assessment.

That being said, I read it mostly for entertainment and I think the author did a good job of packaging a lot of factual information into a captivating story.

That being said, not all parts are created equal. There are quite a few pages dedicated to looking at the number of centrifuges Iran was installing and the amount of gas they enriched, as these were the metrics Stuxnet was affecting. To me, that was as exciting as reading a company's monthly inventory report.

But I guess that's to be expected in a book that tells a true story instead of just being based on true story.

I hadn't read the book when I wrote the article. I tried to get all the salient facts in, with as few words as possible. "Omit needless words" - Wm. Strunk.

The movie felt similar to me. It's been sitting on my In Progress list for years. I loved what I saw, but it lost its hooks in me.

If you are not a book type of person, I recommend watching the movie Zero Days :)

IMDB link: https://www.imdb.com/title/tt5446858/ (2016)

Looks good! Might have to check it out tonight.

Thanks, I always enjoy HN recommendations for books. Even better there is an Audio Book version!


I second this; I really enjoyed that book.
