Stuxnet recorded various data points while the cascades and centrifuges operated normally, in order to replay this data to operators once the sabotage began. They must have had a working system to test this on! The budget for something like this is probably in the tens of millions, if not more. The HR requirement must have been pretty large too: analysts to gather information, managers, programmers, QA, Siemens hardware experts, physicists, deployment, monitoring, etc.
Absolutely. This was a massive defense spending project by any measure. How many people do you think worked on it? Assuming the project was highly compartmentalized, I would estimate that there are at least SIX subteams currently working on the next Stuxnet.
- 0-Day exploitation of PCs. How big is the team responsible for discovering / purchasing 0-day exploits?
- Hardware/firmware-level infection. This would require expert knowledge of the specific control systems.
- Networking / infrastructure. This requires an intimate knowledge of target network topology.
- Boots-on-the-ground payload delivery (nontechnical).
- Spear-phishing payload delivery. Perhaps the points of entry were several levels removed from the actual target facility (e.g., security guards' wives' laptops).
- Testing / QA.
All of this of course has to be backed up by world-class intelligence support, which I shan't address further. The technical feats of developing this alone are astounding and intriguing.
Given the speculation that it was the US behind Stuxnet, this one is cheap and easy. The US has been buying up ready-made exploits for a good while now (there's a reason the likes of Raytheon are hiring exploit devs left and right) and has nice stockpiles of them ready and waiting for the likes of Stuxnet.
If you're curious what companies are actually committing to vulnerability dev you can search any cleared jobs site for "offensive"; the companies that have listings are who you'd imagine them to be (minus a couple placement firms that just put people right at the Fort).
What's going to happen when the first Chinese/North Korean/... company succeeds at actually doing this? When will we have the first startup doing it? Startups are known for creativity, both in technical development and interpretation of the law, so why not?
The cost for things like this needs to go up, by a lot, fast. Or we're going to be in a deep hole.
"One day, toward the end of Mr. Bush’s term, the rubble of a centrifuge was spread out on the conference table in the Situation Room, proof of the potential power of a cyberweapon. The worm was declared ready to test against the real target: Iran’s underground enrichment plant."
And I don't mean to stray off Stuxnet here, but just really quickly: the chosen-prefix collision attack used to sign the Windows Update malware (Flame), also suspected of being from the US, was a never-before-published variant.
The computing power alone cost on the order of $200k, which makes you wonder what else the NSA or the national labs have up their sleeves.
Is anyone aware of a somewhat comprehensive auto-update cryptography survey anywhere?
I am often alarmed by the number of updates pushed through desktop software, often with little explanation. (I'm looking at you, Adobe.) And not just for security reasons, but for bandwidth management too.
Many open source products seem to just query a URL and direct you to go download stuff. With SSL essentially broken, that's gotta be a bit risky against MITM attacks.
Gentoo, for one, combines pre-distributed SHA256, SHA512, and Whirlpool checksums with file size, which feels secure enough against collisions. But the pre-distribution is decentralized through potentially untrusted parties (i.e., open to MITM), the cryptography around that process, if any, is less than transparent, and integrity checking apparently isn't performed on the locally extracted package database.
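The core of that Gentoo-style check is straightforward. Here's a minimal sketch in Python, assuming a hypothetical metadata format (real Gentoo Manifests have their own layout); the point is that an attacker must match the file size AND collide every hash simultaneously:

```python
import hashlib
import os

def verify(path, expected):
    """Check a downloaded file against pre-distributed size + multiple digests.

    `expected` is a dict like:
        {"size": 11, "sha256": "<hex>", "sha512": "<hex>"}
    (hypothetical format; any single mismatch rejects the file).
    """
    if os.path.getsize(path) != expected["size"]:
        return False
    # Stream the file once, feeding every digest in parallel.
    digests = {name: hashlib.new(name) for name in ("sha256", "sha512")}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            for d in digests.values():
                d.update(chunk)
    return all(digests[n].hexdigest() == expected[n] for n in digests)
```

Of course, this only pushes the trust problem back to how `expected` was obtained in the first place, which is exactly the weak link described above.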
Perhaps we need a standard, cross-platform solution in the software-update space: one that is cryptographically paranoid, well-reviewed enough by multiple parties to be considered secure, meets the generalised need, and has OS-level integration features more advanced than "secretly do things in the background".
There's nothing stopping one from linking against their own copy of an SSL lib, and supplying their own list of trust anchors/trusted CAs. I've been wondering for a while why lots of apps (e.g. mobile apps) don't do this more often.
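A minimal sketch of that approach using only the Python stdlib; the server name and CA-bundle path are hypothetical, and real apps would ship the bundled CA with the binary:

```python
import ssl
import urllib.request

def pinned_opener(ca_bundle=None):
    """Build an HTTPS opener that trusts ONLY the given CA bundle.

    Shipping a bundle containing just your update server's issuing CA
    means a compromise of any public CA can no longer mint a certificate
    your updater will accept. With ca_bundle=None this falls back to the
    system trust store (the usual, weaker default).
    """
    ctx = ssl.create_default_context(cafile=ca_bundle)
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return urllib.request.build_opener(
        urllib.request.HTTPSHandler(context=ctx))

# Hypothetical usage: bundle updates.example.com's issuing CA with the app.
# opener = pinned_opener("/etc/myapp/update-ca.pem")
# manifest = opener.open("https://updates.example.com/manifest").read()
```

The same idea is what's usually called certificate (or CA) pinning on mobile; the main operational cost is that you now own the CA-rotation problem yourself.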
Given the above, perhaps all distribution maintainers can realistically do is say "it hasn't changed since I first saw it" which is what happens when they provide multiple checksums of a file, which is probably lower CPU and software library overhead than performing a cryptographic signature check.
I recall reading somewhere recently that the NSA has their own entire chip fab... this is to say nothing of the output of such a facility.
We could just as easily nuke, bomb or invade Iran - we would easily overwhelm them.
But sabotage is a hell of a lot easier, cheaper, faster and less risky, with no civilian deaths.
But seriously, what about hyper-velocity "rods from god" thrown at those nuclear facilities from orbit?
I don't trust the US just like I don't trust these Muslim states. Neither are really pro-freedom.
If it walks like a duck ..
Oh, I forgot, "they" are all rag head, terrorist, fundamental nut job ay-rabs.
Pretty sure the US armed a lot of the terrorists in the first place. When you suck the resources out of the world and push people into starvation whilst living in the land of plenty, you're obviously going to become a target.
Maybe the US is ALSO the problem here?
There are millions of Muslims representing hundreds of view points.
"All Christians are homophobic retards." What? There's nothing wrong with that statement. It describes those Westboro morons and therefore can be extrapolated to every Christian, right?
The speculation is that Stuxnet was tested on P-1 centrifuges that the US acquired when Libya dismantled its nuclear program, set up in Israel's nuclear arms facility in Dimona. 
Given the success of Stuxnet, it's nearly certain that such offensive cyberwarfare programs have gotten increased funding and support from the highest levels of command. From the article, Stuxnet 0.5 C&C servers first went online in 2005. 2005! George W. Bush ordered the deployment of Stuxnet!
I personally cannot wait to hear about what the cyberweapons of 2013 look like.
For example, let us consider the development and deployment of Stuxnet to be analogous to a miniature Manhattan Project. What proportion of the physicists and engineers in the Manhattan Project do you think were aware that they were building a large bomb? I would guess as much as 20% of the personnel directly involved with development knew what the project was. This includes the "integrators" - project managers and people in similar roles that need to know how different pieces fit together. I imagine the same is true of Stuxnet.
The thing that struck me most was the use of the word "weapon". Jeff Moss warned in his 2011 BlackHat opening speech that blurring the line between cyberwarfare and actual warfare is inevitable. Wired's use of "weapon" here signifies that shift, and really reinforces the fact that each one of us who is writing software may play a part in cyber wars, even if inadvertently.
It may have been an unintentional use of "weapon," as Stuxnet is referred to as a "cyberweapon" throughout the article, but the point that we are moving towards describing cyber warfare as actual warfare still stands.
A Windows mailslot, for reference:
* is a pseudofile that resides in memory
* is accessed using standard file functions
* cannot carry messages larger than 424 bytes between computers
* can broadcast messages within a domain
I could see using mailslots as a mechanism to disguise traffic and potentially thwart NIDS. SMB broadcast traffic is considered "noise" by a lot of admins and might well be excluded from traffic monitoring to prevent "chatty" traffic from filling the logs. Using mailslots, as opposed to rolling a custom broadcast-based protocol, makes the traffic sink into the normal SMB noise floor.
Is this sort of functionality still present in Windows? If so, are they idiots or what?
(Actually, I'm not really wondering; it's probably naive to assume it wouldn't be.)
Today, if you search for the specific phrases used in the navigation bar, Google returns only 3 websites:
The terms are:
"media planning" philosophy "creative services" "search solutions" ecrm "ad serving"
Sadly, these sites just look spammy rather than like fake sites set up by the CIA (and Alexa shows some SEO work has been done... but that could be part of the facade).
Still, fishing for CIA C&C servers sounds like a fun game; they must be out there today. Anyone have any ideas how to find them?
Follow the malware. Dan Danchev used to be quite forthcoming with his analysis until he wasn't anymore. If you set up a malware aquarium you can see the C&C servers these things use. Although not all malware reproduces in captivity.
The example I gave to a politically minded friend: Imagine a political drama with dialog like this:
"We've found a bug in the parliamentary procedure! Call the senator!"
"Oh no! Quick, we've got to omnibus the filibuster before the cloture overflows and the whole bill crashes!"
… that sorely needs to incorporate refactoring.
I remember reading about the COTS (Commercial Off-The-Shelf) program in the late '90s and the use of Windows NT 4 on AEGIS vessels. Supposedly, there was a protocol for rebooting everything every two weeks, so that, hopefully, nothing critical would be down the moment there was an attack. (To be fair, the NT4 kernel is rock solid, so long as you leave it unmolested, which Microsoft didn't.)
Well I've been on a boat that used NT4 for stupid office tasks and HP-UX somewhere in the actual Combat Control System.
Guess which one shit the bed in the middle of our graded inspection when we were supposed to be tracking a simulated enemy in a life-or-death situation? (Hint: MS didn't write the OS).
To rephrase it a bit, there are vanishingly few pieces of gear that the Navy assumes must work in the middle of an attack, and most of the pieces that do fall under that assumption have manual overrides/backups/inherent redundancy/etc. In our situation, we switched over to paper-based methods and managed to keep the contact situation until the system could be rebooted.
So if the Navy builds a ship that is single-point-of-failure on any commodity-OS-driven computer they deserve what they get. We've known since before WWII that survivability in combat requires redundancy.
That's reassuring to know. What are the "must work" bits?
With the move toward computerization and long-range missile-based combat there's probably a lot of risk with the Fire Control System, Radars (e.g. AEGIS), stuff like that. But even blind you can at least run away, and the CIWS has an independent fire-control radar for last-resort self-defense.
Submarines are more problematic. There's only the one pressure hull, only the one reactor, only the one main propulsion train, and watertight compartmentalization only exists for the reactor compartment.
This makes everything about subs more expensive since all work that affects these things has to be formally controlled and QA'ed, re-tested, etc. to avoid losing more subs like we lost Thresher and Scorpion.
And from what I remember it was SOP to reboot at least once a week. Daily wasn't uncommon either.
I suppose development of the software could have started without knowing which PLCs it would eventually target, but that seems doubtful to me. Of course, the easiest explanation is that I'm missing something in the timeline.
I'm not so sure that nowadays, with all this Stuxnet insight, people would be so quick to label these people conspiracy theorists.
Also, remember that Windows source code did leak out at one point, with all the comments and variable names in the clear, etc.
One has to wonder how "open" Windows actually is to the NSA and if all these 0-days so commonly found are really honest mistakes or not...