edit: Reading in more detail around there, I am pretty sure that section of the article is referring to the CSPRNG vulnerability above. The article covers a lot of ground, and not all of it is about problems with SSL. That particular section seems to be arguing that the NSA is trying to put backdoors into standards wherever it can.
> Simultaneously, the N.S.A. has been deliberately weakening the international encryption standards adopted by developers. One goal in the agency’s 2013 budget request was to “influence policies, standards and specifications for commercial public key technologies,” the most common encryption method.
> Cryptographers have long suspected that the agency planted vulnerabilities in a standard adopted in 2006 by the National Institute of Standards and Technology, the United States’ encryption standards body, and later by the International Organization for Standardization, which has 163 countries as members.
> Classified N.S.A. memos appear to confirm that the fatal weakness, discovered by two Microsoft cryptographers in 2007, was engineered by the agency. The N.S.A. wrote the standard and aggressively pushed it on the international group, privately calling the effort “a challenge in finesse.”
> “Eventually, N.S.A. became the sole editor,” the memo says.
Now, that may not have been an effective technique: as you point out, it's so slow that no one is ever going to use it, and the vulnerability was discovered not long after it was published.
So that's obviously not a vulnerability they are actively exploiting. If they are actively exploiting a vulnerability they introduced, it must be something else. It wasn't clear from the article that this is actually the case; it may be that the vulnerabilities they are exploiting are ones they've found, not ones they introduced deliberately.
But it does appear to be an example of a vulnerability that they were able to get standardized, in the hopes of being able to exploit it. Until now, it has been only speculation that it was a deliberate vulnerability, but it now seems clear that it was.
The NSA seems to be really divided between SIGINT and COMSEC. COMSEC wants to provide good, strong encryption, that can help secure US government and corporate communication. SIGINT wants to be able to read everyone's traffic.
For example, they changed the DES s-boxes in a way that made it more secure against differential cryptanalysis. They've released SELinux. There is a part of the NSA that does actually try to make encryption standards stronger.
But then there's the part that advocated for the Clipper chip, pushed for controls on exporting strong crypto, and strong-armed NIST into standardizing Dual_EC_DRBG. That part does real damage: everyone suffers from weak export crypto (either people overseas have to develop strong crypto on their own, or products ship with weak or no crypto because regulatory compliance is too complicated), and people stop trusting US software and hardware.
The NSA seems to be doing real damage to technology companies in the US. I had thought they had gotten better about this after they gave up on the Clipper chip and lifted most of the export controls, but it looks like I was wrong: they've just decided to take more covert routes to the same end, hoping that none of the tens or hundreds of thousands of people who could find out about it would leak that information.
I think maybe it's because I started in the industry during the Clipper era that stuff like this doesn't faze me much.
The more you use a secret cipher, the easier it is to break. It is simply good operational practice to use a different cipher for a small fraction of communications -- namely, the most secret ones. Just like certain antibiotics are reserved for drug-resistant organisms. You don't want it to lose its effectiveness through overuse.
First, security-by-obscurity does indeed buy you some additional time. Because the cipher is secret, your opponent has to figure out the algorithm as well as the attack.
Second, this reduces the amount of traffic that the opponent can analyze. For example, suppose that only 1% of messages use Suite A, and 99% of messages use Suite B. With fewer messages to analyze, the job of breaking the cipher becomes much harder.
Third, the reduced volume also makes known-plaintext attacks more difficult. Especially if you avoid committing the cardinal sin of repeating the same message using two different ciphers.
There is some information known about some of the algorithms. Wikipedia has pages on BATON https://en.wikipedia.org/wiki/BATON and SAVILLE https://en.wikipedia.org/wiki/SAVILLE. You may notice that these are frequently implemented in hardware in radios, smart cards, video-stream encryptors, etc.: devices that are probably fairly resource-constrained, and would be hard to replace with new hardware if attacked.
If you look at the description of BATON, it has a 96-bit Electronic Codebook (ECB) mode. Yes, ECB, the mode that is famous for leaking information: because each block is encrypted independently, you can tell which plaintext blocks are identical and extract a good deal of information from that alone.
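To make the leak concrete, here's a minimal sketch in Python. A toy 8-byte Feistel cipher stands in for the real block cipher (BATON's actual design is classified, so this is purely illustrative): under ECB, identical plaintext blocks always encrypt to identical ciphertext blocks, so repetition in the message is visible in the ciphertext without breaking the cipher at all.

```python
import hashlib

def round_fn(half: bytes, key: bytes, rnd: int) -> bytes:
    """Keyed round function: hash of (key, round number, half-block)."""
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:4]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_block(block: bytes, key: bytes) -> bytes:
    """4-round Feistel network on an 8-byte block (toy, not secure)."""
    left, right = block[:4], block[4:]
    for rnd in range(4):
        left, right = right, xor(left, round_fn(right, key, rnd))
    return left + right

def ecb_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """ECB mode: each 8-byte block is encrypted independently, so
    identical plaintext blocks yield identical ciphertext blocks."""
    assert len(plaintext) % 8 == 0
    return b"".join(encrypt_block(plaintext[i:i + 8], key)
                    for i in range(0, len(plaintext), 8))

key = b"secret key"
# Two identical blocks followed by a different one.
pt = b"ATTACK!!" * 2 + b"RETREAT!"
ct = ecb_encrypt(pt, key)
blocks = [ct[i:i + 8] for i in range(0, len(ct), 8)]
print(blocks[0] == blocks[1])  # True: the repetition leaks through
print(blocks[0] == blocks[2])  # False
```

An eavesdropper who sees `blocks[0] == blocks[1]` learns that the first two plaintext blocks are identical, with no knowledge of the key. This is exactly why ECB is avoided in modern protocols in favor of modes like CBC or GCM that randomize each block.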
But even with fairly efficient hardware implementations available, encryption on Predator drone video feeds was disabled for performance reasons, and adversaries were able to intercept the feeds with off-the-shelf software: http://www.cnn.com/2009/US/12/17/drone.video.hacked/index.ht...
The NSA has approved both Suite A and Suite B for top-secret material. I really don't think that they have any worries about the security of Suite B (though as Schneier points out, you may want to be a bit paranoid about their elliptic curves, as it's possible that they have ways of breaking particular curves that other people don't, like they did with the Dual EC DRBG that they promoted). I suspect that Suite A is around for legacy reasons, as they have been implementing it for longer than Suite B has existed and many of the implementations are in hardware or otherwise difficult to update.
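The suspected Dual_EC_DRBG trapdoor has a simple structure, which can be sketched with a toy analogue in a multiplicative group mod p. (The real construction uses elliptic-curve points and truncates its output; the constants `p`, `Q`, and `d` below are made up for illustration.) The generator publishes two "independent" constants P and Q, but whoever chose them knowing d such that P = Q^d can recover the internal state from a single output and predict everything that follows:

```python
# Toy multiplicative-group analogue of the Dual_EC_DRBG trapdoor.
p = 2**61 - 1          # a Mersenne prime; toy-sized modulus
Q = 3
d = 123456789          # the designer's secret trapdoor exponent
P = pow(Q, d, p)       # published as an "independent" constant, really Q^d

def step(state):
    """One generator step: output r = Q^s, next state s' = P^s (mod p).
    The real Dual_EC_DRBG does the analogous thing with EC point
    multiplication and truncates r before emitting it."""
    r = pow(Q, state, p)
    next_state = pow(P, state, p)
    return r, next_state

# An honest user seeds the generator and produces two outputs.
s0 = 0xDEADBEEF
r1, s1 = step(s0)
r2, _ = step(s1)

# An attacker who knows d recovers the next internal state from r1 alone:
# r1^d = (Q^s0)^d = (Q^d)^s0 = P^s0 = s1.
recovered_s1 = pow(r1, d, p)
assert recovered_s1 == s1

# ...and can now predict every future output.
predicted_r2, _ = step(recovered_s1)
assert predicted_r2 == r2
```

This is why "nothing up my sleeve" provenance for cryptographic constants matters: the construction is secure against everyone except the party who generated P and Q, and the published standard gave no verifiable derivation for them.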
It is possible to break an unpublished cipher; it's just more difficult, because you've got to figure out the algorithm as well as the key. As long as it is similar to existing algorithms, you can look for the differences.
For example, American cryptanalysts broke the Japanese Purple cipher during World War II entirely from encrypted messages. It was only at the end of the war that they managed to recover parts of one machine from the Japanese embassy in Berlin. No complete machine was ever found.
(In contrast, Enigma machines were captured, so cryptanalysts could directly examine the mechanism and use this knowledge to look for weaknesses.)
Of course, if the algorithm is completely novel, and bears no resemblance to any principle used in published ciphers, then it's a lot more secure. It would be hard to even begin to analyze it.
Anybody care to guess which group was responsible for the FUBAR that gave Snowden the keys to the kingdom?
Unlikely, though. More likely Snowden was just acting on his own. And he didn't really have "the keys to the kingdom"; he just had more access than he should have had to a fileserver full of PowerPoints. Note that almost everything leaked so far consists of presentations in which various branches of the NSA describe their capabilities to each other and to other government agencies, but not the actual details of those capabilities. He probably had access to a fileserver used by higher-level executives at the NSA, but they do compartmentalize information, and as mentioned they were very secretive about exactly what those vulnerabilities consist of, so there's a good chance he didn't have access to systems where that was documented.
Something this blatant does seem like a severe misstep, but perhaps what led to discovery in this case is the wide body of public knowledge on number-theoretic crypto. The energy of the public sphere seems mostly devoted to studying problems with interesting mathematical structure. Symmetric crypto has been around a lot longer, and is sufficient for state-security purposes, so one would expect the NSA to have a deep analytic understanding of it (hence the differential-cryptanalysis olive branch). It's not hard to imagine that they'd have ways of creating trapdoor functions out of bit primitives, generating favorable numbers with plausibly impartial explanations, etc.
I think you got it backwards... shouldn't you reduce hard problems down to the problem whose difficulty you're trying to understand?