The NSA is dual-purpose: it is both a signals spy agency and a signals counterintelligence agency.
And ultimately, what's the difference between a publicly vetted algorithm proposed by the NSA and a publicly vetted algorithm proposed by someone else?
Everyone points at the Dual EC fiasco, but if vulnerabilities are possible either way, it seems like throwing the baby out with the bathwater.
Any food could be poisonous, possibly with a poison you don't know how to detect. Would you rather eat food prepared by someone known to have poisoned you before, with obvious motivation to poison you again, or someone with no history of poisoning people? In no case are you guaranteed to not have poisoned food, but I know which I'd prefer.
There's a strain of anti-intellectualism here. It's actually not true that any algorithm might be backdoored. Algorithms aren't unknowable.
Furthermore, people act like there's a track record of super-sneaky NSA backdoors, and that Dual EC shows we can't trust anything NSA produces. I don't trust the NSA at all. But Dual EC wasn't sneaky. There were only two surprising things about Dual EC:
(1) That they actually managed to get anyone to use it, despite how clunky and slow and unreasonable the design was.
(2) That, having produced a design that stuck out like a sore thumb for clunkiness, slowness, and unreasonableness, they would make that design their backdoor; we gave them a lot more credit for tradecraft than that.
People knew there was something shady about Dual EC almost from the jump. It is a random number generator that works by encrypting random state with a public key for which the private key is undisclosed. It's obviously weird.
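To make concrete why it looked weird, here's a toy sketch of that structure in Python. It is an assumption-laden analogue: plain modular exponentiation stands in for the real P-256 point multiplication, the 16-bit output truncation is dropped, and every constant is made up. It only illustrates the trapdoor shape, not the actual standard.

```python
# Toy analogue of the Dual EC structure. Assumptions: modular exponentiation
# in place of elliptic-curve point multiplication, no output truncation,
# made-up constants. Do not mistake this for the real generator.
p = 2_147_483_647                 # toy prime modulus, not a real curve
Q = 7                             # one public constant
d = 123_457                       # the trapdoor, known only to whoever chose the constants
P = pow(Q, d, p)                  # the other public constant: P = Q^d

def toy_dual_ec(state, n):
    """Each step advances the state with P, then publishes it 'encrypted' under Q."""
    outputs = []
    for _ in range(n):
        state = pow(P, state, p)          # next internal state
        outputs.append(pow(Q, state, p))  # what consumers of the RNG see
    return outputs

outputs = toy_dual_ec(state=42, n=3)

# Whoever knows d turns one observed output into the generator's current
# internal state, and from there predicts every later output:
recovered_state = pow(outputs[0], d, p)            # (Q^s)^d = (Q^d)^s = P^s, i.e. the state
assert pow(Q, recovered_state, p) == outputs[1]    # next output predicted
assert pow(Q, pow(P, recovered_state, p), p) == outputs[2]
```

In the real standard, P and Q are points on P-256 and only the top 16 bits of each output's x-coordinate are dropped, so anyone who knows the relationship between P and Q only has to guess those 16 bits to run the same recovery.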
That's not the case with SPECK and SIMON. They're mainstream designs, and, more importantly, if you can hide a backdoor in a simple ARX Feistel block cipher that can be implemented in 100 lines of code, we probably have bigger concerns than these algorithms.
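For scale, here's roughly what that 100-line claim cashes out to: a sketch of Speck128/128 in Python, going by my reading of the public design paper (assumed parameters: 64-bit words, rotation amounts 8 and 3, 32 rounds). It's illustrative only, not a vetted implementation, and the example inputs are arbitrary.

```python
# Sketch of Speck128/128. Assumed parameters (from my reading of the public
# design paper): 64-bit words, rotations alpha=8 and beta=3, 32 rounds.
# Illustrative only -- not constant-time, not vetted, not for real use.
MASK = (1 << 64) - 1
ALPHA, BETA, ROUNDS = 8, 3, 32

def ror(x, r): return ((x >> r) | (x << (64 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (64 - r))) & MASK

def speck_round(x, y, k):
    # The entire round: rotate, add mod 2^64, XOR the round key, rotate, XOR.
    x = ((ror(x, ALPHA) + y) & MASK) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def key_schedule(k1, k0):
    # Round keys come from running the round function on the key words
    # themselves, with the round counter standing in for the round key.
    round_keys, l, k = [k0], k1, k0
    for i in range(ROUNDS - 1):
        l, k = speck_round(l, k, i)
        round_keys.append(k)
    return round_keys

def encrypt(x, y, round_keys):
    for k in round_keys:
        x, y = speck_round(x, y, k)
    return x, y

# Arbitrary example inputs (not a published test vector):
rk = key_schedule(0x0f0e0d0c0b0a0908, 0x0706050403020100)
print([hex(w) for w in encrypt(0x0123456789abcdef, 0xfedcba9876543210, rk)])
```

The point is that every operation is a 64-bit rotate, add, or XOR, the only constants are the round counters, and there are no S-boxes or magic tables to tuck anything into.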
Should you use SIMON and SPECK? Of course not. You shouldn't go anywhere near lightweight ciphers unless you know exactly what you're doing, and if you know exactly what you're doing, there are less politically controversial lightweight ciphers to use. The problem here isn't that the world is being deprived of SIMON and SPECK; it's that we're getting too practiced at turning our brains off.
In the past there was a common idea that NSA's classified research might be a decade or more ahead of the academic world, even in an era when the academic world had gotten interested in crypto in earnest, and that they might know of entire classes of vulnerability that other people didn't. Probably the clearest precedent in support of this concern was differential cryptanalysis.
Recently I've heard more of a suggestion that we understand crypto dramatically better than we used to, that cryptanalysis that appears to be hard generally is hard, and that spy agencies have been migrating largely to side-channel attacks and exploitation of vulnerabilities (and maybe also supply-chain attacks). (The weak Diffie-Hellman thing is apparently not a counterexample, because there was open literature giving appropriate defensive guidance about that for many years.) Famously, Snowden said that "crypto works" and reportedly leaked very little information about novel cryptanalysis, although there's also the counterargument that he might not have had access to the relevant compartments to know how crypto doesn't work.
I've even heard the claim that most of the unknowns for pure cryptanalytic attacks are in some way now known unknowns. At least, academic understanding of the mathematical problems has really matured significantly.
What do you think about this question? If someone said "but what if NSA has the next mathematical breakthrough akin to differential cryptanalysis, which Nadia Heninger is only going to discover in 2027?", would you say "that's really implausible nowadays given the maturity of our understanding of the math here"?
(And I know Nadia mainly works on number theory and algebra rather than block ciphers.)
This is a carefully written question that deserves a better response than I can give it. My basic answer is, "I don't know". I can give you some hunches, but you should be aware that they're based on the same kinds of first-principles reasoning that I'm so skeptical about in other people's HN comments. So:
* I have been very unimpressed with the quality of NSA TAO's tooling, and, when I have conversations with people closer to NSA offensive cyber stuff than I am, what shakes out generally is that the people at the tip of that particular spear tend to be super young people in basically their first-ever software job.
* But Stuxnet was sort of impressive.
* But then, what was impressive about Stuxnet was domain knowledge about a particular piece of industrial equipment, not hardcore computer science. The software engineering details of Stuxnet were kind of unimpressive.
* But there's Flame, which is impressive in a CS kind of way.
* NSA's original advantage in these kinds of systems probably stems largely from the fact that they were a monopsony buyer of cryptographic talent for many decades. You'd expect there to be a period of catching up.
* But cryptography is now one of the better-known pipelines for applied mathematics research, particularly for people who cross over between math and CS, and there are a lot of those people, so it's hard to see why NSA's advantage would be sustainable over the long term.
* But also NSA does have a sustainable advantage in the kinds of cryptanalytic work that can only be done with massive, specialized compute resources, and for all I know, when you can casually conduct research on gigantic ASIC clusters as easily as I can fire up Sage and generate a curve point, you learn a whole bunch of stuff that is broadly applicable.
* But academics get to collaborate directly with everyone in their field and NSA not so much, which is a mitigating factor.
I think it's very healthy that we assume that NSA has space-alien capabilities. It adds rigor to our security models. And again I don't think we should use things like SIMON and SPECK.
I just don't think we have to stop talking about the engineering aspects of SIMON and SPECK simply because they came from the NSA.
For most people the reaction to something proposed by a spy agency is to immediately assume there is an ulterior motive. Your rational explanation helps control this thought process and is much appreciated.
> I have been very unimpressed with the quality of NSA TAO's tooling, and, when I have conversations with people closer to NSA offensive cyber stuff than I am, what shakes out generally is that the people at the tip of that particular spear tend to be super young people in basically their first-ever software job.
Are you trying to find out who the TAO employees on HN are? They'll be itching to defend themselves.
Just add random data so every encrypted message has to be brute-forced, and then you'll see the NSA applying for more Bluffdales, a bigger budget, and more side-channel attacks, like the Intel ME.
One thing that tends to get overlooked in these discussions is what being 'a decade ahead' really means. Suppose you go back to 2007. What ciphers are you able to break now that you couldn't then?
In the past decade, I can think of two sort-of-new general cryptanalytic attacks: invariant subspace attacks, and the division property. The division property didn't really break anything; it claimed a full break of MISTY1 with complexity 2^70, in case you happen to have the entire codebook already. It doesn't work well mostly for the same reason cube attacks haven't: most cipher designers know how dangerous a low algebraic degree can be.
The invariant subspace attack has been more successful, mainly in the lightweight space, mostly because it exploits the symmetries that tend to make a design smaller and more elegant. But once again, against vetted ciphers it has not done so well.
So let's posit the NSA does have another couple of attacks in hand we don't know about. Chances are they're not going to be very useful. Do they specifically affect SIMON and/or SPECK? It would require an intersection of conditions that seems very implausible, and it seems tricky to have it affect both designs at the same time without being noticeable. But I guess we'll know in 2027.
On another note, if I were going to make a cipher with a hidden weakness to dupe the world into using, I probably wouldn't go with a block cipher---literally the most heavily analyzed kind of primitive in the public sphere. I would probably go with a stream cipher, or a stream-like dedicated authenticated cipher, both of which are much less studied than block ciphers and can still be used in most places a block cipher would be.
Probably not, I don't know. There's a better paper from a couple of years ago [1], which even manages to include code [2]. One of the attacks on NORX [3] was essentially exploiting a bigger-than-expected invariant subspace; it might be easier to get the gist of it that way.
By the way, I have the feeling that these attacks are more a consequence of these sorts of designs becoming more popular than of any particular breakthrough in cryptanalysis. For example, the symmetry properties of the AES round were already well known long ago, but it wasn't until people started taking the AES round and building primitives out of it without adding symmetry-breaking constants that this became a problem.
Referred here from your reply to my comment [0]. Algorithms can be backdoored by way of a novel technique that defeats them, one you have not disclosed and that has not otherwise been discovered yet. We are constantly adopting and discarding encryption algorithms that have not withstood the test of time.
If someone has gotten a jump on research and found a novel attack against their math, but the math looks good enough to convince others to use, that is an enormous advantage.
And my rebuttal to that notion is that if the NSA has secret math that breaks a simplified, stripped-down, standard ARX/Feistel design, we probably have bigger problems than the NSA's preferred lightweight cipher. I'm not fond of citing Schneier, but he's an authority to a lot of people here, and look at what he has to say about Speck: that it's basically an improved version of Threefish.
The "unknowable secret math" argument works both ways. As I said upthread: if you believe this, how do you rule out the possibility that ARX designs are the ones NSA can't break, that they have secret math that only works against iterated ciphers built solely on bitwise primitives, and that they published this particular cipher --- something they rarely do! --- precisely to create the kind of suspicion we're seeing on the thread?
If you want to play Kremlinology instead of talking about engineering, arguments like that are fair game too. I'd rather rule both of them out.
Of course, this could be NSA's test of community trust and an attempt to gain some goodwill. Surely they know they are not the most popular kid on the block... :)
> if you can hide a backdoor in a simple ARX Feistel block cipher that can be implemented in 100 lines of code, we probably have bigger concerns than these algorithms.
It wasn't a backdoor, but doesn't this sound a lot like SHA-0? The NSA fixed the mistake and published SHA-1, but they didn't say why. They could just as well have designed these ciphers with a similar issue and kept it hidden.
SIMON and SPECK have been analyzed by the wider cryptography community, but in principle something like the above wouldn't be surprising from the NSA...
I'm reasonably confident, as I assume you are, that if NSA actually has a practical weakness in SPECK, they're not going to disclose it, despite having promoted it as a standard.
> That's not the case with SPECK and SIMON. They're mainstream designs, and, more importantly, if you can hide a backdoor in a simple ARX Feistel block cipher that can be implemented in 100 lines of code, we probably have bigger concerns than these algorithms.
Isn't the bigger concern that the NSA may be proposing these because they already know how to break them, and not necessarily that they'll sneak back doors into implementations?
I'm not really distinguishing between NSA knowing how to break a conventional block cipher and NSA having snuck a backdoor into it. To me, the implications are basically the same either way, as is my rebuttal.
Can you point me to some other publicly analyzed lightweight ciphers that are better choices than SPECK/SIMON? From my review of that space I got the impression that SPECK and SIMON are the more reasonable (and probably also more reviewed) designs.
If we have a fairly robust system for detecting poison in the food, and we successfully detected the poisoned food the last time they tried to poison us, and the food looks really tasty because the chef was pretty good, even if he was a poisoner... I might eat it.
I'd eat it for the same reason I prefer more used open source packages: my threat model ranks "not enough smart people looked at this" a lot higher than "only a few people in the world know a secret about this."