Linux Cryptography: Speck's real standing with the academic community (spinics.net)
230 points by zx2c4 on June 2, 2018 | hide | past | favorite | 108 comments



I must be misunderstanding something - why would a standards group (or anyone with a functioning brain) accept a "secure" cipher standard developed by an organization whose founding mission, in part, is to be able to break encrypted communications?


The cryptographic community used to have a lot of trust in the NSA, and that trust was well-deserved. One of its best-known contributions was improving the resistance of DES to differential cryptanalysis, which hadn't even been discovered in the academic world yet. The NSA recommended an S-box change and provided next to no rationale for doing so. Only years later did we understand why.

Recommendations from the NSA find their way into standards that are implemented by the government itself, and that is one of the reasons the community trusted the NSA to a degree. It wasn’t seen as being in their long-term self-interest to insert backdoors that others could find into their standards.

Obviously that trust was wiped out by the DUAL_EC_DRBG debacle, although many cryptographers still believe it’s worthwhile to analyze and consider NSA ciphers on their own merits just as you would a cipher by DJB.

In my personal opinion, as a general rule, I would wager that symmetric ciphers published by them can be reasonably assumed to be free of back doors, since there are fewer degrees of freedom to insert them except through real cryptographic weaknesses which others could independently discover. Asymmetric algorithms probably shouldn't be given this benefit of the doubt, since there are enough degrees of freedom to hide a private master key of sorts (e.g., DUAL_EC_DRBG) which can't be discovered independently and can only be revealed if it's stolen directly.


> The NSA recommended an S-Box change

Which is true and commendable; however, in the exact same system they proposed weakening the key from 64 bits to 48 bits. IBM split the difference and we got 56-bit DES.


The shorter key length better reflects the actual security of the cipher - the longer key lengths were misleading.

Or at least, that was the later justification.


Theoretically, the NSA has a dual mission:

1. Gain access to information where that would be in US interests.

2. Protect US information from being accessed by others.

From their web site ( https://www.nsa.gov/about/faqs/about-nsa-faqs.shtml ):

> NSA/CSS has two interconnected missions: Signals Intelligence (SIGINT) and Information Assurance (IA). Through SIGINT, we respond to customer requirements for information relating to the plans, intentions, capabilities, and locations of foreign powers, organizations, terrorist groups, or persons, or their agents, who threaten America’s national security. Under Information Assurance, we protect our nation’s vital national security systems and information from theft or damage by others.

Whether you trust them is another question. Many argue that this dual mission creates an inherent conflict of interest. But if you believe they take both halves of it seriously, you might see them as having a legitimate interest in ensuring good crypto is available. (I'm not advocating for that position, but it doesn't seem totally irrational to me either.)


When I looked into it, they were only required to protect defense systems. For civilians, they had no strong legal requirements. They did have a mandate to spy on them, though. This is why you can't get ahold of their TEMPEST- or Type 1-certified gear, despite the nation's enemies using, with high confidence, the attacks that gear blocks. The stuff they recommend is the stuff rated by them to be easy to moderately difficult to attack.

So, the dual nature is misleading unless you're a defense contractor or agency. The SIGINT group has all the money, power, and executive backing. That last part means even IAD's honest folks will get overruled at some point or be forced to compromise somehow. There are definitely people trying, with the security guides, funding for things like Cryptol, and so on.


Yes, in theory. In practice, they've proven they are much more interested in everyone having vulnerable computers vs everyone having super-secure virtually unhackable computers.

This "dual-mission" thing reminds me of some TV networks presenting "equal sides" in a debate on climate change. In 99% of cases, the NSA will choose to keep its vulnerabilities rather than try to fix or disclose them.


This comment doesn't really add anything to the discussion; it simply restates the concern that started the subthread.


Because you don't have to believe them or even trust them: it's maths. If their design were sound and well justified, I don't see why it should be discarded. As pointed out in this email, though, that seems to be far from the case here.

After all AES is endorsed by the US government as well (although not designed by them, admittedly), yet we trust it because we have no reason not to.

While the NSA has an obvious incentive to be able to break encryption, they also have an incentive to be able to use ciphers that are fast and not easily broken. Well, I suppose ideally they'd like a cipher that only they can break, which might be what's going on here.

So while I don't think the NSA's proposals for a new cipher should be dismissed merely because they come from the NSA, they should obviously be met with the highest degree of skepticism and scrutiny. No stone left unturned. Fortunately, it seems that's exactly what happened here, and the ciphers were rejected.


I think you forgot the bit where AES was created through a competition involving academics, while Speck comes directly from the NSA. I think that is where people are having issues of trust, the NSA having muddied the waters with its recent behavior.


> Because you don't have to believe them or even trust them, it's maths

That is not true; the only way we know to assess a cipher's security is basically to attack it and add a margin of security on top of the best attack found. If the NSA has built its cipher on top of a flaw that only they know about, and third-party research hasn't found the flaw, then we're doomed.


> it's maths

That's a joke. First off, we don't even know how NIST obtained the "magical" large number that we're supposed to trust yields a safe formula for the P-256 curve:

y^2 = x^3-3x+41058363725152142129326129780047268409114441015993725554835256314039467401291

https://safecurves.cr.yp.to/
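To be fair, the curve arithmetic itself is checkable: the published generator point satisfies that equation. A quick sketch (the prime and generator coordinates below are the published NIST P-256 parameters; the objection above is to the unexplained seed behind b, not to the equation):

```python
# NIST P-256: y^2 = x^3 - 3x + b over GF(p)
p = 2**256 - 2**224 + 2**192 + 2**96 - 1
b = 41058363725152142129326129780047268409114441015993725554835256314039467401291
# Published generator point G = (Gx, Gy)
Gx = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
Gy = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5

# The generator lies on the curve: the equation checks out.
# What can't be checked is how the seed that produced b was chosen.
assert (Gy * Gy - (Gx**3 - 3 * Gx + b)) % p == 0
print("G is on the curve")
```

So "it's maths" holds for the consistency of the parameters, but not for their provenance.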

Second, the NSA refused to reveal certain technical details that they should have been able to reveal to the ISO: https://www.theregister.co.uk/2018/04/25/nsa_iot_encryption/

So it's not all "just maths". Otherwise all of these people wouldn't be so suspicious about it.

It's not all about the design, either, even if it were 100% transparent; it's about how secure the algorithm actually is. We don't really know how secure an algorithm is, even after it passes a competition or standardization process. We have to see it in the real world - but once it's in the real world and everyone adopts it, it can take 10-15 years to get rid of it from most places.

The NSA refused to reveal how Simon and Speck would resist against certain attacks. They kind of did the same with IPSEC, where they made it super-complex so that the implementers would almost always get it wrong, which means they'd leave holes in there that the NSA could exploit. This is how they muddied the waters in the standardization processes. It's their MO when they can't introduce an actual backdoor - they just design a crypto algorithm that looks okay on the surface, but hides high potential dangers:

https://www.mail-archive.com/cryptography@metzdowd.com/msg12...

Also, this seems to perfectly describe how I'd already assumed the NSA would act. Why would anyone ever trust them when they act like this? It's quite strange to still see so much support here despite this:

> When some of the design choices made by the NSA were questioned by experts, Ashur states, the g-men's response was to personally attack the questioners, which included himself, Orr Dunkelman and Daniel Bernstein, who represented the Israeli and German delegations respectively.

> Ashur further alleged that the NSA had plied the relevant ISO committee with "half-truths and full lies" in response to concerns, and said that if the American delegation had been "more trustworthy, or at least more cooperative, different alliances would have probably been formed."

> Instead, he says, "they chose to try to bully their way into the standards which almost worked but eventually backfired."

https://www.theregister.co.uk/2018/04/25/nsa_iot_encryption/


The NSA didn't make IPSEC super complex; the NSA didn't design IPSEC. The IETF did a perfectly serviceable job making IPSEC inscrutable, difficult to analyze, and complicated to implement all on its own. Accomplishing stuff like that is one of the IETF's great talents.


Because if it is possible to ascertain they are acting in good faith, there is no better collection of minds, money and official secrets suited to the task of designing a robust cipher. The problem is that it's somewhere between difficult and impossible to ascertain whether they are acting in good faith.


Their recent behavior suggests that they are not acting in good faith and, even if they were, the "between difficult and impossible to ascertain" bit should mean that they are, by default, not trustworthy.


It's important to remember the desired goal - secure cryptography, and in that case even if the NSA is a known-bad actor, they might still be the best option to achieve an optimal outcome.

There is a name for the kind of table below, but I have forgotten it. Essentially, on the balance of outcomes, a situation (of engineered choices, of course!) where the NSA is involved still produces, given unknown factors, a potentially better expected outcome than a situation where the NSA is excluded.

    Bad = 0.1, Fair = 0.5, Excellent = 1.0

    Cipher Author   |  Known Weak       |   Security    |  Likelihood   |  Security*Likelihood |
    Academia        |  No               |   Excellent   |  Bad          |  0.10
    Academia        |  Nationally       |   Fair        |  Fair         |  0.25
    Academia        |  Internationally  |   Bad         |  Fair         |  0.05
    NSA             |  No               |   Excellent   |  Fair         |  0.50
    NSA             |  Nationally       |   Fair        |  Fair         |  0.25
    NSA             |  Internationally  |   Bad         |  Bad          |  0.01

    SUM(academia) = 0.40
    SUM(nsa) = 0.76
(Does anyone know what this kind of table is called? Can someone do a version that makes more sense? It's 4am and quite a few beers were involved :))
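(The table reads as an expected-value, or decision-matrix, calculation. Redone as a quick sketch - the weights are the guesses above, not established figures:)

```python
# Expected-value calculation for the table above.
# Weights are the commenter's guesses: Bad = 0.1, Fair = 0.5, Excellent = 1.0.
W = {"Bad": 0.1, "Fair": 0.5, "Excellent": 1.0}

rows = [  # (author, known_weak, security, likelihood)
    ("Academia", "No",              "Excellent", "Bad"),
    ("Academia", "Nationally",      "Fair",      "Fair"),
    ("Academia", "Internationally", "Bad",       "Fair"),
    ("NSA",      "No",              "Excellent", "Fair"),
    ("NSA",      "Nationally",      "Fair",      "Fair"),
    ("NSA",      "Internationally", "Bad",       "Bad"),
]

def expected(author):
    # Sum of security * likelihood over that author's rows
    return sum(W[sec] * W[lik] for a, _, sec, lik in rows if a == author)

print(round(expected("Academia"), 2))  # 0.4
print(round(expected("NSA"), 2))       # 0.76
```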


Security professional here.

This isn't how security works. When the term "secure" is used, it is relative only to a threat model. Typically these threat models are implied, but differing perspectives, levels of experience, and communication barriers often mean that this imprecise term causes confusion and mistakes.

In the case of cryptography, the implied "security" of a cipher differs wildly across many different properties (the security margin of the design, the caveats to its correct use, the modes it is used with, its likelihood of being implemented properly, etc.).

One of those properties is how the integrity or confidentiality of the scheme fails against different types of adversaries - and who the trusted parties to the data security layer are.

All of this is to say that - if our threat model includes intelligence agencies and mass surveillance - the NSA is not able to provide encryption that can be trusted to be secure.


Why should academia only produce excellent security algorithms with "Bad" likelihood?


That's the wrong way to read it - academia's likelihood is bad compared to an agency whose secrets give it a better chance, and even then, that agency only receives a fair likelihood. At least this part is close to reality - per another comment, the NSA understood differential cryptanalysis years before academia did.

I guess we could add more score levels to make it more explicit, but I'm pretty sure that beer-addled table has bigger problems.


There is also no real need. Reduced-round Salsa/ChaCha (12 rounds) is tiny and likely fast enough for the quantities of data any microcontroller would need to push over a network. Those ciphers were designed in the open and have been heavily analyzed, with no attacks known against more than a few rounds. Why not use those?
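For a sense of why the ChaCha family is tiny: the entire core is repetitions of one quarter-round built from 32-bit adds, XORs, and rotates. A sketch, checked against the quarter-round test vector in RFC 8439:

```python
def rotl32(x, r):
    # Rotate a 32-bit word left by r bits
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def quarter_round(a, b, c, d):
    # ChaCha's only operations: 32-bit add, xor, rotate (ARX)
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 16)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 12)
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 8)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 8439, section 2.1.1
out = quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567)
print(["%08x" % w for w in out])
```

A full ChaCha block is just this quarter-round applied to a 16-word state, which is why it fits comfortably on small processors.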


In general that's true, but if I understood the discussion correctly the reason Speck was being considered here is because this specific application needed a fast block cipher - stream ciphers like Salsa/ChaCha were not suitable.


Multiple reasons, but a huge one is simply that, to be taken seriously, international standards bodies can't really ignore the biggest kid on the block. Even if the motives for doing so were pure as the driven snow, it would still be seen as a political decision, and now you're merely a national, or maybe regional, standards body.


It's not the ISO that has tried to politicize the standards, but the NSA and FBI, by trying to put backdoors in encryption to "catch the bad guys" = a political motive used to break everyone's security.


Speck and Simon actually seem to be very good ciphers. It's unfortunate that the NSA has tarnished its own reputation with earlier crypto mis-contributions.


The NSA has always had a shitty reputation. They are one step above the CIA who will fuck up a cup of coffee.


Well, according to coffee snobs it's really easy to fuck up a cup of coffee.


Not according to this email.


I'm not qualified to challenge Tomer Ashur's analysis of Speck and Simon, and I don't think ISO has any business standardizing those algorithms, so Ashur and I reach the same conclusion about the NSA ciphers.

But I think Ashur is overstating his case.

Ashur repeatedly alludes to the potential for backdoors in Simon and Speck, but never really engages that subject in any detail. That's unsurprising, because the NSA ciphers are both extremely simple, streamlined block cipher designs. There isn't a lot of room in them to insert a "backdoor". Ashur notes that these ciphers were sponsored by the same person who sponsored Dual EC. But Dual EC was self-evidently compatible with backdoors (that was, weirdly, a reason some people --- myself included! I was wrong! --- thought it couldn't be one: the tradecraft was just too clumsy). Dual EC is a PKRNG; embedded in the design is a public key for which we were expected to trust that no private key was known. You can't really say that about Speck, or even Simon with its U, V, and W matrices.

Could you deliberately weaken something pretty close to a trivial ARX cipher? I'm sure you could. But NSA doesn't want arbitrary weaknesses; it wants NOBUS backdoors, where it gets cryptographic protection for its own access. It's hard to see how you'd get there with a design as simple as Simon's.
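To make "close to a trivial ARX cipher" concrete, the smallest Speck variant (Speck32/64: 16-bit words, 22 rounds) fits in a few lines. This is a sketch from the public specification, checked against the test vector published in the Speck paper; there is visibly nowhere to hide a constant-based trapdoor:

```python
MASK = 0xFFFF  # 16-bit words

def ror16(x, r): return ((x >> r) | (x << (16 - r))) & MASK
def rol16(x, r): return ((x << r) | (x >> (16 - r))) & MASK

def speck_round(x, y, k):
    # The entire round function: rotate, add, xor key, rotate, xor
    x = (ror16(x, 7) + y) & MASK
    x ^= k
    y = rol16(y, 2) ^ x
    return x, y

def speck32_64_encrypt(x, y, key):
    # key = (K3, K2, K1, K0), most significant word first
    k = [key[3]]
    l = [key[2], key[1], key[0]]
    for i in range(21):  # key schedule reuses the round function,
        nl, nk = speck_round(l[i], k[i], i)  # with the round index as "key"
        l.append(nl); k.append(nk)
    for rk in k:
        x, y = speck_round(x, y, rk)
    return x, y

# Test vector from the Speck paper:
# key 1918 1110 0908 0100, plaintext 6574 694c -> ciphertext a868 42f2
ct = speck32_64_encrypt(0x6574, 0x694c, (0x1918, 0x1110, 0x0908, 0x0100))
print(["%04x" % w for w in ct])
```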

As with previous NSA cipher debates, Ashur takes NSA to task for low security margins. Why so few rounds? Why such simplistic designs? Well, that should be easy to answer: Simon and Speck are "lightweight" ciphers intended for embedded designs. They come with variants with 32-bit block sizes! This also addresses a concern I saw on this HN thread, that NSA was trying to surreptitiously replace AES with a breakable cipher. No, they're not. Even if these had been standardized, you shouldn't be using them; they have a highly specific purpose.

I think the background to this drama is mostly political. I get the sense that NSA has never really complied with the norms of academic cryptography (they're openly derisive of it). Standards groups have, in the past, given some deference to NSA's "alternative norms". That ended with Dual EC, and Simon and Speck are the first post-Dual EC ciphers where NSA is being told they're going to have to follow academic cryptography's norms. Good!

But I'd be careful reaching any conclusions past that.

Corrections welcome!


> But NSA doesn't want arbitrary weaknesses; it wants NOBUS backdoors

The NSA's handling[1] of differential cryptanalysis suggests they could consider an entire analysis technique to be (pseudo-)NOBUS. If they believed they were the only people who knew of a new cryptanalysis technique, I could see them recommending a cipher that is weak to it.

(This is only a hypothetical concern. I have not studied the specifics of Speck/Simon)

[1] Don Coppersmith, in his 1994 paper[2]: "After discussions with NSA, it was decided that disclosure of the design considerations would reveal the technique of differential cryptanalysis, a powerful technique that could be used against many ciphers. This in turn would weaken the competitive advantage the United States enjoyed over other countries in the field of cryptography."

[2] (pdf) http://simson.net/ref/1994/coppersmith94.pdf


This argument doesn't work. The NSA is pretty universally believed to have optimized the DES s-boxes for resistance to differential cryptanalysis. Obviously, at the time differential was a NOBUS weakness --- NSA would have deployed it against competing cipher designs, which is sort of the whole point of employing cryptanalysts in the first place --- but it was never a NOBUS backdoor in NSA's own design.


> DES ... it was never a NOBUS backdoor in NSA's own design

> Second, you can't argue that something is an NSA backdoor when NSA's only role in standardizing crypto relative to it is to make the standard resistant to it.

I'm not suggesting there was a backdoor in DES; I probably should have made that clearer.

> at the time differential was a NOBUS weakness

Exactly, this is what I'm talking about. This would (hypothetically) incentivise them to encourage the use of designs that are vulnerable to a cryptanalysis method that only they know about.

> Dual EC is still a NOBUS backdoor. The "nobody but us" part of it is ensured cryptographically; breaking it likely involves solving the ECDLP

The fundamental problem with that type of NOBUS backdoor is that it breaks when someone steals/blackmails/leaks the private key. This should be a concern to any agency that has already experienced many of its secrets being exfiltrated by a contractor.

edit:

> (I think people generally used to assume that was true and I think that's less and less so every year)

I generally agree - this seems unlikely in practice, although I don't have good evidence for that belief.


It occurs to me that the NSA's behavior in this case is consistent with this being a situation like the DES s-boxes. If they designed Simon & Speck to be resistant to an attack that the open crypto community doesn't know about, they'd be really reluctant to talk about their design process or their internal cryptanalysis, for fear of giving away something about their secret attack(s).

So Simon & Speck might be OK. Maybe.


Wasn't that a past NSA under different leadership? Today's NSA may behave differently.


It's not my argument that we should trust NSA. We shouldn't. It's that differential cryptanalysis isn't an example of how NSA would backdoor a cipher design, and that the nature of a NOBUS backdoor is more complex than "a weakness only the NSA knows about". Dual EC doesn't have a "weakness" so much as it has a cryptographically secure access mechanism.


If I were to place a bet I'd bet that Speck in particular is fine. It's so damn simple there is just no place to put a lever. But... it would not be a very large bet. :)


I don't think you understood the argument above.

My attempt to clarify it as I understood it:

The argument is that a unique-to-NSA analysis technique could be used to get a NOBUS backdoor, by virtue of the fact that only the NSA would be able to use that analysis to get key/message recovery, etc. This is especially true if such a technique were combined with a unique-to-NSA ability to store or process characteristics of a cipher.

The NSA likely has both unique cryptanalytic techniques and a considerable computational advantage.

The NSA has, before, refused to disclose the motivation for its designs because doing so could leak a unique-to-NSA analysis technique.

The argument (not fact) of the parent comment is that this could be the reason for NSA not disclosing its design rationale now. It could be a backdoor.

The entire purpose of the parent comment is to convey that it is conceivable to NOBUS backdoor simple symmetric ciphers with the resources of the NSA. As such, it is an appropriate response and an argument that _does_ work.

Side note: I remember (different username at the time) discussing with you here, on hacker news, whether DUAL_EC was a backdoor. Of course, that doesn't mean that there is a backdoor in this case.


I understand the argument fine; it just doesn't cohere.

First, it's self-evident that differential cryptanalysis is not a good example of a NOBUS backdoor, since it was independently discovered multiple times, including by a pair of academics. By contrast, even after BULLRUN, Dual EC is still a NOBUS backdoor. The "nobody but us" part of it is ensured cryptographically; breaking it likely involves solving the ECDLP. It's not "nobody but us until somebody figures it out".

Second, you can't argue that something is an NSA backdoor when NSA's only role in standardizing crypto relative to it is to make the standard resistant to it.

Once again: the NSA may well have many cryptanalytic techniques unknown to the academic literature (I think people generally used to assume that was true and I think that's less and less so every year). But a NOBUS backdoor isn't simply a cipher that NSA can break and nobody else can break yet. It's a cipher that the NSA can break and cannot foresee any adversary breaking within the frontiers of its own knowledge. Whatever cryptanalytic techniques NSA can discover, so too can the Russians, Israelis, French, and Chinese.

The point of a NOBUS backdoor is that it's safe to deploy on your own systems.


The NSA is a pragmatic organization and is concerned only by "nobody but us until somebody figures it out", so long as there is an operational advantage it provides to the United States.

We should also note that "nobody but us until somebody figures it out" is true also of ECDLP. I'm not really interested in that argument, because we both know exactly where it goes. Is it different than symmetric cryptography? Yes. But we're talking about symmetric cryptography.

> Second, you can't argue that something is an NSA backdoor when NSA's only role in standardizing crypto relative to it is to make the standard resistant to it.

Nobody is.

Yes, the NSA almost certainly has unique-to-NSA analytic techniques. New techniques are discovered in the academic community on a regular basis - they aren't so rare that it would be unreasonable to believe that a professional community of the world's best cryptographers, with their own cryptanalytic communities, working full time, would have discovered and honed their own. In fact, it would be pretty unusual to believe they haven't.

You misunderstand NOBUS. It's purely a pragmatic expression meaning advantage. NSA is under no illusions that their advantage in cryptography on every standard is going to last forever.

Please don't spread FUD on NOBUS.


I don't understand the distinction you're trying to draw between block ciphers and the ECDLP. Dual EC was an RNG, which is (among other things) an important symmetric primitive. I think the definition of NOBUS that you're working with is the one that's broken, not mine.

I also think you're overestimating the frequency with which powerful new techniques for attacking block ciphers occur in the literature. It's 2018 and the debate about Speck's security still hinged in part on Matsui. Fundamental new cryptanalytic techniques against block ciphers are, in fact, pretty rare.


> I don't understand the distinction you're trying to draw between block ciphers and the ECDLP. Dual EC was an RNG, which is (among other things) an important symmetric primitive. I think the definition of NOBUS that you're working with is the one that's broken, not mine.

I'm not sure where we'll get with it. I'm okay with your misunderstanding.

Please ignore that part of the comment and read the remaining content. Please don't let an objection on a small part of the content distract you from the remaining content of the argument.

> I also think you're overestimating the frequency with which powerful new techniques for attacking block ciphers occur in the literature.

You are clearly underestimating the frequency.

We both understand how heavily caveated your argument referencing Matsui is.

My fear is that your misunderstanding and underestimation will lead to FUD about backdoors, like it did last time with DUAL_EC.

It's also important to note that your argument has shifted significantly in this short exchange.

NSA only cares about NOBUS weaknesses for operational and national security advantages.

Thanks for your time.


I don't understand what you're trying to say about how "caveated" the Matsui reference is.


But didn't they do the exact opposite, and strengthen DES against the attack?


But they did not say why they wanted the S-Boxes changed.


That makes sense even if you assume the NSA is adhering to its mission: if you don't publicly disclose the existence of differential cryptanalysis and the enemy hasn't figured it out the enemy may standardize on a weak cipher. The NSA wins in both of its missions: make the US more secure, make everyone else less secure.


- I wonder how the NSA's compute resources translate into an inherent advantage over academics in terms of ability to break these codes. Especially if we are talking about pushing the limits of small margins.

- I don't think it's fair to say the background is "political" if NSA is currently not cooperating with the process. Calling it political makes it sound like their treatment is a function of the broader context or history, but it sounds like they are being treated fairly given how they have acted in this situation.


It's political because NSA never complied with academic norms, but received deference before the BULLRUN scandal.


> Well, that should be easy to answer: Simon and Speck are "lightweight" ciphers intended for embedded designs.

You are providing a rationale which the NSA, according to Ashur, was unable to provide in the design documentation. So I think you are missing the point he's trying to make: if you can't tell me why, and are just hand-waving, you are either incompetent or deliberately dismissive, and in either case you should not be trusted.


I think that part of the issue is that people (probably including the WG2) hear NSA and then ignore the lightweight part.

To some extent, lightweightness implies a lower standard for security margins. That is compounded in this case by the fact that WG2 had thrown out most of the truly useful NSA-recommended Speck/Simon parameter combinations (things like 32-bit blocks) even before the current turmoil, on the basis that they obviously provide an insufficient level of security - which is true for a general-purpose block cipher, but not for lightweight ("IoT-ish" ;)) applications.

On the other hand I somewhat wonder what motivation exactly NSA had for trying to push these algorithms through ISO.


No, if you re-read my first sentence, you'll see that I'm not suggesting we should trust NSA.


Maybe. The NSA wants algorithms to be secure against anyone but them. Perhaps this algorithm can't contain something in the style of Dual EC_DRBG, where they're effectively handing out a public key as part of the algorithm while holding the private key. However, it could also be that there are particulars of the algorithm that make it work especially well on hardware the NSA has and doesn't expect anyone else to be able to obtain.


For us, "IOT" means "Internet-connected refrigerator". From NSA's vantage point, though, it includes billions of dollars in embedded industrial equipment that is taking advantage of the same technology that's allowing us to wire up our toasters. If that equipment is insecure simply because vendors can't get AES working on a PIC or MSP embedded SOC, you have a nation-scale problem. It then starts to make sense that they'd work to provide an "officially blessed" cipher that can work in those environments.

We also forget that though we don't trust NSA (and shouldn't), American industry sure does.


From the email:

"So I think that as a first step, no-encryption is better than using Speck."

That's a damned near heretical position to take as a cryptographer, but Tomer did his homework and has done nothing less than nail his 95 theses to the door of the church of the NSA. ISO is passing on Speck for a very good reason: the NSA still believes it can cloak its unapproachable arrogance in the shroud of paternal intellectualism that Edward Snowden ripped away years ago. The NSA forgets itself. The ISO WG2 counts no less than Bernstein and Paillier among its experts. They are not the fucking peanut gallery, to be fecklessly dismissed with bureaucratic elucidations such as "I will not answer that question."


I'm normally not the type to embark on wild conspiracy theories, but in this case I don't think the NSA actually believed it could push Speck into the ISO. The way Tomer depicts it, it sounds as if they almost wanted the process to fail; they made it a point to behave especially shadily.

Perhaps, and here begins my wildly unfounded speculation, Speck was an attempt at setting a carefully crafted example of how the NSA pushes a compromised algorithm - lowering expectations of shady NSA operations and thus making their next, real attempt more credible.

But who knows.


Possible but unlikely. Bureaucracies generally aren't that subtle.

It is a good idea to be paranoid in crypto though. If I thought my adversary would be a nation state or anything else so powerful I would take a defense in depth approach of layering multiple independently developed systems using different algorithms and constructions.


Can someone point me to a concrete use case for Speck? As far as I understand, Speck is designed to use fewer resources than AES. However, even very constrained devices have at worst hundreds of kB of RAM, and even a low-end microcontroller should be able to deliver 1 kb/s of encrypted bandwidth. So I have a hard time thinking of any application where a chip designed now lacks the power to encrypt with AES but needs somewhat high-bandwidth communication.


AES is extremely slow when implemented in software. Many mobile and embedded processors do not have AES hardware acceleration. With hardware acceleration, AES becomes crazy fast, but that's because hard silicon can do crazy things.

However, Speck doesn't have a use case, as there are sensible options available. If you don't have AES hardware acceleration, ChaCha20 can get very close in software, which is one of the primary reasons ChaCha20-Poly1305 is being adopted as a common TLS cipher suite. Cloudflare has some nice benchmarks on it.


The linked mail talks about XTS constructions, so the thread is about block encryption commonly used in disk encryption schemes. They also mention that they'd have to handroll such a scheme with ChaCha. I assume that Speck offers a native wide block mode which would make a better candidate for software disk encryption if it weren't for the backdoor concerns.


That is fair, but that is by no means an argument for Speck, but instead just an argument for a block mode cipher.

What they would need to do would be one of: (1) find a different sensible block cipher; (2) roll a ciphertext-stealing mode of operation like XTS for a stream cipher (which is not that complicated, although one should do their homework first); or (3) use a construct to convert a stream cipher into a block cipher so it can use XTS directly (which is possible, but care must be taken).

#2 would probably be best. I am not sure if it's a trend, but I seem to mostly see new stream ciphers popping up, so this solution is likely more sensible.
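To make the structure concrete: here is a rough sketch of the XTS tweaked-codebook layout over a generic 128-bit block cipher. Everything here is illustrative - the function names are made up, and the `E` in the usage line is a toy affine map, not a real cipher; real XTS uses AES and also specifies ciphertext stealing for partial blocks.

```python
def gf_double(t: int) -> int:
    # Multiply the 128-bit tweak by x in GF(2^128), using the
    # reduction polynomial XTS specifies (x^128 + x^7 + x^2 + x + 1).
    t <<= 1
    if t >> 128:
        t = (t & ((1 << 128) - 1)) ^ 0x87
    return t

def xts_encrypt_sector(E, k1, k2, sector: int, blocks: list) -> list:
    # E(key, block) -> block: any 128-bit block cipher (AES in real XTS).
    # Each sector gets its own tweak, so identical plaintext blocks
    # encrypt differently per sector and per position -- without
    # needing to store a nonce anywhere on disk.
    t = E(k2, sector)              # tweak = E_k2(sector number)
    out = []
    for p in blocks:
        out.append(E(k1, p ^ t) ^ t)
        t = gf_double(t)           # next block position: tweak * x
    return out

# Toy "cipher" purely to exercise the structure -- NOT secure:
E = lambda k, b: (b * 0x10001 + k) % (1 << 128)
ct = xts_encrypt_sector(E, 1, 2, sector=7, blocks=[0, 0])
assert ct[0] != ct[1]  # same plaintext block, different position
```

The point of the per-position tweak is exactly what makes XTS attractive for disk encryption: the sector number acts as a free "IV" that never has to be stored.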


We have a stream cipher based proposal, but it's a much bigger change and needs much more review, so it can't be landed for P. A drop-in replacement for AES is a much simpler change to make.


ChaCha20 gets very close in software on machines that have fast 32-bit ARX and can handle ChaCha20's key size. Part of the point of lightweight ciphers is to provide "as secure as possible" variants with smaller keys and tiny block sizes, workable on 8- and 16- bit embedded processors.

Speck is far from the only lightweight cipher design, and the other designs don't get this kind of shade from academic cryptographers. In fact, Bernstein designed one, based on the ChaCha20 design: Gimli.

https://gimli.cr.yp.to/gimli-20170627.pdf


It's worth bearing in mind that it's poorer people and people in poorer countries who don't have AES-supporting hardware, and are currently without encryption as a result.


Note that this is only true for smartphones, and the gap is likely closing.

However, even with AES-supporting hardware arriving in cheap crap Android phones, there's too little focus on performance on cheap crap phones and on the terrible network coverage that is the reality in by far the majority of the world.


This is a super niche use case, but maybe this:

For DRM, you may be interested in using a cipher that is both easy to implement (though the spec makes this surprisingly confusing because they can't keep their orders of stuff straight), relatively fast (because you might be running it through a VM) and has no known constants that could be used to identify it quickly.


> hundreds of kB of RAM

You're not looking low enough! (and I don't think the author is either). There are tons of super low power devices going into service with 1kB of RAM or less. Think of remote sensing, where 5-10 years of battery life is common.


Just the heritage and reputation of the NSA in such matters should be enough to avoid using NSA ciphers.

You don’t go back to a restaurant if you see someone spitting on your food.


No, no. You don't understand. That's the other side of the kitchen that was spitting in people's food last week. Your friendly waiter obviously got your food from the clean side this week! And it's clearly offensive to even ask whether your plate could have gotten near the known spitters, which is why he refuses to answer.


LOL -- you didn't read the article did you?

> The NSA (in particular, the exact same person who previously promoted DUAL_EC in ISO) proposed to include Simon & Speck

The author is pointing out that the same person spit in your food before.


I'm nearly certain the parent commenter was being sarcastic.


The original analogy was good - the commenter diminishes it by trying to portray the NSA person responsible as a different actor. This is the same person who has burned the community before. That old adage about fooling someone once applies here, and for good reason.


Focusing responsibility on the individual NSA agent is a red herring - the entire organization is the hostile party. Would you judge the situation differently if it were a different operative pushing Speck?

The asset themselves probably isn't even privy to the actual existence of a backdoor, due to compartmentalization and for the sake of their own sincerity.

They're more akin to the waiter. A waiter that doesn't want to think too hard about whether your food was spit in, as their job depends on not knowing. How are you doing today? smile


This is my entire original point. Merely being associated with the organisation taints the product, whether it is 100% legitimate or not.


That's the waiter.


Unfortunately, the NSA also happens to be the single largest, best-funded collection of cryptographers in the world (probably - who knows how big 3PLA/4PLA are), and do have a history of mostly-beneficial relationships with the crypto community. If you ignore NSA ciphers, you may just be trading vulnerability to the NSA for vulnerability to everyone.


Really? Name one NSA cipher that is better than an equivalent academic-designed cipher? DES? SHA2?


You get original DES, I get DES with the NSA-designed S-boxes. Want to make a bet about whose messages get cracked first?


By the NSA?


By anyone, but if adversary was NSA at the time of adoption of their suggested S-boxes then I still win.


> If you ignore NSA ciphers, you may just be trading vulnerability to the NSA for vulnerability to everyone.

Rijndael is not an NSA cipher.


These days I would not use any cipher pushed by any government unless the entire design process itself was done in the open. Chinese and Russian ciphers inspire even less confidence.


You have to do 3 rounds with these different ciphers, so all 3 stooges have to get along to decrypt your communication.


It seems to me that the NSA has someone who came up with this cipher, didn't justify it very well, and then pushed hard for it. The NSA's reputation is so bad that this just won't fly, and unless they justify themselves then they just won't get a look in.

Which is fine really, because it's no different to anyone else proposing a cipher to the ISO committee. In this case, the issue could well be that the NSA tried to use its preexisting muscle to shove through their standard without a proper design and rationale document.

Of course, it could also be the case that they deliberately weakened their own cipher. Either way, it's not a good look.


There's some missing context if you read just Dr. Ashur's email and not the rest of the thread. The reason I added Speck to the Linux kernel's crypto API is unrelated to the proposed/rejected ISO standard, but rather because Speck128-XTS is being considered for disk/file encryption on low-end Android devices. This is a very important use case which has, regrettably, received much less attention than it deserves. Currently the only options allowed to Android vendors are AES-CBC-ESSIV and AES-XTS, which are much too slow on low-end processors, especially when AES instructions (ARM CE) are absent. Therefore, currently encryption isn't mandatory until 50+ MB/s AES performance. This disproportionately penalizes people who can't afford the higher end devices, who end up with no encryption. This is wrong: encryption should be for everyone.

Some have argued this problem will go away with new CPUs that can do AES faster. This is probably the "right" solution. But in practice this will require ARM CE (AES instructions). Unfortunately, this is an optional processor extension and it will be _at least_ several years before all relevant processors have it, if they ever all do. Note that this requires moving the whole industry, including not just device vendors but also the SoC and processor vendors they rely on; and devices are usually planned years in advance, with price, performance, and power efficiency being the main concerns, rather than encryption. So, it is tough and very slow, and a software solution could be in place much sooner. Plus, in any case it would be valuable to have an efficient cipher in software, in case a weakness is found in AES.

Why Speck128-XTS? Well, after extensive research it actually seems to be the best option from a technical perspective, considering many security and performance aspects; see e.g. https://www.spinics.net/lists/linux-crypto/msg33000.html for details. Again, this is specifically for the disk/file encryption use case on processors without AES instructions. The fact that there isn't a less controversial option is really a consequence of the current state of the art, and not (as far as I can tell) just because we haven't done our homework. Most critically, in the disk/file encryption use case there is no space to store a nonce; thus, stream ciphers such as ChaCha20 are inappropriate, as IVs are reused when data is overwritten, and with flash storage and/or f2fs an attacker may even be able to recover from the "disk" multiple versions of data written to a particular logical block offset, even after only a single point-in-time offline compromise. Stream ciphers fail much more catastrophically than XTS here. (It's unfortunate how many "crypto people" seem to be unfamiliar with the problems and constraints of practical disk encryption.)
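The IV-reuse failure mode described above is easy to demonstrate: with any stream cipher, re-encrypting the same logical block under the same (key, nonce) leaks the XOR of the old and new plaintexts. A minimal sketch, using a toy SHA-256 counter-mode keystream as a stand-in for ChaCha20 (the names and construction here are illustrative only):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream; a stand-in for a real stream
    # cipher like ChaCha20, just to illustrate the structure.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "little")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"k" * 32
nonce = b"\x07" * 12   # in disk encryption, fixed by the block offset

old = b"old secret contents of block 7!!"
new = b"new secret contents of block 7!!"

ct_old = xor(old, keystream(key, nonce, len(old)))
ct_new = xor(new, keystream(key, nonce, len(new)))

# An attacker who recovers both versions of the block learns the
# plaintext XOR without ever touching the key:
assert xor(ct_old, ct_new) == xor(old, new)
```

This is why a length-preserving mode like XTS degrades far more gracefully in this setting: overwriting a block only reveals whether the block changed, not a relation between the plaintexts.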

Of course, even with kernel support available, no Android device will actually use Speck until it is actually added to the CDD. That may or may not actually happen, and isn't my call. Given the increased level of controversy, it may very well be punted on for this year's Android release. Still, the alternative of no encryption is not okay, so in parallel we've also designed a new length-preserving encryption construction ("HPolyC") based on XChaCha and Poly1305, which will be published soon. Hopefully the wider crypto community will also step up to help review this construction and even publish other new software-optimized disk encryption algorithms, which are greatly needed. (And separately, perhaps the Speck team can better rise to the occasion of the, arguably disproportionate but perhaps well-deserved, level of scrutiny they are receiving and really set the gold standard for crypto proposals. Although I still find their latest paper to be of higher quality than you find from other designers, it evidently still has room for improvement; and crypto needs to be held to exceptionally high standards in any case.)


The headline should be "Ashur advocates that it is better that a device be unencrypted, than encrypted with Speck". Ashur argues for that position and you can read his reasoning in detail in the above link, but that's the question you want to be thinking about when thinking about this.


Fool me once, shame on you. Fool me twice ...


I know this one - that's a replay attack, right?


That's not the saying. Its, "Fool me once, shame on — shame on you. Fool me — you can't get fooled again."

It's an old Tennessee saying. Or I think it's in Tennessee, I know it's in Texas somewhere...


Why are they changing?


Because Speck is said to be more performant than AES. This is discussed in the mail.


Why not use ChaCha20? I believe it's twice as fast, at least in software.


Because ChaCha20's block might not even fit into memory on the platforms that Speck/Simon are meant for. The question should be: why not use XTEA or RC5/6?

And the answer (ignoring the NSA and its opaqueness) is that Speck (in its reference implementation form) is slightly faster than both XTEA and RC5, can be trivially implemented such that it is twice as fast, and also uses 64b words in its 128b block size variants, which on typical 64b platforms leads to another essentially free 2x speedup. (Not that it makes sense to use Speck for bulk encryption on a typical 64b CPU.)

On the other hand, a team related to DJB published an algorithm called Gimli (https://gimli.cr.yp.to/), which is essentially a narrower ChaCha20 transformation with intentionally slow diffusion, so as to fit into the registers of register-constrained CPUs (like Thumb), and is intended as a Keccak-style sponge permutation. In my opinion, though, the authors' paper and slides have a similar issue to part of what the original article criticizes the NSA for. The security arguments for Gimli are considerably better than the NSA's arguably non-existent ones, but the biggest issue I intuitively have with Speck is the slow (and somewhat one-way) diffusion of its key schedule, and Gimli's 4-round function is somewhat similar or even worse in this respect (although it does not have the "onewayness", for lack of a better word).
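For reference on just how small Speck is: the whole round is one modular add, two rotates, and two XORs, and the key schedule reuses the round function with the round counter as the round key. A minimal Python sketch of Speck128/128, with word size, rotation constants, round count, and test vector taken from the published specification (illustration only; not constant-time, not for production use):

```python
MASK64 = (1 << 64) - 1

def ror(x, r):  # rotate a 64-bit word right by r
    return ((x >> r) | (x << (64 - r))) & MASK64

def rol(x, r):  # rotate a 64-bit word left by r
    return ((x << r) | (x >> (64 - r))) & MASK64

def speck_round(x, y, k):
    # The entire round function: rotate, add, XOR key; rotate, XOR.
    x = ((ror(x, 8) + y) & MASK64) ^ k
    y = rol(y, 3) ^ x
    return x, y

def speck128_128_encrypt(x, y, l, k):
    # Key is (l, k); the key schedule is the round function itself,
    # keyed with the round counter i.
    for i in range(32):
        x, y = speck_round(x, y, k)   # encrypt with round key k_i
        l, k = speck_round(l, k, i)   # derive round key k_{i+1}
    return x, y

# Test vector from the Speck specification:
assert speck128_128_encrypt(
    0x6c61766975716520, 0x7469206564616d20,   # plaintext
    0x0f0e0d0c0b0a0908, 0x0706050403020100,   # key
) == (0xa65d985179783265, 0x7860fedf5c570d18)
```

The slow key-schedule diffusion mentioned above is visible here: each round key depends on the previous one through just a single application of the round function.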


We need a drop-in replacement for AES, and Gimli doesn't have a 128-bit block size.


See ebiggers' comment above - ChaCha20 isn't on its own suitable for disk encryption. We need something that is a drop-in replacement for AES to use in XTS mode.


They're not. These are "lightweight" ciphers intended for embedded applications with very little processing power and memory for program code; they're for applications where AES wouldn't be workable.


They are not. These ciphers were rejected. AES is fine and we have great alternatives like Salsa/ChaCha too.


It seems like everyone wants a set of keys to the world.


>So I think that as a first step, no-encryption is better than using Speck

Yeah so the guy just throws everything out of the window.


Ashur points out the reasoning here just before this statement: 'I would also like to point out that including an algorithm because "it's better than nothing" result in something that is not better-than-nothing, but stands in the way of good solutions.' and 'From the end-user point of view when they get something bundled into Android, they don't know that it was included there as something that is "better than nothing". They think of it as "good enough; endorsed by Android/Google/Linux". What you give them is a false sense of security because they don't know of all the question marks surrounding Speck (both technical and political).'

Edit: also some very good comments in there along the lines of 'why not use AES?', and that the problem that Speck is solving might not be with us for very long.


He is only throwing out Speck and saying it is worse than no-encryption.


He's throwing out all of his reputation with that claim also.

He should not hold any Senior Researcher positions if this message is anything to go by.


Could you please address and refute his reasoning (provided in the prior paragraph), so as to encourage discussion?


Oh there are so many reasons. The easiest is the adoption problem, the same as hit HTTPS: at least 40% of top websites are still unencrypted even after a massive push from browsers to muscle them into submission; compare to the fact that with no massive push 90% of the ones that do support HTTPS don’t support SSL (they only support TLS, the old standard having severe security flaws). In other words there are concrete numbers to back up the point that when an algorithm is learned to be broken, it is easy to convince panicking folks to upgrade it. But when it simply doesn't exist and hasn't ever, it is harder to convince folks that they need it.

Even more to the point, attacks do not exist in isolation and there is no “secure/insecure” dichotomy. You can see this with AES where the AES-256 key schedule is much weaker than the AES-192 key schedule, but the place where this really shines is in related-key attacks, where AES-192 has about twice the bit-strength of AES-256. Does that mean you should switch over? Well, probably not: related key attacks are structurally harder to pull off.

The qualitative differences make comparison hard. Probably if the NSA had an exploit for Speck for example, it would take the form of "steal the device running Speck for an hour to get some carefully chosen challenge-response pairs, return it back to the user, budget a day or two of time on the supercomputer to look for patterns in that data, and finally you can recover the key." (I say this in part because it’s hard to hide backdoors in symmetric crypto.) Many applications would be more secure than nothing if that were the only applicable threat model.

But even quantitatively, in theory security is not a binary. It's much closer to a dollar amount: here is how much it costs to launch attacks against this system. And $0 on this scale is certainly lower than a serious vulnerability at the $100,000 level.

Basically, this is a solid argument for why you should not roll your own crypto, but those arguments are categorically not transferable to crypto whose specifications are openly known and the subject of active cryptanalysis.


the ones that do support HTTPS don’t support SSL (they only support TLS, the old standard having severe security flaws)

Isn't TLS the successor to SSL?


Yes, but I think your parent's point is that the sites which enable HTTPS _did_ choose to remove support for known-broken protocols versions, so you've got on the one hand people who cared at all, all doing something vaguely modern and secure, and people who did nothing (plain HTTP), with no security.

You can think of TLS 1.0 as essentially SSLv3.1, with TLS 1.1 and TLS 1.2 then as SSLv3.2 and SSLv3.3

And you might think of this as the normal course of any versioned system - tinier and tinier changes - except TLS 1.3 (now awaiting publication) is basically completely fresh. It only looks similar on the wire until encryption switches on, in order to maximise compatibility with legacy middleboxes; once it gets past the middleboxes it's nothing like SSLv3.


I believe what he meant by that was that without encryption, you are aware everything you send might be intercepted and read by a third-party, whereas with a flawed encryption algorithm you might send sensitive information under the wrong assumption that it is safe from attack.


Out of curiosity, were Speck to be known as vulnerable to key recovery attacks, would you still endorse it as better than no encryption?

Personally, I agree with his point - for applications where AES is unsuitable and no encryption is currently in use, it is better to wait for hardware to support AES (or an alternative to Speck to become available) rather than implement an algorithm that may lead to expanded vulnerability of the device via key recovery.


Sorry to barge in from the outside, but your question is vaguely specified.

> Out of curiosity, were Speck to be known as vulnerable to key recovery attacks, would you still endorse it as better than no encryption?

My answer to this would be: "it depends". In this case, it depends on the security margin and just how much easier the key recovery attacks would be compared to brute force. For a cipher with 128-bit key, if the best known attack, after years of careful and aggressive cryptanalysis, required 2^122 work I probably would use it. If the best attack required 2^90 work, I wouldn't use it.

After all we are talking about security margins and risk tolerance.


So just after ISO rejected Simon and Speck, NIST is announcing a new crypto competition just for algorithms like Simon and Speck. How convenient for the NSA. Let's see how it goes; maybe this time they will be able to shove it through just like DUAL_EC:

"NIST Issues First Call for ‘Lightweight Cryptography’ to Protect Small Electronics"

https://www.nist.gov/news-events/news/2018/04/nist-issues-fi...


Since it's unlikely that Simon and Speck will win this competition, given that there are other credible lightweight designs which will probably be better-analyzed since they're open, it's hard for me to see this as anything but a good thing.



