SHA-3 NIST announcement controversy (wikipedia.org)
77 points by tete on July 20, 2014 | 33 comments



This was a manufactured controversy if I ever saw one. The controversial changes were proposed by the Keccak team sometime after Keccak was announced as the SHA-3 winner [1], and did not originate from NIST.

The idea was to decouple the security of the hash function from its output size, and have a single parameter determining its security (the capacity). At the moment, when you have a hash function, you expect to have 2^n (second-)preimage security and 2^(n/2) collision security, where n is the output size. In the case of sponges (and Keccak), the security level also depends on c, the capacity, which is a parameter that also happens to affect performance of the hash function.

To avoid generic preimage attacks, the capacity parameter in Keccak must be 4 times the size of the desired security level; for 128 bits of security we need c = 512, for 256 we need c = 1024. Achieving collision resistance requires smaller c, only 2 times the desired security level. This results in a very slow hash function at high security, more than twice as slow as SHA-512 on x86 chips.

So the proposal was to set c = 2n, where n is the security level. This puts the preimage resistance of Keccak at the same level as its collision resistance, i.e., 2^128 preimage security for a 256-bit output, and 2^256 security for a 512-bit hash. That is, the strengths of the 3 main properties of the hash function, preimage, second-preimage, and collision-resistance are all the same. This is not what is expected out of a perfect hash function, but this is very reasonable nonetheless, and the performance of Keccak is otherwise lacking.
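To make the trade-off concrete, here is a small Python sketch (my own illustration; `sponge_security` is a made-up helper, not anything from NIST or the Keccak team) of the generic security bounds implied by the output size and the capacity, using the sponge claim that all attacks are capped at 2^(c/2):

```python
def sponge_security(output_bits, capacity_bits):
    """Generic security bounds (in bits) for a sponge-based hash.

    The sponge claim caps every generic attack at 2^(c/2); on top of
    that, preimages are bounded by the output size and collisions by
    the birthday bound on the output size.
    """
    cap = capacity_bits // 2
    preimage = min(output_bits, cap)
    collision = min(output_bits // 2, cap)
    return preimage, collision

# SHA3-256 as submitted (c = 512): 256-bit preimage, 128-bit collision
print(sponge_security(256, 512))   # (256, 128)

# Under the c = 2n proposal, 256-bit output targets n = 128 (c = 256):
# preimage and collision resistance collapse to the same level
print(sponge_security(256, 256))   # (128, 128)
```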

After the leaks, however, there was a lot of attention focused on NIST and these changes to Keccak got confused with attempted backdooring. Much protesting ensued, and the decision ended up being reverted back to having a Keccak that has 512-bit preimage security at 512 bits of output, but is disappointingly slow.

[1] http://csrc.nist.gov/groups/ST/hash/sha-3/documents/Keccak-s... (Slide 47 onwards)


> After the leaks, however, there was a lot of attention focused on NIST and these changes

There were people (myself included) complaining about this to NIST prior to the Snowden leaks.

The change would have given the SHA-256 replacement a pre-image resistance equal to its collision resistance, and there are many cryptographic protocols (e.g. hash-based signatures) where pre-image security is much more important than collision security, and where the extra size from being forced to switch to a 512-bit hash function would impose burdensome communications overhead, just to keep the same formal pre-image security as SHA-256 (while staying within a standard hash function).
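To illustrate why hash-based signatures care about preimages specifically, here is a toy Lamport one-time signature in Python (my own unhardened sketch, not a production scheme): forging a signature requires inverting the hashes in the public key, so preimage strength is exactly what the scheme leans on.

```python
import hashlib
import secrets

def H(data):
    return hashlib.sha3_256(data).digest()

def keygen(bits=256):
    # secret key: one pair of random 32-byte values per message-digest bit
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    # public key: the hash of every secret value
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def _bits(msg, n):
    d = H(msg)
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(n)]

def sign(sk, msg):
    # reveal one secret value per bit of H(msg); one-time use only
    return [sk[i][b] for i, b in enumerate(_bits(msg, len(sk)))]

def verify(pk, msg, sig):
    # a forger must find a PREIMAGE of a pk hash; collisions don't directly help
    return all(H(s) == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, _bits(msg, len(pk)))))
```

Note the size pressure the comment describes: signatures here are 256 hash-sized values, so being forced up to a 512-bit hash to keep formal preimage numbers would double the on-wire cost.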

There is a lot more complexity to this subject, in that some of the candidates chose not to meet the security requirements for these performance reasons (e.g. CubeHash) and were rejected for that reason. Had the goal been something with N/2 security across the board all along, the designs and choices from all the candidates would likely have been quite different.


There was also the proposal to fix the capacity to c = 512 across all output sizes and thus reach a security level of 256 bits for all instances (provided the output size is large enough to support the guarantee). Unfortunately this proposal didn't make it into the draft.


I liked that proposal... I think that, if not for the Snowden-assisted concerns, that proposal would probably have prevailed. The reduction of pre-image security to 128 bits would have been a step backwards compared to SHA-256, but I think no one was seriously arguing for security levels above 256 bits; even assuming quantum computers, 256 bits should be adequate against all generic attacks.

But once tampering was seriously raised as a concern, doing something as close to the original proposal as possible was much more attractive than other options.


It didn't make any sense to me to replace a reasonable hash algorithm with one that was worse in a technical sense, and eventually (given Grover & Brassard) in a practical sense too.

Keccak's software performance was not top of the pack; c=2n would still not have beaten Skein or BLAKE, let alone BLAKE2.

Prudence dictated standardising the parameters that had actually been analysed, rather than changing them after the race was won.


I think the proposed changes were a bad idea for political reasons, and arguably for technical reasons as well, but the controversy, for the internet at large, seems to stem from an insinuation that the changes were made to make it possible for the NSA to break SHA-3, and I suspect that is why this was submitted to HN.

In that sense I think this is overblown. I don't think anyone thinks NSA has a preimage attack that works for 256 bits capacity but not 512 bits. This is an argument over performance vs. future-proofing that should be fairly boring to non-cryptographers.


The quantum scenario is not particularly applicable here. Generic preimage attacks on sponges are essentially collision-finding on the capacity. Grover does not beat classical algorithms at this; Brassard-Høyer-Tapp doesn't either, if you take into account the full cost of the algorithm.

In any case, having a c = 512 ceiling for the capacity would put every attack at over 2^256 cost, which presumably is enough, and would keep the function reasonably fast. Note that c = 2n would have rivaled Skein and possibly BLAKE in terms of speed.

As for analysis, what matters is the analysis performed on the permutation. The permutation was not touched, and touching that would indeed raise alarms.


It depends on the chip you use, and the implementation; it would have rivalled Skein (due to the 80 rounds) but BLAKE was faster, and BLAKE2 is faster still.


> To avoid generic preimage attacks, the capacity parameter in Keccak must be 4 times the size of the desired security level

Even though I support your conclusion of it being a manufactured controversy, this part of your post is not correct - it's actually 2 times - and is at the heart of the discussion. A citation from the Keccak site:

> The capacity is a parameter of the sponge construction (and of Keccak) that determines a particular security strength level, in the line of the levels defined in [NIST SP 800-57]. Namely, for a capacity c, the security strength level is c/2 bits and the sponge function is claimed to resist against all attacks up to 2^(c/2), unless easier with a random oracle. - http://keccak.noekeon.org/yes_this_is_keccak.html

NIST requested a preimage resistance of 512 bits for the SHA512 replacement, which was a standard assumption for hash functions following the usual design patterns. These classical constructions try to guarantee a preimage resistance of n bits and a collision resistance of n/2 bits for an output size of n bits. These are the theoretically possible maximum values; anything above can be broken with generic attacks (this is what the quote refers to with "unless easier with a random oracle").

And then came Keccak with its new design. This new design was exactly the reason it was chosen as SHA3: to provide as much diversity as possible in case of larger cryptographic breakthroughs. Remember that MD5 and SHA1 simultaneously suffered progressively better attacks relying on the same newly discovered attack principles.

The new design of Keccak contains one security parameter for all attacks: the capacity c. If the output size is large enough to support the desired security, then all security levels are c/2 bits. Additionally, the speed of Keccak is directly proportional to 1600 - c. That means the speed for c = 512 is roughly double the speed for c = 1024. And as pointed out by others, the speed for c = 1024 is rather lousy.
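The speed ratio is easy to check: the rate, i.e. the number of message bits absorbed per call of the 1600-bit Keccak-f permutation, is 1600 - c (a quick sketch, with `rate` being my own helper name):

```python
STATE = 1600  # Keccak-f[1600] state size in bits

def rate(capacity):
    # message bits absorbed per permutation call
    return STATE - capacity

# c = 512 absorbs 1088 bits per call; c = 1024 only 576
print(rate(512), rate(1024))
print(rate(512) / rate(1024))  # ~1.89, i.e. roughly twice as fast
```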

Now, because of the requirements set forth by NIST in the original selection process, the Keccak authors chose c = 512 for the SHA256 replacement and c = 1024 for the SHA512 replacement.

I fully agree with this part:

> So the proposal was to set c = 2n, where n is the security level. This puts the preimage resistance of Keccak at the same level as its collision resistance, i.e., 2^128 preimage security for a 256-bit output, and 2^256 security for a 512-bit hash. That is, the strengths of the 3 main properties of the hash function, preimage, second-preimage, and collision-resistance are all the same. This is not what is expected out of a perfect hash function, but this is very reasonable nonetheless, and the performance of Keccak is otherwise lacking.

The argument for why it is very reasonable is that the difference between a 256-bit security level (i.e. c = 512) and a 512-bit security level is theoretical at best. Remember that a 256-bit security level means that there ought to be no attack using fewer than 2^256 steps. Currently it is believed that even 2^128 steps is unachievable for humankind in the near future. 256-bit security pushes the requirement to a number of steps that is physically impossible with the available energy in our solar system, whatever kind of fancy computing technology you invent. My favorite quote from Applied Cryptography (page 158): "brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space."

Now make this requirement 2^256 times harder and you end up with the absurdly high requirement of 512-bit security.

I find it somewhat reasonable to ask for 256-bit security to be on the safe side regarding future advancements in quantum computing, and to allow for some leeway in case there are cryptanalytical advances that might be spoiled by a higher capacity. But this extra margin of security is already fully covered by c = 512. If you want to increase the likelihood that it resists future cryptanalytical advances, there are other knobs to turn.

Specifically, each iteration of Keccak - which takes in a chunk of input - is built from several rounds, where in each round a permutation is applied to the current state. Many cryptographic primitives have a similar design; AES is built from several rounds, as is the SHA3 finalist Skein. A property of symmetric cryptography that seems somewhat magical from the outside is that even though a single round is in no way secure, if you take enough of these rounds the attacks start to fail. Often attacks are developed that can break k out of the n rounds; that means that, assuming the primitive only had k rounds, we could break it. A common rough measurement for the security margin of a cryptographic algorithm is how large the quotient k/n of breakable rounds vs. actual rounds is.

So if one wants to increase the prophesied security against future attacks, then the best place to trade away efficiency is in the number of rounds.


> Even though I support your conclusion of it being a manufactured controversy, this part of your post is not correct - it's actually 2 times - and is at the heart of the discussion. A citation from the Keccak site:

Depends how you're counting. A 512-bit hash function is supposed to have 2^512 preimage resistance, but its overall security level is necessarily 2^256 due to collisions.


There is more to the story than is in the linked article. DJB contributed CubeHash, which had limited preimage resistance due to some design decisions made for speed. This was controversial, and it was one of the reasons CubeHash was eliminated. But at the end of the competition, NIST lowered the preimage resistance requirement for the eventual standard to that of CubeHash.

In practice I don't think it would matter: the additional speed of the reduced-capacity version would be nice to have. However, many competition entries would have looked different to take advantage of this.


As I recall, this was a significant complaint. If NIST was OK with lowering the security requirements, then they should have done it at the outset and examined all the algorithms against that standard.

The move itself wasn't shady, it was the timing that raised eyebrows.


A few years back I had the chance to talk with Joan Daemen after he gave a presentation about Keccak, which hadn't won the competition yet. This was way before Snowden. He was very sceptical about the use of his work: he thought it was fun doing it, but that it didn't have any use, since everything has backdoors anyway. That's what he said. It sounded a bit paranoid to me back then, but now it sounds a lot more plausible.


It seems to be the standard for cryptographers to never endorse or recommend any particular solution, because they know that it will eventually have holes.

It's probably just that.


It's similar to how scientists won't advocate their findings as anything more than probable. For something to be scientific it must be falsifiable, so there's always the possibility of unknown errors in their methodology negating their findings.


As you'll note, they went back on these changes and the final (currently draft) SHA-3 in the FIPS-202 draft is Keccak pretty much as it was entered. They've proposed using the Keccak team's own Sakura padding - which is a pretty simple padding, also ready for use with tree-hashes.

See also: http://keccak.noekeon.org/a_concrete_proposal.html

I have no security concerns with the proposed SHA-3 drop-ins.

I am not entirely satisfied with the SHAKE XOF functions, as they didn't specify SHAKE512(M,d) = KECCAK[1024](M || 1111, d) but instead the weaker SHAKE256 and SHAKE128. Those functions won't have a problem now, but I don't think they hold up well enough post-quantum for use with, say, Merkle signatures.
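For what it's worth, the standardized XOFs are easy to experiment with; Python's hashlib exposes shake_128 and shake_256, and an XOF's shorter output is a prefix of its longer one (a quick sketch):

```python
import hashlib

xof = hashlib.shake_256(b"The quick brown fox")
short = xof.hexdigest(32)  # squeeze 32 bytes of output
longer = xof.hexdigest(64) # squeeze 64 bytes from the same stream

# extendable output: the 32-byte digest is a prefix of the 64-byte one
print(longer.startswith(short))  # True
```

Note that however much output you squeeze, SHAKE256's security is capped by its capacity of 512 bits (a 256-bit level), which is exactly the ceiling the comment above is unhappy about.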

As usual, they strongly favour hardware implementations; that's internal culture at work, there.

Software performance of SHA-3 is unfortunately not very good. The other finalists like BLAKE (or its faster successor BLAKE2), or Skein, are much more viable software contenders (and make excellent tree hashes), and no one is particularly rushing towards SHA-3 anyway, as except for the length-extension attack common to all Merkle-Damgård hashes, the SHA-2 functions seem okay for now (except for the not-entirely-undeserved stigma of having come from the NSA - that said, I don't think they're 'enabled' in any way).

Bigger problems exist than our hash algorithms, but it's good to have a few good ones under our belts for the future.


> Software performance of SHA-3 is unfortunately not very good. The other finalists like BLAKE (or its faster successor BLAKE2), or Skein, are much more viable software contenders (and make excellent tree hashes), and no one is particularly rushing towards SHA-3 anyway, as except for the length-extension attack common to all Merkle-Damgård hashes, the SHA-2 functions seem okay for now

The main reason for the choice of Keccak was not speed but diversity from the SHA2 family, which is consistent with the motivation for starting the contest in the first place: it was feared that SHA2 would fall soon after the cryptanalytical advances against MD5 and SHA1 were published. As such, Bruce Schneier, one of the authors of Skein, welcomed the decision for Keccak:

> It's a fine choice. I'm glad that SHA-3 is nothing like the SHA-2 family; something completely different is good. - https://www.schneier.com/blog/archives/2012/10/keccak_is_sha...

A few days earlier he wished the outcome to be "no award" with pretty much the same argument you gave: https://www.schneier.com/blog/archives/2012/09/sha-3_will_be...

> I am not entirely satisfied with the SHAKE XOF functions, as they didn't specify SHAKE512(M,d) = KECCAK[1024](M || 1111, d) but instead the weaker SHAKE256 and SHAKE128. Those functions won't have a problem now, but I don't think they hold up to post-quantum well enough for use with, say, Merkle signatures.

As I have written above ( https://news.ycombinator.com/item?id=8062952 ), even a security level of 256 bits is astronomically high. What attacks do you have in mind that would more than halve the strength of the hash function? And in any case, you do have SHA3-512 for exactly these high-capacity requirements. The choice of the SHAKE values was part of a compromise to let implementations use a smaller capacity, which SHA3-512 did not offer, in case you need larger output sizes.


TL;DR

SHA-3 (with very specific parameters) won the brutally audited NIST hash competition. NIST announces official SHA-3 will use different parameters that were never evaluated in the competition phase. Warning bells go off. NIST backpedals. Cue conspiracy theories due to precedent for backdoored crypto algos.


Not really. The question was which capacity to use for which hash variant - shuffling a handful of parameter choices, not switching to significantly new ones.


In all fairness, people like me who are not security researchers cannot assess whether or not such a change is problematic.

The only reasonable thing we can do is to trust the audit. Post-audit changes, from a practical point of view, mean that it regains the "black box" attribute.


tangential: if you're worried about nsa-backdoored algorithms, instead of betting hard on one particular algorithm that you happen to judge beyond reproach, you'd be better off incorporating algorithm agility into your design (of course in such a way that rules out downgrade attacks by construction).


Sorry if this is a silly question, but by "algorithm agility" do you mean abstracting away the algorithm into some module?


Protocol designs that specify which algorithms to use as part of the protocol itself (conveyed through existing trust relationships, avoiding downgrade attacks), and making as many algorithms available in your software as possible (that you believe are secure; no need for crc32). Then users are able to pick which one(s) they're willing to trust, both presently and in the future (of course this preference will most likely be provided by their operating system or other packaged environment).
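As a sketch of the idea (all names here - `ALGORITHMS`, `tagged_digest` - are hypothetical, not a real protocol), agility can be as simple as carrying an algorithm identifier alongside the digest and checking that identifier against the set the verifier actually trusts, so an attacker can't silently substitute a weaker algorithm:

```python
import hashlib

# Hypothetical registry; in a real protocol the allowed set and the
# peer's choice would be pinned via an existing trust relationship.
ALGORITHMS = {
    "sha3-256": hashlib.sha3_256,
    "sha-512": hashlib.sha512,
    "blake2b": hashlib.blake2b,
}

def tagged_digest(alg: str, data: bytes) -> bytes:
    if alg not in ALGORITHMS:
        raise ValueError(f"unsupported algorithm: {alg}")
    # Bind the algorithm id into the value itself, so a digest computed
    # under one algorithm can never be verified as another.
    return alg.encode() + b":" + ALGORITHMS[alg](data).digest()

def check(alg: str, data: bytes, tag: bytes) -> bool:
    # Verifier rejects any algorithm outside its trusted set.
    return alg in ALGORITHMS and tagged_digest(alg, data) == tag
```

Dropping an algorithm from the trusted registry is then a one-line change, which is the operational payoff of agility.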


Can the title be edited back somewhat toward the original? "Can SHA-3 be trusted?" is definitely editorializing, but the new title wipes away the context for discussing SHA-3's security.

Even a title of "SHA-3 NIST announcement controversy" would be good.


What kind of discussion of SHA-3's security are we looking for here?

Do you want HN commenters to determine if SHA-3 would have been cryptographically secure with the proposed changes? I don't think most people here are qualified to answer that.

Or do you want to know people's feelings about the NSA/NIST connection? I think that's fairly covered ground.


This is an internet message board, not a dissertation, I just want to hear people's take on it. I dunno where you stand, but I learned stuff from the comments above.


I think the title editing is unacceptable.

The problem is that I don't know of a really open system like Hacker News that has the same content and community.

Same thing with reddit.

RSS is the right type of idea but you can't comment and it has other limitations compared to things like reddit and Hacker News.

It's not that the overlords are malicious, it's just that those types of efforts are completely misguided.

I wonder if anyone knows of any open distributed unedited unmoderated systems that have features and communities like reddit or Hacker News. Ideally not even a single web site, but more like an open peer-to-peer distributed protocol with multiple clients.


The commenting system on Google Reader (despite being neither open nor distributed) is the closest I've seen to a system that acts the way I'd like online. I can imagine a distributed version that would operate e.g. with each of us publishing a feed of things we read and commented on, to which friends could subscribe and publish their own comments on... there are some interesting scaling/complexity issues but it's not insoluble. Some combination of FOAF, RSS, and trackbacks, conceptually.

I think the bigger problem is that the era of the semantic web/community standards like RSS, etc has largely passed us by. Participation now occurs on unmoderated sites like Twitter/Tumblr/etc, or on moderated community forums (and in a context where there's interesting content but I can't choose who specifically to follow, I prefer moderation).


Usenet.

Or you could cryptographically sign every single one of your posts so that people could see whether they've been edited or not. I suspect a lump of ASCIIArmor to your posts would not be popular.


Blake2 is better and much faster anyway:

https://blake2.net/
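Incidentally, BLAKE2 ships in Python's standard hashlib, including its keyed (MAC) and variable-digest-size modes, so trying it costs nothing:

```python
import hashlib

# Plain hashing with a 32-byte digest
h = hashlib.blake2b(b"hello world", digest_size=32)
print(h.hexdigest())

# Keyed mode doubles as a MAC - no separate HMAC construction needed
mac = hashlib.blake2b(b"hello world", key=b"secret key", digest_size=32)
print(mac.hexdigest())
```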


NIST has demonstrated its untrustworthiness to such a high degree, on multiple occasions, that I think it should be ignored, practically speaking.


Would that mean AES (Rijndael) is not trustworthy?


Adam Langley thinks that NIST continually picks algorithms that aren't suited to software implementations. https://www.imperialviolet.org/2012/10/21/nist.html

(In a NIST paper about the selection of AES they state: “table lookup: not vulnerable to timing attacks.” Oops. If you're building your own hardware, maybe.)



