The idea was to decouple the security of the hash function from its output size, and have a single parameter determining its security (the capacity). At the moment, when you have a hash function, you expect to have 2^n (second-)preimage security and 2^(n/2) collision security, where n is the output size. In the case of sponges (and Keccak), the security level also depends on c, the capacity, which is a parameter that also happens to affect performance of the hash function.
To avoid generic preimage attacks, the capacity parameter in Keccak must be 4 times the desired security level: for 128 bits of security we need c = 512, for 256 we need c = 1024. Achieving collision resistance requires a smaller c, only 2 times the desired security level. This results in a very slow hash function at high security levels, more than twice as slow as SHA-512 on x86 chips.
So the proposal was to set c = 2n, where n is the security level. This puts the preimage resistance of Keccak at the same level as its collision resistance, i.e., 2^128 preimage security for a 256-bit output, and 2^256 security for a 512-bit hash. That is, the strengths of the 3 main properties of the hash function, preimage, second-preimage, and collision-resistance are all the same. This is not what is expected out of a perfect hash function, but this is very reasonable nonetheless, and the performance of Keccak is otherwise lacking.
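The trade-off being described follows from the sponge's generic bounds: nothing survives past roughly 2^(c/2), preimages past 2^n, collisions past 2^(n/2). A minimal sketch of the arithmetic (the function name and framing are mine, not from any standard):

```python
def sponge_security(capacity, output_bits):
    """Generic (log2) security levels for a sponge-based hash,
    per the flat sponge claim: no attack cheaper than 2^(c/2)."""
    preimage = min(output_bits, capacity // 2)
    collision = min(output_bits // 2, capacity // 2)
    return preimage, collision

# Original c = 2n parameters for a 256-bit output:
assert sponge_security(512, 256) == (256, 128)
# The proposed c = 2s variant (s = 128-bit security level):
assert sponge_security(256, 256) == (128, 128)
# A c = 512 ceiling with a 512-bit output:
assert sponge_security(512, 512) == (256, 256)
```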
After the leaks, however, there was a lot of attention focused on NIST and these changes to Keccak got confused with attempted backdooring. Much protesting ensued, and the decision ended up being reverted back to having a Keccak that has 512-bit preimage security at 512 bits of output, but is disappointingly slow.
 http://csrc.nist.gov/groups/ST/hash/sha-3/documents/Keccak-s... (Slide 47 onwards)
There were people (myself included) complaining about this to NIST prior to the Snowden leaks.
The change makes pre-image resistance equal to collision resistance, and there are many cryptographic protocols (e.g. hash-based signatures) where pre-image security is much more important than collision security, and where the additional size of being forced to switch to a 512-bit hash function would result in burdensome communication overheads, just to keep the same formal security as SHA-256 (while staying with a standard hash function).
There is a lot more complexity to this subject, in that some of the candidates chose not to meet the security requirements for exactly these performance reasons (e.g. CubeHash) and were rejected for it. Had the goal been N/2 security across the board all along, the designs and choices from all the candidates would likely have been pretty different.
But once tampering was seriously raised as a concern, doing something as close to the original proposal as possible was much more attractive than other options.
Keccak's software performance was not top of the pack; c=2n would still not have beaten Skein or BLAKE, let alone BLAKE2.
Prudence dictated standardising the parameters that had actually been analysed, rather than changing them after the race was won.
In that sense I think this is overblown. I don't think anyone believes the NSA has a preimage attack that works at a 256-bit capacity but not at 512 bits. This is an argument over performance vs. future-proofing that should be fairly boring to non-cryptographers.
In any case, having a c = 512 ceiling for the capacity would put every attack at over 2^256 cost, which presumably is enough, and would keep the function reasonably fast. Note that c = 2n would have rivaled Skein and possibly BLAKE in terms of speed.
As for analysis, what matters is the analysis performed on the permutation. The permutation was not touched, and touching that would indeed raise alarms.
Even though I support your conclusion of it being a manufactured controversy, this part of your post is not correct - it's actually 2 times - and is at the heart of the discussion. A citation from the Keccak site:
> The capacity is a parameter of the sponge construction (and of Keccak) that determines a particular security strength level, in the line of the levels defined in [NIST SP 800-57]. Namely, for a capacity c, the security strength level is c/2 bits and the sponge function is claimed to resist against all attacks up to 2^(c/2), unless easier with a random oracle. - http://keccak.noekeon.org/yes_this_is_keccak.html
NIST requested a preimage resistance of 512 bits for the SHA-512 replacement, which was a standard assumption for hash functions following the usual design patterns. These classical constructions aim to guarantee a preimage resistance of n bits and a collision resistance of n/2 bits for an output size of n bits. These are the theoretical maximum values; anything above can be broken with generic attacks (this is what the citation refers to with "unless easier with a random oracle").
And then came Keccak with its new design. This new design was exactly the reason why it was chosen as SHA3, to provide as much diversity as possible in case of larger cryptographic breakthroughs. Remember that MD5 and SHA1 simultaneously suffered progressive attacks relying on the same newly discovered attack principles.
The new design of Keccak contains one security parameter for all attacks: the capacity c. If the output size is large enough to support the desired security, then all security levels are c/2 bits. Additionally, the speed of Keccak is directly proportional to 1600 - c, so the speed for c = 512 is roughly double the speed for c = 1024. And as pointed out by others, the speed for c = 1024 is rather lousy.
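The speed claim can be checked with the rate arithmetic: Keccak-f[1600] absorbs r = 1600 - c bits of input per permutation call, so throughput is roughly proportional to r (a rough model that ignores per-call overheads):

```python
STATE_BITS = 1600  # width of the Keccak-f[1600] permutation

def rate(capacity):
    # bits of input absorbed per permutation call
    return STATE_BITS - capacity

# c = 512 absorbs 1088 bits per call; c = 1024 only 576:
speedup = rate(512) / rate(1024)
assert round(speedup, 2) == 1.89   # roughly double the speed
```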
Now because of the requirements set forth by NIST in the original selection process the Keccak authors chose c=512 for the SHA256 replacement and c=1024 for the SHA512 replacement.
I fully agree with this part:
> So the proposal was to set c = 2n, where n is the security level. This puts the preimage resistance of Keccak at the same level as its collision resistance, i.e., 2^128 preimage security for a 256-bit output, and 2^256 security for a 512-bit hash. That is, the strengths of the 3 main properties of the hash function, preimage, second-preimage, and collision-resistance are all the same. This is not what is expected out of a perfect hash function, but this is very reasonable nonetheless, and the performance of Keccak is otherwise lacking.
The argument for why it is very reasonable is that the difference between a 256-bit security level (i.e. c = 512) and a 512-bit security level is theoretical at best. Remember that a 256-bit security level means there ought to be no attack using fewer than 2^256 steps. It is currently believed that even 2^128 steps is unachievable for humankind in the near future, and 256-bit security pushes the requirement to a number of steps that is physically impossible with the available energy in our solar system, whatever kind of fancy computing technology you invent. My favorite quote from Applied Cryptography (page 158): "brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space."
Now make this requirement 2^256 times harder and you end up with the absurdly high requirement of 512-bit security.
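To put numbers on "absurdly high", a back-of-the-envelope calculation (the 10^18 guesses-per-second attacker is an arbitrary assumption for illustration):

```python
guesses_per_sec = 10**18          # generous hypothetical attacker
seconds_per_year = 60 * 60 * 24 * 365

years_128 = 2**128 / guesses_per_sec / seconds_per_year
years_256 = 2**256 / guesses_per_sec / seconds_per_year

assert years_128 > 10**13   # over ten trillion years already
assert years_256 > 10**51   # 2^256 is 2^128 times harder again
```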
I find it somewhat reasonable to ask for 256-bit security to be on the safe side regarding future advances in quantum computing, and to allow some leeway in case there are cryptanalytical advances that might be spoiled by a higher capacity. But this extra margin of security is already fully covered with c = 512. If you want to increase the likelihood that it resists future cryptanalytical advances, there are other knobs to turn.

Specifically, each iteration of Keccak - which takes in a chunk of input - is built out of several rounds, where in each round a permutation is applied to the current state. Many cryptographic primitives have a similar design: AES is built from several rounds, as is the SHA-3 finalist Skein. A property of symmetric cryptography that seems somewhat magical from the outside is that even though a single round is in no way secure, if you take enough of these rounds the attacks start to fail. Often attacks are developed that can break k out of the n rounds, meaning that if the primitive had only k rounds, we could break it. A common rough measurement of the security margin of a cryptographic algorithm is the quotient k/n of breakable rounds vs. actual rounds.
So if one wants to increase the prophesied security against future attacks, the best place to trade away efficiency is the number of rounds.
Depends how you're counting. A 512-bit hash function is supposed to have 2^512 preimage resistance, but its overall security level is necessarily 2^256 due to collisions.
In practice I don't think it would matter: the additional speed of the reduced capacity version would be nice to have. However, many competition entries would look different to take advantage of this.
The move itself wasn't shady, it was the timing that raised eyebrows.
It's probably just that.
See also: http://keccak.noekeon.org/a_concrete_proposal.html
I have no security concerns with the proposed SHA-3 drop-ins.
I am not entirely satisfied with the SHAKE XOF functions, as they didn't specify SHAKE512(M,d) = KECCAK(M || 1111, d), but instead only the weaker SHAKE256 and SHAKE128. Those functions won't have a problem now, but I don't think they hold up well enough post-quantum for use with, say, Merkle signatures.
As usual, they strongly favour hardware implementations; that's internal culture at work, there.
Software performance of SHA-3 is unfortunately not very good. The other finalists, like BLAKE (or its faster successor BLAKE2) and Skein, are much more viable software contenders (and make excellent tree hashes). No one is particularly rushing towards SHA-3 anyway: except for the length-extension attack common to all Merkle-Damgård hashes, the SHA-2 functions seem okay for now (despite the not-entirely-undeserved stigma of having come from the NSA; that said, I don't think they're 'enabled' in any way).
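For readers unfamiliar with the length-extension attack mentioned above, here is a toy Merkle-Damgård construction (the compression function, block size, and padding are all made up for illustration; this is not real SHA-256 internals) showing how knowing H(secret || msg) lets an attacker compute a valid hash of an extended message without knowing the secret:

```python
import hashlib

BLOCK = 16            # toy block size
IV = b"\x00" * 32     # toy initial chaining value

def compress(state, block):
    # toy compression function built from SHA-256 (illustration only)
    return hashlib.sha256(state + block).digest()

def pad(total_len):
    # toy MD-strengthening: marker byte, zeros, then 8-byte length
    padlen = (-(total_len + 9)) % BLOCK
    return b"\x80" + b"\x00" * padlen + total_len.to_bytes(8, "big")

def md_hash(msg, state=IV, already_processed=0):
    data = msg + pad(already_processed + len(msg))
    for i in range(0, len(data), BLOCK):
        state = compress(state, data[i:i + BLOCK])
    return state

# The victim hashes a secret-prefixed message:
secret = b"secret-key"
msg = b"user=alice"
tag = md_hash(secret + msg)

# The attacker, knowing only `tag` and len(secret + msg), continues
# hashing from the published state to append data of their choosing:
ext = b"&admin=true"
glue = pad(len(secret) + len(msg))
forged = md_hash(ext, state=tag,
                 already_processed=len(secret) + len(msg) + len(glue))

# ...which matches a legitimate hash of the extended message:
assert forged == md_hash(secret + msg + glue + ext)
```

Sponge constructions like SHA-3 avoid this because the output reveals only the rate portion of the state, never the capacity portion, so an attacker cannot resume the absorbing phase.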
Bigger problems exist than our hash algorithms, but it's good to have a few good ones under our belts for the future.
The main reason for the choice of Keccak was not speed but diversity from the SHA-2 family, which is consistent with the motivation for starting the contest in the first place: it was feared that SHA-2 would fall soon after the cryptanalytical advances against MD5 and SHA-1 were published. As such, Bruce Schneier, one of the authors of Skein, welcomed the decision for Keccak:
> It's a fine choice. I'm glad that SHA-3 is nothing like the SHA-2 family; something completely different is good. - https://www.schneier.com/blog/archives/2012/10/keccak_is_sha...
A few days earlier he wished the outcome to be "no award" with pretty much the same argument you gave: https://www.schneier.com/blog/archives/2012/09/sha-3_will_be...
> I am not entirely satisfied with the SHAKE XOF functions, as they didn't specify SHAKE512(M,d) = KECCAK(M || 1111, d), but instead only the weaker SHAKE256 and SHAKE128. Those functions won't have a problem now, but I don't think they hold up well enough post-quantum for use with, say, Merkle signatures.
As I have written above ( https://news.ycombinator.com/item?id=8062952 ), even 256-bit security is astronomically high. What attacks do you have in mind that would more than halve the strength of the hash function? And in any case, you do have SHA3-512 for exactly these high-capacity requirements. The choice of the SHAKE values was part of a compromise allowing implementations to use the smaller capacity that SHA3-512 did not offer, in case you need larger output sizes.
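For context on the post-quantum worry: the usual rough model (an assumption; it counts only Grover's generic square-root speedup and ignores structural attacks) halves the effective preimage security of each SHAKE variant:

```python
def pq_preimage_bits(capacity):
    # flat sponge claim gives c/2 bits classically;
    # Grover's search roughly square-roots that cost
    classical = capacity // 2
    return classical // 2

assert pq_preimage_bits(256) == 64    # SHAKE128 (c = 256)
assert pq_preimage_bits(512) == 128   # SHAKE256 (c = 512)
```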
SHA-3 (with very specific parameters) won the brutally audited NIST hash competition. NIST announces official SHA-3 will use different parameters that were never evaluated in the competition phase. Warning bells go off. NIST backpedals. Cue conspiracy theories due to precedent for backdoored crypto algos.
The only reasonable thing we can do is trust the audit. Post-audit changes, from a practical point of view, mean that it regains the "black box" attribute.
Even a title of "SHA-3 NIST announcement controversy" would be good.
Do you want HN commenters to determine if SHA-3 would have been cryptographically secure with the proposed changes? I don't think most people here are qualified to answer that.
Or do you want to know people's feelings about the NSA/NIST connection? I think that's fairly covered ground.
The problem is that I don't know of a really open system like Hacker News that has the same content and community.
Same thing with reddit.
RSS is the right type of idea but you can't comment and it has other limitations compared to things like reddit and Hacker News.
It's not that the overlords are malicious, it's just that those types of efforts are completely misguided.
I wonder if anyone knows of any open distributed unedited unmoderated systems that have features and communities like reddit or Hacker News. Ideally not even a single web site, but more like an open peer-to-peer distributed protocol with multiple clients.
I think the bigger problem is that the era of the semantic web/community standards like RSS, etc has largely passed us by. Participation now occurs on unmoderated sites like Twitter/Tumblr/etc, or on moderated community forums (and in a context where there's interesting content but I can't choose who specifically to follow, I prefer moderation).
Or you could cryptographically sign every single one of your posts so that people could see whether they've been edited. I suspect that appending a lump of ASCII armor to your posts would not be popular.
(In a NIST paper about the selection of AES they state: “table lookup: not vulnerable to timing attacks.” Oops. If you're building your own hardware, maybe.)