
SHA-3 NIST announcement controversy - tete
https://en.wikipedia.org/wiki/Keccak#NIST_announcement_controversy
======
pbsd
This was a manufactured controversy if I ever saw one. The controversial
changes were proposed by the Keccak team sometime after Keccak was announced
as the SHA-3 winner [1], and did not originate from NIST.

The idea was to decouple the security of the hash function from its output
size, and have a single parameter determining its security (the capacity). At
the moment, when you have a hash function, you expect to have 2^n
(second-)preimage security and 2^(n/2) collision security, where n is the
output size. In the case of sponges (and Keccak), the security level also
depends on the capacity c, a parameter that also happens to affect the
performance of the hash function.

To avoid generic preimage attacks, the capacity parameter in Keccak must be 4
times the desired security level: for 128 bits of security we need c = 512,
for 256 we need c = 1024. Achieving collision resistance requires a smaller c,
only 2 times the desired security level. This results in a very slow hash
function at high security levels, more than twice as slow as SHA-512 on x86
chips.
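
These generic bounds are easy to state in code. A rough Python sketch of the
relation between output size, capacity, and security (my own illustration;
all numbers are in bits):

    # Generic security of a sponge hash with n-bit output and capacity c
    # (a rough sketch of the bounds described above, ignoring constants):
    def sponge_bounds(n, c):
        preimage = min(n, c // 2)        # inner-collision attack costs 2^(c/2)
        collision = min(n // 2, c // 2)  # birthday bound 2^(n/2)
        return preimage, collision

    # 256-bit output with c = 512: full 256-bit preimage, 128-bit collision
    print(sponge_bounds(256, 512))    # (256, 128)
    # 512-bit output needs c = 1024 for full 512-bit preimage security
    print(sponge_bounds(512, 1024))   # (512, 256)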

So the proposal was to set the capacity to twice the desired security level.
This puts the preimage resistance of Keccak at the same level as its collision
resistance, i.e., 2^128 preimage security for a 256-bit output, and 2^256
security for a 512-bit hash. That is, the strengths of the 3 main properties
of the hash function, preimage, second-preimage, and collision resistance, are
all the same. This is not what is expected of an ideal hash function, but it
is very reasonable nonetheless, and without it the performance of Keccak is
lacking.
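
The speed cost is direct, since the capacity is carved out of the sponge's
rate. A quick sketch (b = 1600 is the width of the Keccak-f[1600]
permutation):

    # Bits absorbed per permutation call: the rate r = b - c, where
    # b = 1600 for Keccak-f[1600]. A larger capacity means a slower hash.
    def rate(c, b=1600):
        return b - c

    print(rate(1024))  # 576:  SHA3-512 as standardized (c = 2 * output size)
    print(rate(512))   # 1088: the proposal's capacity for a 512-bit output,
                       #       nearly twice the throughput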

After the leaks, however, a lot of attention focused on NIST, and these
changes to Keccak got confused with attempted backdooring. Much protesting
ensued, and the decision was eventually reverted, back to a Keccak that has
512-bit preimage security at 512 bits of output, but is disappointingly slow.

[1] [http://csrc.nist.gov/groups/ST/hash/sha-3/documents/Keccak-s...](http://csrc.nist.gov/groups/ST/hash/sha-3/documents/Keccak-slides-at-NIST.pdf) (Slide 47 onwards)

~~~
AlyssaRowan
It didn't make any sense to me to replace a reasonable hash algorithm with one
that was worse in a technical sense, and eventually (given Grover & Brassard)
in a practical one.

Keccak's software performance was not top of the pack; c=2n would still not
have beaten Skein or BLAKE, let alone BLAKE2.

Prudence dictated standardising the parameters that had actually been
_analysed_, rather than changing them after the race was won.

~~~
pbsd
The quantum scenario is not particularly applicable here. Generic preimage
attacks on sponges are essentially collision-finding on the capacity. Grover
does not beat classical algorithms at this; Brassard-Høyer-Tapp doesn't
either, if you take into account the full cost of the algorithm.

In any case, having a c = 512 ceiling for the capacity would put every attack
at over 2^256 cost, which presumably is enough, and would keep the function
reasonably fast. Note that c = 2n _would_ have rivaled Skein and possibly
BLAKE in terms of speed.
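
A back-of-the-envelope sketch of that argument (my framing, using the
standard textbook query counts; numbers are log2 of the work):

    # Classical generic preimage on a sponge: collision-finding on the
    # c-bit inner state, ~2^(c/2). Grover against an n-bit output needs
    # ~2^(n/2) *sequential* quantum evaluations.
    def classical_sponge_preimage(c):
        return c / 2

    def grover_preimage(n):
        return n / 2

    # With a c = 512 ceiling and 512-bit output, both generic attacks
    # sit at ~2^256:
    print(classical_sponge_preimage(512))  # 256.0
    print(grover_preimage(512))            # 256.0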

As for analysis, what matters is the analysis performed on the permutation.
The permutation was not touched, and touching that would indeed raise alarms.

~~~
AlyssaRowan
It depends on the chip you use, and the implementation; it would have rivalled
Skein (due to the 80 rounds) but BLAKE was faster, and BLAKE2 is faster still.

------
wbl
There is more to the story than is in the linked article. DJB contributed
CubeHash, which had limited preimage resistance due to some design decisions
made for speed. This was controversial, and one of the reasons for CubeHash
being eliminated. But at the end of the competition, NIST lowered the preimage
resistance requirement for the eventual standard to that of CubeHash.

In practice I don't think it would matter: the additional speed of the
reduced-capacity version would be nice to have. However, many competition
entries would have looked different in order to take advantage of this.

~~~
B-Con
As I recall, this was a significant complaint. If NIST was OK with lowering
the security requirements, then they should have done so at the outset and
examined all the algorithms against that standard.

The move itself wasn't shady; it was the timing that raised eyebrows.

------
HansHarmannij
A few years back I had the chance to talk with Joan Daemen after he gave a
presentation about Keccak, which hadn't won the competition yet; this was way
before Snowden. He was very sceptical about the usefulness of his work: he
thought it was fun to do, but that it didn't have any use, since everything
has backdoors anyway. That's what he said. It sounded a bit paranoid to me
back then, but now it sounds a lot more plausible.

~~~
kzrdude
It seems to be standard practice for cryptographers never to endorse or
recommend any particular solution, because they know that it will eventually
have holes.

It's probably just that.

~~~
rectangletangle
It's similar to how scientists won't advocate their findings as anything more
than probable: for something to be scientific, it must be falsifiable, so
there's always the possibility of unknown errors in their methodology negating
their findings.

------
AlyssaRowan
As you'll note, they went back on these changes, and the final (currently
draft) SHA-3 in FIPS 202 is Keccak pretty much as it was entered. They've
proposed using the Keccak team's own Sakura padding, which is a pretty simple
padding scheme that is also ready for use with tree hashes.

See also:
[http://keccak.noekeon.org/a_concrete_proposal.html](http://keccak.noekeon.org/a_concrete_proposal.html)

I have no security concerns with the proposed SHA-3 drop-ins.

I am not entirely satisfied with the SHAKE XOF functions, as they didn't
specify SHAKE512(M,d) = KECCAK[1024](M || 1111, d) but instead the weaker
SHAKE256 and SHAKE128. Those functions won't have a problem now, but I don't
think they hold up to post-quantum well enough for use with, say, Merkle
signatures.
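
For concreteness, here is how the standardized SHAKE functions look from
Python's hashlib (a usage sketch; the security cap is the point being made
above):

    import hashlib

    # SHAKE256 is an XOF: you choose the output length, but its generic
    # security stays capped at 256 bits no matter how much you squeeze.
    xof = hashlib.shake_256(b"message")
    print(xof.hexdigest(64))  # 512 bits of output, still 256-bit security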

As usual, they strongly favour hardware implementations; that's internal
culture at work, there.

Software performance of SHA-3 is unfortunately not very good. The other
finalists like BLAKE (or its faster successor BLAKE2), or Skein, are much more
viable software contenders (and make excellent tree hashes), and no one's
particularly rushing towards SHA-3 anyway: except for the length-extension
attack common to all Merkle-Damgård hashes, the SHA-2 functions seem okay for
now (apart from the not-entirely-undeserved stigma of having come from the
NSA; that said, I don't think they're 'enabled' in any way).

Bigger problems exist than our hash algorithms, but it's good to have a few
good ones under our belts for the future.

~~~
Perseids
> Software performance of SHA-3 is unfortunately not very good. The other
> finalists like BLAKE (or its faster successor BLAKE2), or Skein, are much
> more viable software contenders (and make excellent tree hashes), and no
> one's particularly rushing towards SHA-3 anyway: except for the
> length-extension attack common to all Merkle-Damgård hashes, the SHA-2
> functions seem okay for now

The main reason for the choice of Keccak was not speed but diversity from the
SHA-2 family, which is consistent with the motivation for starting the contest
in the first place: it was feared that SHA-2 would fall soon after the
cryptanalytic advances against MD5 and SHA-1 were published. As such, Bruce
Schneier, one of the authors of Skein, welcomed the decision for Keccak:

> It's a fine choice. I'm glad that SHA-3 is nothing like the SHA-2 family;
> something completely different is good. -
> [https://www.schneier.com/blog/archives/2012/10/keccak_is_sha...](https://www.schneier.com/blog/archives/2012/10/keccak_is_sha-3.html)

A few days earlier he had wished for the outcome to be "no award", with pretty
much the same argument you gave:
[https://www.schneier.com/blog/archives/2012/09/sha-3_will_be...](https://www.schneier.com/blog/archives/2012/09/sha-3_will_be_a.html)

> I am not entirely satisfied with the SHAKE XOF functions, as they didn't
> specify SHAKE512(M,d) = KECCAK[1024](M || 1111, d) but instead the weaker
> SHAKE256 and SHAKE128. Those functions won't have a problem now, but I don't
> think they hold up to post-quantum well enough for use with, say, Merkle
> signatures.

As I have written above (
[https://news.ycombinator.com/item?id=8062952](https://news.ycombinator.com/item?id=8062952)
) even a security level of 256 bits is astronomically high. What attacks do
you have in mind that would more than halve the strength of the hash function?
In any case, you do have SHA3-512 for exactly these high-capacity
requirements. The choice of the SHAKE values was part of a compromise: it
allows implementations to use the smaller capacity, which SHA3-512 does not
offer, in case larger output sizes are needed.

------
truffleze
TL;DR

SHA-3 (with very specific parameters) won the brutally audited NIST hash
competition. NIST announces official SHA-3 will use different parameters that
were never evaluated in the competition phase. Warning bells go off. NIST
backpedals. Cue conspiracy theories due to precedent for backdoored crypto
algos.

~~~
Dylan16807
Not really. The question was which capacity to use for which hash variant:
shuffling a handful of parameter choices, not switching to significantly new
ones.

~~~
sentenza
In all fairness, people like me who are not security researchers cannot assess
whether or not such a change is problematic.

The only reasonable thing we can do is trust the audit. Post-audit changes,
from a practical point of view, mean that the algorithm regains the "black
box" attribute.

------
mindslight
Tangential: if you're worried about NSA-backdoored algorithms, then instead of
betting hard on one particular algorithm that you happen to judge beyond
reproach, you'd be better off incorporating algorithm agility into your design
(of course, in such a way that rules out downgrade attacks by construction).

~~~
andrewchoi
Sorry if this is a silly question, but by "algorithm agility" do you mean
abstracting away the algorithm into some module?

~~~
mindslight
Protocol designs that specify which algorithms to use as part of the protocol
itself (conveyed through existing trust relationships, avoiding downgrade
attacks), and making as many algorithms available in your software as possible
(that _you_ believe are secure; no need for crc32). Then _users_ are able to
pick which one(s) they're willing to trust, both presently and in the future
(of course this preference will most likely be provided by their operating
system or other packaged environment).
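
A minimal sketch of what that can look like, with hypothetical names (the
registry and negotiation logic are illustrative, not taken from any real
protocol):

    import hashlib

    # Each peer advertises the hashes it trusts; the negotiated choice must
    # be confirmed over the authenticated channel so an attacker can't strip
    # entries from the offer to force a downgrade.
    SUPPORTED = {
        "blake2b": hashlib.blake2b,
        "sha3-512": hashlib.sha3_512,
        "sha-512": hashlib.sha512,
    }

    def negotiate(my_preferences, peer_offer):
        # Honour the local user's ordering, not the peer's.
        for name in my_preferences:
            if name in peer_offer and name in SUPPORTED:
                return name
        raise ValueError("no mutually trusted hash algorithm")

    choice = negotiate(["sha3-512", "blake2b"], {"blake2b", "sha-512"})
    digest = SUPPORTED[choice](b"payload").hexdigest()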

------
lumpypua
Can the title be edited back somewhat toward the original? "Can SHA-3 be
trusted?" is definitely editorializing, but the new title wipes away the
context for discussing SHA-3's security.

Even a title of "SHA-3 NIST announcement controversy" would be good.

~~~
ilaksh
I think the title editing is unacceptable.

The problem is that I don't know of a really open system like Hacker News that
has the same content and community.

Same thing with reddit.

RSS is the right type of idea but you can't comment and it has other
limitations compared to things like reddit and Hacker News.

It's not that the overlords are malicious, it's just that those types of
efforts are completely misguided.

I wonder if anyone knows of any open, distributed, unedited, unmoderated
systems that have features and communities like reddit or Hacker News; ideally
not even a single web site, but more like an open peer-to-peer distributed
protocol with multiple clients.

~~~
pndmnm
The commenting system on Google Reader (despite being neither open nor
distributed) is the closest I've seen to a system that acts the way I'd like
online. I can imagine a distributed version that would operate, e.g., with
each of us publishing a feed of things we read and commented on, to which
friends could subscribe and publish their own comments on... there are some
interesting scaling/complexity issues, but it's not insoluble. Some
combination of FOAF, RSS, and trackbacks, conceptually.

I think the bigger problem is that the era of the semantic web and community
standards like RSS has largely passed us by. Participation now occurs on
unmoderated sites like Twitter/Tumblr/etc., or on moderated community forums
(and in a context where there's interesting content but I can't choose whom
specifically to follow, I prefer moderation).

------
higherpurpose
BLAKE2 is better and much faster anyway:

[https://blake2.net/](https://blake2.net/)
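
For what it's worth, BLAKE2 ships in Python's standard library (3.6+),
including the keyed mode that doubles as a MAC:

    import hashlib

    # BLAKE2b with a truncated digest; the key parameter makes it a MAC.
    h = hashlib.blake2b(b"message", digest_size=32)
    print(h.hexdigest())
    mac = hashlib.blake2b(b"message", key=b"secret", digest_size=32)
    print(mac.hexdigest())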

------
ilaksh
NIST has demonstrated on multiple occasions that it isn't trustworthy, so I
think it should, practically speaking, be ignored.

~~~
jokoon
Would that mean AES (Rijndael) is not trustworthy?

~~~
fedor_brunner
Adam Langley thinks that NIST continually picks algorithms that aren't suited
to software implementations.
[https://www.imperialviolet.org/2012/10/21/nist.html](https://www.imperialviolet.org/2012/10/21/nist.html)

(In a NIST paper about the selection of AES they state: “table lookup: not
vulnerable to timing attacks.” Oops. If you're building your own hardware,
maybe.)
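
A toy sketch of that parenthetical (illustrative only, not real AES, and
Python itself is not constant-time): the leaky version's memory access pattern
depends on a secret index, while the defensive version touches the whole
table:

    # Stand-in for the AES S-box; real code would use the actual table.
    SBOX = list(range(256))

    def leaky_sub_byte(secret_byte):
        # Which cache line gets touched depends on the secret, so timing
        # measurements can recover it: the flaw in "table lookup".
        return SBOX[secret_byte]

    def constant_time_sub_byte(secret_byte):
        # Scan the entire table and select arithmetically, touching every
        # entry regardless of the secret. (A real constant-time version
        # would be written in C or assembly.)
        result = 0
        for i, v in enumerate(SBOX):
            mask = -(i == secret_byte)  # -1 (all ones) on match, else 0
            result |= v & mask
        return result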

