Debunking NIST's calculation of the Kyber-512 security level (cr.yp.to)
468 points by bumbledraven on Oct 3, 2023 | 201 comments



An important detail you really want to understand before reading this is that NIST (and NSA) didn't come up with these algorithms; they refereed a competition, in which most of the analysis was done by competitors and other academics. The Kyber team was Roberto Avanzi, Joppe Bos, Léo Ducas, Eike Kiltz, Tancrède Lepoint, Vadim Lyubashevsky, John M. Schanck, Gregor Seiler, Damien Stehlé, and also Peter Schwabe, a collaborator of Bernstein's.


Absolutely, but NIST ultimately chose the winners, giving them the option to pick (non-obviously) weak or weaker algorithms. Historically only the winners are adopted. Look at the AES competition - how often do you see Serpent mentioned, despite it having a larger security margin than Rijndael by most accounts?


> Historically only the winners are adopted. Look at the AES competition

Often, yes. But also consider the SHA-3 competition.

BLAKE2 seems more widely used than what was chosen for SHA-3 (Keccak). What was submitted to the SHA-3 competition was BLAKE1 (it didn't have a number back then, but I think this is clearer), so it's not like NIST said that Keccak is better than BLAKE2; they only said it's better than BLAKE1 (per their requirements, which are unlikely to align with your requirements because of the heavy weighting of speed-in-hardware). Still, this is an example of a widely used algorithm that was never standardized.

> how often do you see Serpent being mentioned, despite it having a larger security margin than Rijndael

The goal of an encryption algorithm is not only to be secure. Sure, that has to be a given: nobody is going to use a broken algorithm when given a choice. But when you have two secure options, the more efficient one is the one to choose. You could use a 32k RSA key just to be sure, or a 4k RSA key, which (to the best of my knowledge) everyone considers safe until quantum computers arrive. (After that, you need something like a 1TB key, as djb humorously proposed.)

Wikipedia article on Serpent: "The 32 rounds mean that Serpent has a higher security margin than Rijndael; however, Rijndael with 10 rounds is faster and easier to implement for small blocks."

I don't know that nobody talks about Serpent solely because it was not chosen as winner. It may just be that Rijndael with 256-bit keys is universally considered secure and is more efficient at doing its job.


Re: BLAKE2, I'm not sure it's fair to say that BLAKE2 is more widely used overall. But I do agree BLAKE2 is a bit of an outlier in terms of adoption. I think part of the reason is that SHA2 remains the go-to option, else I'd expect the ecosystem to consolidate around SHA3.

Re: Serpent, there are many things to unpack here, but in summary: you don't know a priori how large a security margin you need (given the primary function of a cipher, you want to pick the conservative option); efficiency concerns become much less relevant with hardware-accelerated implementations and years of Moore's-law performance uplifts; low-power devices can take advantage of much lighter algorithms than either Rijndael or Serpent; ease of implementation does not equal ease of correct/secure implementation vis-a-vis side-channel attacks; and if Serpent had been chosen, you certainly wouldn't see Rijndael talked about much.


Blake2 also uses a very SHA2-like construction (a HAIFA construction, which is based on Merkle-Damgard). I believe this was the main reason SHA3 was chosen to be something completely different (a sponge construction). If SHA2 were found to be insecure, Blake2 would be at more risk of also being broken than Keccak.

Speculatively, if SHA2 is broken without breaking Merkle-Damgard hashes in general, Blake2/3 could well become SHA4.
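To make the structural difference concrete, here is a toy sketch in Python (my own illustration, not real hash functions: the permutation is a made-up stand-in and the rate/capacity split is arbitrary). The point is the shape: Merkle-Damgard hands out its entire chaining value as the digest, while a sponge keeps a hidden "capacity" that never leaves the state.

    def permute(s):
        # Stand-in for a permutation like Keccak-f; NOT cryptographic.
        return (s * 6364136223846793005 + 1442695040888963407) % 2**64

    RATE = 2**16 - 1  # low 16 bits are the "rate"; the rest is hidden "capacity"

    def md_hash(blocks, iv=0x6A09E667):
        h = iv
        for b in blocks:
            h = permute(h ^ b)  # chain the full state through each block
        return h                # digest = the entire chaining value

    def sponge_hash(blocks):
        state = 0
        for b in blocks:
            state = permute(state ^ (b & RATE))  # absorb into the rate part only
        return state & RATE     # squeeze from the rate; capacity stays hidden

    print(hex(md_hash([1, 2, 3])), hex(sponge_hash([1, 2, 3])))

That hidden capacity is also why sponge digests don't suffer from length extension the way raw Merkle-Damgard chaining values do.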


> I'm not sure it's fair to say that BLAKE2 is more widely used overall

Ah, maybe my experience is biased then. I keep coming across BLAKE2 implementations, but rarely hear of people even considering SHA-3 for anything. If anyone has actual numbers on this, that would be interesting.

It would be good if SHA-3 were being used, because then chip makers would have a reason to bake it into their hardware, which is exactly where the biggest gain over SHA-2 is expected. If that happens, and all else being equal (no cracks appear in Keccak), I'd be surprised if BLAKE2 remained as popular!

> you don't know a priori how large of a security margin you need

True, so at first it can be argued to be an educated guess. But confidence increases over time, and more than 20 years later it seems settled that people aren't considering Serpent anymore. Is that because it wasn't chosen as AES? Certainly partially, but BLAKE2 (I'll admit it does seem like an outlier) will likely still be talked about in the future, so standardization is not the only factor.

I didn't see actual benchmarks, but Serpent sounds at least three times slower than Rijndael for, by now, no tangible benefit. What would be interesting is if there were AES competitors that are also fully unbroken and are more efficient than Rijndael, or easier to implement, etc.


> I didn't see actual benchmarks, but Serpent sounds at least three times slower than Rijndael

You can read the original report on the candidates: https://nvlpubs.nist.gov/nistpubs/jres/106/3/j63nec.pdf

To save you some time, one software evaluation is on page 531. Serpent performs worse in software than Rijndael (AES) just about everywhere (note they use categories rather than precise metrics, but you can dig into that if you want). By contrast, one hardware evaluation is on page 539. Serpent has the highest throughput for the lowest "area" (i.e., silicon required). These results repeat on p. 541.

So it depends on whether you are comparing a software or hardware implementation.


I wish chip makers would bake in the elliptic curve used in Bitcoin and Ethereum (secp256k1) as well, instead of the entire industry coalescing around secp256r1, which many suspect is somehow weak (its parameters derive from some weird large seed X instead of a hard-to-game small number like 15, leading some to believe that the first X-1 candidates were tried and discarded until X produced a curve with a weakness known only to its designers).

The real reason I would have liked that to be the case is so that one could use WebAuthn and Subtle Web Crypto to sign things and have the blockchain natively verify the signature.

As it is, I am hoping that the EVM will roll out a precompiled signature verifier for secp256r1, which is on the roadmap, they say!


There are a few different on-chain implementations of secp256r1 signature verification for use with passkeys, my favorite of which is demoed at https://p256.alembic.tech

Work is also being done on SNARK-based cross-curve signature verification.

But I fully agree, especially with the growing popularity of account abstraction, the EVM desperately needs a secp256r1 precompile!


> BLAKE2 seems more widely used than what was chosen for SHA-3 (Keccak)

There's also BLAKE3 and it is amazing. How is its adoption going?

https://github.com/BLAKE3-team/BLAKE3



SHA-3 is vastly more widely used than BLAKE.


I fully admit to having a weak spot for Serpent - it is self-bitslicing (see the submission package or the linux kernel tree), which in hindsight makes constant time software easier to write, and it was faster in hardware even when measured at the time, which is where we have ended up putting AES anyway (e.g. AES-NI etc).

BUT. On security margins, you could argue the Serpent designers were too conservative: https://eprint.iacr.org/2019/1492 It is also true that cryptanalytic attacks appear to fare slightly better against AES than against Serpent. What does this mean? A brute-force attack takes the same number of operations as the claimed security level, say, 2^128 for 128-bit. An attack is anything better than this: fewer operations. All of the attacks we know about achieve slightly less than this security level - which is nonetheless still impossible to reach - but at a cost: they need an infeasible amount of memory. In terms of numbers: 9000 TB of memory to reduce 2^128 to 2^126 against full-round AES, according to a quick check of Wikipedia. For reference, the lightweight crypto competition considered 2^112 to be sufficient margin. 2^126 is still impossible.
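For a sense of scale, a back-of-the-envelope sketch (the 1e18 rate is an assumption, roughly an exascale machine doing one AES evaluation per "op"):

    ops = 2**126                  # work left after the best known attack
    seconds = ops / 1e18          # assumed: 10^18 AES evaluations per second
    years = seconds / (3600 * 24 * 365.25)
    print(f"{years:.1e}")         # ~2.7e12 years, ~200x the age of the universe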

In practice, the difference between Serpent and AES in terms of cryptanalytic security is meaningless. It is not an example of NIST picking a weaker algorithm deliberately, or I would argue, even unintentionally. It (AES) was faster when implemented in software for the 32-bit world that seemed to be the PC market at the time.


Implemented correctly, I agree the difference in security margin may not be too important. That said, Serpent is more resistant to timing attacks, and weaknesses in implementation are as important as weaknesses in design.

Regardless, the comparison wasn't intended to argue for a meaningful difference in security margin, but to show that the winner of the competition, well, wins (in adoption).


> BUT. On security margins, you could argue the Serpent designers were too conservative: https://eprint.iacr.org/2019/1492

Thanks for digging that paper out again. It is really telling that AES only gets a bit of a bump (10-30%) while the other ones gain like 2x or more.

I was about to comment that the competitors to AES were definitely too conservative, and it bit them because of how much slower it made them in software and how much larger in hardware.


Blowfish has a continuing existence as the basis for bcrypt.


It works as a password hash for reasons having in part to do with why it isn’t a great general purpose cipher.


Can you expand, or link to an explanation?


The Blowfish key-schedule algorithm is equivalent to encrypting about 4 kB of data with the cipher itself. This isn't a problem for some use cases (e.g. transferring a large file over HTTPS), but it's terrible for others, e.g. encrypting lots of short messages under different keys when you can't cache the >30x larger key-schedule output. To make it worse, the cipher uses four large 256 x 32-bit S-boxes with data-dependent (key and plaintext) indexes, making it very hard to implement fast without adding a timing side channel on anything more complex than a simple microcontroller. It also does very little computation per memory access. Blowfish is a fast cipher on a very simple 32-bit CPU with >= 4 kiB of fast memory, but modern CPUs offer a lot more compute throughput than memory throughput. There is also very little instruction-level parallelism to exploit, even for the most expensive OoO CPUs, because almost every operation depends on a data-dependent memory access within a few instructions. For these reasons it's also expensive and relatively slow to implement in hardware.
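To put rough numbers on that "equivalent to encrypting 4 kB" claim, a quick sketch using figures from the Blowfish spec:

    # Key-dependent state: an 18-word P-array plus 4 S-boxes of 256 32-bit
    # words, all filled by repeatedly running the 64-bit block cipher.
    state_words = 18 + 4 * 256           # 1042 words = 4168 bytes of state
    setup_blocks = state_words * 4 // 8  # 521 block encryptions just for setup
    msg_blocks = 128 // 8                # a short 128-byte message is 16 blocks
    print(setup_blocks / msg_blocks)     # ~33x: key setup dwarfs the encryption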

Almost all of these downsides are helpful for a password-hash validation function like bcrypt(), because there is nothing an attacker can do to guess much faster than a desktop CPU.

Blowfish was a good cipher at a time when CPUs lacked dedicated trustworthy crypto engines, wide and deep OoO execution capability, and packed-SIMD support. AES and SHA1/2 are commonly implemented in hardware on modern server, desktop, and mobile CPUs. Where hardware offloading isn't available, ciphers can take advantage of OoO and SIMD to perform vastly more useful work per cycle instead of stalling on memory accesses.


Blowfish has an unusually slow key-setup phase. Slowness is an advantage for password hashes, since it makes offline attacks harder.


Correct me if I'm wrong, but everything is also being done out in the open for everyone to see. NIST isn't using some secret analysis to make recommendations.


Teams of cryptographers submit several proposals (and break each other's proposals). These people are well respected, largely independent, and assumed honest. Some of the mailing lists provided by NIST where cryptographers collaborated to review each other's work are public.

NIST may or may not consort with your friendly local neighborhood NSA people, who are bright and talented contributors in their own right. That's simply in addition to reading the same mailing lists.

At the end, NIST gets to pick a winner and explain their reasoning. What influenced the decision is surely a combination of things, some of which may be internal or private discussions.


> NIST may or may not consort with your friendly local neighborhood NSA people

It is worth noting that while breaking codes is a big part of the NSA's job, they also have a massive organization (NSA Cybersecurity, but I prefer the old name Information Assurance) that works to protect US and allied systems and cryptographic applications.

In the balance, weakening American standards does little to help with foreign collection. Their efforts would be much better spent injecting into the GOST process (Russia and friends) or State Cryptography Administration (China and friends).


> In the balance, weakening American standards does little to help with foreign collection.

While that makes logical sense, the previous actions of the NSA have demonstrated that they're not a logical actor in regards to this stuff, or that there's more going on.


> In the balance, weakening American standards does little to help with foreign collection.

Though it can be greatly beneficial for domestic collection. Further, so long as the US remains a dominant player in Tech and Tech-influenced fields like finance, odds are a lot of the world is going to be at least de facto using US standards.


I was under the impression that only fools trust NIST after DUAL_EC_whatsit.

Is that not the case?


from the article:

> I filed a FOIA request "NSA, NIST, and post-quantum cryptography" in March 2022. NIST stonewalled, in violation of the law. Civil-rights firm Loevy & Loevy filed a lawsuit on my behalf.

> That lawsuit has been gradually revealing secret NIST documents, shedding some light on what was actually going on behind the scenes, including much heavier NSA involvement than indicated by NIST's public narrative.

even if I had never heard of DUAL_EC_whatsit, there's enough here to make me mistrust NIST.


You mean ANSI/ISO/NIST and Dual_EC_DRBG, that everyone suspected had a backdoor before it was included as one of multiple options? https://en.m.wikipedia.org/wiki/Dual_EC_DRBG#Timeline_of_Dua...

Or the s-boxes in DES, that the NSA suggested to IBM + NIST's predecessor, so as to be resistant to then-not-widely-known differential cryptanalysis? https://web.archive.org/web/20120106042939/http://securespee...


One of those things happened after 9/11, and one of those things happened before.

There is a widely held belief that the US IC changed fundamentally in terms of their regard for their own raison d’etre that day.


It'll be curious, looking back from the near future, what prompted the next fundamental change.

I'd like to think the US is in the midst of that now, with the Afghan withdrawal and Ukraine war.


[flagged]


He/she means that there have been good things coming out of the NSA/NIST collaborations (another example is SHA0->SHA1, introducing a "mysterious" one-bit left rotation that made SHA1 much stronger), and the bad ones are caught quickly.


Things have changed quite a bit since then.


How so? Or rather, taking change for a given, what are believable indicators that a secret organization outside normal systems of law and publicity has changed _for the better_? After all, the Snowden revelations led not to the NSA deciding that creating a global panopticon for a super-surveillance state would be a bad idea, but rather to them doing their damnedest that never again would the American public be informed of the true scale of their dystopian actions.


My rule of thumb in these situations is always: if they could, they would.

I've seen enough blatant disregard for humanity to assume any kind of honesty in the powers that were.


I'm sure the NSA 9-to-5ers justify weakening standards processes by the fact that the result is still secure enough to be useful for citizens and some government orgs, but flawed enough to help themselves when it matters at some point in the future.

No one can say they pushed some useless or overtly backdoored encryption. That's rarely how intel agencies work. It's also not how they need to work to maintain their effectiveness indefinitely.

When the CIA is trying to recruit for HUMINT, they work every angle they can get: if a business conference offers even a 0.1% chance of meeting some pliable young but likely future industry insider who may or may not turn into a valuable source, they'll show up to that conference every single year.

They aren't short of people, time, or money. And in security, tiny holes in a dam turn into torrents of water all the time.

Meanwhile, NIST is having non-public backroom meetings with the NSA, concealing NSA employees' authorship of papers, generating a long series of coincidental situations preferencing one system, and stonewalling FOIA requests from reputable individuals. IDK, if I were a counterintelligence officer in charge of detecting foreign IC work, I'd be super suspicious of anything sold as safe and open from that org.


Where is your evidence other than your gut feeling from other unrelated news articles?


> everything is also being done out in the open for everyone to see

Well, everything apart from the secret stuff:

"I filed a FOIA request "NSA, NIST, and post-quantum cryptography" in March 2022. NIST stonewalled, in violation of the law. Civil-rights firm Loevy & Loevy filed a lawsuit on my behalf.

That lawsuit has been gradually revealing secret NIST documents, shedding some light on what was actually going on behind the scenes, including much heavier NSA involvement than indicated by NIST's public narrative"


Thing is... no lawsuit will ever reveal documents directly against the USA's national interest.

Every single document will be reviewed by a court before being opened up, and any which say "We did this so we can hoodwink the public and snoop on russia and china" won't be included.


> any which say "We did this so we can hoodwink the public and snoop on russia and china" won't be included

Ever since Snowden we know it's actually "hoodwink the public and snoop on the public".

Russia and China are just an excuse.


There is a final standardization step where NIST selects constants, and this is not always done in consultation with the research team. Presumably these are usually random, but the ones chosen for the Dual-EC DRBG algorithm seem to have been compromised. SHA-3 also had some suspicious constants/padding, but those haven't been shown to be vulnerable yet.


The problem with Dual EC isn't the sketchy "constants", but rather the structure of the construction, which is a random number generator that works by doing a public-key transformation on its state. Imagine CTR-DRBG, but standardized with a constant AES key. You don't so much wonder about the provenance of the key as wonder why the fuck there's a key there at all.

I don't know of any cryptographer or cryptography engineer that takes the SHA3 innuendo seriously. Do you?

Additional backstory that might be helpful here: about 10 years ago, Bernstein invested a pretty significant amount of time on a research project designed to illustrate that "nothing up my sleeves" numbers, like constants formed from digits of pi, e, etc, could be used to backdoor standards. When we're talking about people's ability to cast doubt on standards, we should keep in mind that the paragon of that idea believes it to be true of pi.
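For a concrete picture of what a "nothing up my sleeves" constant looks like, a quick check you can run yourself (SHA-256's IV words come from the fractional parts of the square roots of small primes):

    # SHA-256's first IV word is a textbook "nothing up my sleeve" number:
    # the first 32 bits of the fractional part of sqrt(2).
    frac = 2**0.5 % 1
    print(hex(int(frac * 2**32)))  # 0x6a09e667, as in the SHA-2 spec

Bernstein's argument was that even constants with this kind of pedigree leave enough freedom (choice of prime, of function, of encoding) to be gamed.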

I'm fine with that, for what it's worth. Cryptography standards are a force for evil. You can just reject the whole enterprise of standardizing cryptography of any sort, and instead work directly from reference designs from cryptographers. That's more or less how Chapoly came to be, though it's standardized now.


I do know a few cryptographers who were suspicious of SHA-3 when it came out, but after some napkin math turned up no obvious hole, they were fine with it. The actual goal of that extra padding was to get extra one bits into the input to avoid possible pathological cases.

My understanding of the Dual-EC problem may be different than yours. As I understand it, the construction is such that if you choose the two constants randomly, it's fine, but if you derive them from a known secret, the output is predictable for anyone who knows the secret. NIST did not provide proof that the constants used were chosen randomly.

Random choice would be equivalent to encrypting with a public key corresponding to an unknown private key, while the current situation has some doubt about whether the private key is known or not.
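For the curious, here is a toy sketch of that structure in Python, scaled all the way down (a textbook curve y^2 = x^3 + 2x + 2 over F_17 with group order 19; the backdoor scalar d is arbitrary). Real Dual EC uses P-256 and outputs only truncated x-coordinates, which the attacker must lift back to a small set of candidate points; here the whole output point is shown to keep the demo short.

    p, a, n = 17, 2, 19  # field prime, curve coefficient a, group order

    def ec_add(P1, P2):
        if P1 is None: return P2
        if P2 is None: return P1
        (x1, y1), (x2, y2) = P1, P2
        if x1 == x2 and (y1 + y2) % p == 0:
            return None  # point at infinity
        if P1 == P2:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def ec_mul(k, P):
        R = None
        while k:
            if k & 1: R = ec_add(R, P)
            P = ec_add(P, P)
            k >>= 1
        return R

    P_pt = (5, 1)                       # the standard's first constant
    d = 7                               # the designer's secret
    Q_pt = ec_mul(pow(d, -1, n), P_pt)  # second "random" constant: d*Q = P

    def step(s):
        s_next = ec_mul(s, P_pt)[0] or 1     # state update: s' = x(s*P)
        return s_next, ec_mul(s_next, Q_pt)  # output comes from s'*Q

    s1, out1 = step(6)   # generator emits out1 from a secret state...
    s2, out2 = step(s1)  # ...then out2

    # Attacker sees out1 and knows d: d*(s'*Q) = s'*P, whose x is the next state.
    s_leak = ec_mul(d, out1)[0] or 1
    assert ec_mul(s_leak, Q_pt) == out2  # the next output, predicted exactly

With a truly random Q, recovering d is a discrete-log problem and nobody can run this attack; with an unexplained Q, you simply have to trust whoever chose it.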


Even with a verifiably random key, Dual EC is still unacceptable.

First, because its output has unacceptable biases [1,2].

Second, because its presence allows an attacker to create a difficult-to-detect backdoor simply by replacing the key, as apparently happened with Juniper NetScreen devices [3,4].

--- [1] Kristian Gjøsteen, Comments on Dual-EC-DRBG/NIST SP 800-90, draft December 2005. Online: https://web.archive.org/web/20110525081912/https://www.math....

[2] Berry Schoenmakers and Andrey Sidorenko, Cryptanalysis of the Dual Elliptic Curve Pseudorandom Generator, May 2006. Online: https://eprint.iacr.org/2006/190.pdf

[3] Stephen Checkoway, Jacob Maskiewicz, Christina Garman, Joshua Fried, Shaanan Cohney, Matthew Green, Nadia Heninger, Ralf-Philipp Weinmann, Eric Rescorla, and Hovav Shacham, A Systematic Analysis of the Juniper Dual EC Incident, October 2016. Online: https://www.cs.utexas.edu/~hovav/dist/juniper.pdf

[4] Ben Buchanan, The Hacker and the State, chapter 3, Building a Backdoor. Harvard University Press, February 2020.


> Even with a verifiably random key

What's a "verifiably random" key?


"Verifiably random" means produced using a process where it isn't possible for you to know the outcome. In this case, saying "the key is [X], which is the SHA-2 hash of [Y]" would allow you to know that they couldn't choose [X] without breaking SHA-2.


Who were those cryptographers?


the NSA


This is closer to the correct answer than anyone who writes publicly about cryptography.


> The problem with Dual EC isn't the sketchy "constants", but rather the structure of the construction

It's both.

It is bad enough even without the constants

https://blog.cryptographyengineering.com/2013/09/18/the-many...


You don't really know, but you can be reasonably sure that they didn't sabotage the submissions themselves.


The unfortunate reality of this is that while he may be right, it is difficult to classify the responses (or non-response) from the NIST people as deceptive vs just not wanting to engage with someone coming from such an adversarial position. NIST is staffed by normal people who probably view aggressively worded requests for clarification in the same way that most of us have probably fielded aggressively worded bug reports.

Adding accusatory hyperbolic statements like: "You exposed three years of user data to attackers by telling people to use Kyber starting when your patent license activates in 2024, rather than telling people to use NTRU starting in 2021!" doesn't help. Besides the fact that nobody is deploying standalone PQ for some time, there were several alternatives that NIST could have suggested in 2021. How about SIKE? That one was pretty nice until it was broken last year.

Unfortunately, NIST doesn't have a sterling reputation in this area, but if we're going to cast shade on the algorithm and process, a succinct breakdown of why, along with a smoking gun or two would be great. Pages and pages of email analysis, comparison to (only) one other submission, and accusations that everyone is just stalling so data can be vacuumed up because it is completely unprotected makes it harder to take seriously. If Kyber-512 is actually this risky, then it deserves to be communicated clearly.


This is 100% in line with my reading of the submission.

Also noting that the page contains seventeen thousand words. That many words of Harry Potter take an average person 70 minutes to read. This text is no Harry Potter: it's chock-full of numbers, things to consider, and words and phrasings to weigh (like when quoting NIST), so you're not going to read it as fast as an average book, even if you know enough about PQC to understand the text in the first place.

I even got nerdsniped near the beginning into clicking on "That lawsuit has been gradually <revealing> secret NIST documents, shedding some light on what was actually going on behind the scenes". That page (linked by the word <revealing>) is another 54000 words. Unaware, due to not having a scroll bar on mobile (my fault, I know), I started skimming it linearly to see what those revelations might be. Nothing really materialized. At some point I caught on that I seemed to have enrolled for a PhD research project and closed that tab to continue reading the original page...

Most HN readers, who are often smart and highly technical but in various fields, cannot reasonably weigh and interpret the technobabble evidence for "nist=bad". Being in an adjacent field, I would guess that I understand more than the average reader, but I still don't feel qualified to judge this material without really giving it a thorough read. The page reasonably gives context and explains acronyms, but there's just so much of it that I can't imagine anyone who doesn't already know would want to bother with it. Not everyone understanding a submission is okay, but this one is about accusations, and that makes me feel like it is not a good submission for HN.


HN readers who don't want to read the piece in full can take solace in the fact that large-scale quantum computing has not been proven viable. Thus, which algorithms we should use to protect ourselves once what we thought was intractable becomes tractable may be a moot point. Shor's algorithm is capable of factoring 21 into 7 x 3. That's a long way off from factoring the thousands of digits-long numbers used for modern cryptography.
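(As an aside, the "factoring 21" demo is really about period-finding, not arithmetic. The only quantum part of Shor's algorithm is finding the period r of f(x) = a^x mod N; everything else is classical, as this toy sketch shows by brute-forcing the period instead:

    from math import gcd

    N, a = 21, 2
    r = 1
    while pow(a, r, N) != 1:  # the step a quantum computer would do fast
        r += 1                # here r = 6
    p = gcd(pow(a, r // 2) - 1, N)  # gcd(2^3 - 1, 21) = 7
    q = gcd(pow(a, r // 2) + 1, N)  # gcd(2^3 + 1, 21) = 3
    print(r, p, q)            # 6 7 3

The hard part is doing that while-loop's job on numbers with hundreds of digits, which is where the qubits come in.)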


> Shor's algorithm is capable of factoring 21 into 7 x 3. That's a long way off from factoring the thousands of digits-long numbers

That is quite misleading, per my understanding.

Today's or near-future quantum computers can do this level of arithmetic, but Shor's algorithm does not have hardware limitations, because it's an algorithm and not a computer. You can apply it to a thousand digits as well as to one. The thousand digits apparently require a certain number of qubits, i.e. a big enough quantum computer, but that's kind of the point: many people expect that we will gain that capability (keeping enough qubits stable for long enough to do the computation) sooner or later. Security agencies are saying to expect it in about ten years from now. Maybe you know better, and that may be, but that is not where I am going to put my money.

There now exist algorithms that can mitigate this risk, might as well use them. Why try to convince people they shouldn't bother?


Right, implementations of Shor's algorithm on existing quantum computers can only factor 7 x 3. But even if quantum computing power doubled every year it would still take decades before breaking modern crypto becomes viable. That would require many scientific and engineering breakthroughs. Possible of course, but I wouldn't bet on it.
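Rough doubling arithmetic supports that (all numbers are assumptions for scale: ~1e3 physical qubits on today's largest announced chips, versus the ~2e7 noisy qubits Gidney and Ekerå estimated in 2019 for factoring RSA-2048 in about 8 hours):

    import math
    print(math.log2(2e7 / 1e3))  # ~14.3 doublings: ~14 years at one doubling/year

And that is qubit count alone; error rates and coherence times have to improve in step, which is why "decades" is a reasonable read.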


Edit: Just realized the author is djb, Daniel Bernstein, which I guess is semi-ironic for me because I was recently praising him on HN for an old, well-read blog post on ipv6. Thus, I guess I may take back a bit of what I said below, or at least it would perhaps be better to say that I can better understand the adversarial tone given djb's history with NIST recommendations (more info at https://en.wikipedia.org/wiki/Daniel_J._Bernstein#Cryptograp...).

> The unfortunate reality of this is that while he may be right, it is difficult to classify the responses (or non-response) from the NIST people as deceptive vs just not wanting to engage with someone coming from such an adversarial position.

Couldn't agree with this more. I don't like to harp on form over substance, but in this case the form of this blog post was so bad I had difficulty evaluating whether the substance was worthwhile. I'm not in the field of cryptography, so I'm not qualified to assess on the merits, but my thoughts reading this were:

1. All the unnecessary snark and disparagement made me extremely wary of the message. It seemed like he was making good points, but the overall tone was similar to those YouTube "WhaT ThE ElITe DoN'T WanT YoU TO KnoW!!" videos. Frankly, the author just sounds like kind of an asshole, even if he is right.

2. Did anyone actually read this whole thing?? I know people love to harp on "the Internet has killed our attention spans", and that may be true, but the flip side is we're bombarded with so much info now that I take a very judicious approach to where I'll spend my time. On that point, if you're writing a blog post, the relevant details and "executive summary" if you will should be in the first couple paragraphs, then put the meandering, wandering diary after. Don't expect a full read if important tidbits are hidden like Where's Waldo in your meandering diary.


I read the whole thing because of who the author was.

The executive summary is above the fold:

Take a deep breath and relax. When cryptographers are analyzing the security of cryptographic systems, of course they don't make stupid mistakes such as multiplying numbers that should have been added.

If such an error somehow managed to appear, of course it would immediately be caught by the robust procedures that cryptographers follow to thoroughly review security analyses.

Furthermore, in the context of standardization processes such as the NIST Post-Quantum Cryptography Standardization Project (NISTPQC), of course the review procedures are even more stringent.

The only way for the security claims for modern cryptographic standards to turn out to fail would be because of some unpredictable new discovery revolutionizing the field.

Oops, wait, maybe not. In 2022, NIST announced plans to standardize a particular cryptosystem, Kyber-512. As justification, NIST issued claims regarding the security level of Kyber-512. In 2023, NIST issued a draft standard for Kyber-512.

NIST's underlying calculation of the security level was a severe and indefensible miscalculation. NIST's primary error is exposed in this blog post, and boils down to nonsensically multiplying two costs that should have been added.

How did such a serious error slip past NIST's review process? Do we dismiss this as an isolated incident? Or do we conclude that something is fundamentally broken in the procedures that NIST is following?


> I know people love to harp on "the Internet has killed our attention spans"

Not just that. Give your parent or grandparent a 75-page booklet to read, full of accusations and snark, and let's say it's about something they care about and actually impacts their lives (maybe a local government agency, idk). What are the odds they are going to read that A-Z versus waiting for a summary or call-to-action to be put out? The latter can be expected to happen if there is actually something worthwhile in there.

This is objectively too long for casual reading, nothing to do with anyone's attention span.

(The 75-page estimate is based on: (1) a proficient reader doing about a page per minute in most books that I know of, so pages==minutes; (2) the submission being 17.6k words; (3) average reading speed is ~250 wpm, resulting in 17.6e3/250=70 minutes; (4) this is not an easy text, it has lots of acronyms and numbers, so conservatively pad to 75.)


People read it because of djb's reputation. In the future, when someone smarter than you writes something, it might benefit you to put aside the tone-scolding and receive the information. It might be important.


Really smart people can be horrible writers. It's fair to call that out regardless of the reputation of the author.


> Did anyone actually read this whole thing?

Yup. I'm not a cryptographer, so I didn't understand most of the detail. I realized it was DJB after a couple of paragraphs.

> the relevant details and "executive summary" if you will should be in the first couple paragraphs

It wasn't written for "executives".


> It wasn't written for "executives".

When writing about real-world topics (especially where the goal is to educate or change opinions), it's usually a good idea to summarize the overall piece at the beginning, regardless of the intended audience. If the piece is broken up into chapters, sections, etc., it often helps to open each of those with a summary as well.

Like a lot of technical people, my default writing style tends to be a linear/journal-entry structure that tells a story more or less in the order it occurred. Over time I've learned that that type of structure only really works if someone is already interested in the material. Otherwise, they're likely to see a wall of text and move on.

Summarizing the overall piece as well as sections lets the reader immediately figure out if what they're reading is relevant to them, what the author's goals are, and if there are parts they can skip over because they're already familiar with those topics.


Even worse, I expected to find a part where he reports it and includes the responses/follow-up from that... But this is the first time it's published, as far as I understand? Did I miss it in the wall of text? Or is it really a huge initial writeup that may end up with someone responding "oh, we did mess up, didn't we? Let's think how to deal with that."


It's in there.

He first raised the issue in April 2022.

Then in December 2022 he asked about the evaluation of Kyber's security and they posted this[1], which included a 2^40 multiple that he wasn't sure where it came from; if it came from where he thought it did (bogus math on numbers from a paper DJB himself coauthored), then that was troubling.

There was no response, so a few weeks later he posted his assumptions and asked if anyone else could come up with another possible explanation for what the NIST e-mail was assuming.

This did get a response[2], the main thrust of which was:

> While reviewers are free, as a fun exercise, to attempt to "disprove what NIST _appears_ to be claiming about the security margin," the results of this exercise would not be particularly useful to the standardization process. NIST's prior assertions and their interpretation are not relevant to the question of whether people believe that it is a good idea to standardize Kyber512.

After further prodding the response[3] was essentially a rather polite version of "You're the scientist and it's your model, why don't you tell us?" Which DJB considers dodging his question of "How did you get these numbers?"

At this point DJB posts[4] a dissection of the December 2022 e-mail, which is similar to the middle quarter of TFA.

1: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...

2: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...

3: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...

4: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...


> NIST's prior assertions and their interpretation are not relevant [...]

That seems to be an extraordinarily strong claim to make, without detailed explanation, which apparently wasn't provided.


There did seem to be some talking past each other. The kindest-to-NIST explanation is that they wanted DJB to say something like "Adopting Kyber-512 is bad because it is likely to be less strong than AES-128, and here's the math", while DJB wanted to rebut the analysis that NIST (hopefully with the aid of a member of the team developing Kyber) had done.

I think there was also a bit of DJB wanting to engage NIST in a scientific debate (and getting increasingly abrasive when this didn't happen), while NIST wanted none of that, preferring that such debates be between researchers.

However from the point of view advanced in TFA, the best published papers implied that Kyber's security was likely very close to another algorithm (that the author of TFA preferred) that was disqualified for being insufficiently strong.


Thank you. Now that's a readable summary!


That's pretty selective quoting of the issues. He even says himself that waiting for the patent is one of the minor issues.

The question he asks many times is why they repeatedly changed the evaluation criteria after the fact, presented results in misleading ways, and made basic calculation errors (remember, these guys are experts). All of this in favor of one algorithm.

To someone like me, this points to the fact that they really wanted that algorithm to be the standard. Add to that the significantly heavier NSA involvement than indicated, and their efforts to hide it, and I am left extremely skeptical of the standard.


Because someone likely stood to benefit from it. The question is who and how?


> If Kyber-512 is actually this risky, then it deserves to be communicated clearly.

The statement djb seems to be making: It is not known if Kyber-512 is as cryptographically strong as AES-128 by the definitions provided by NIST.

This is an issue because these algorithms will be embedded within hardware soon.

> Besides the fact that nobody is deploying standalone PQ for some time

Now that an implementation has been chosen to be standardized, hardware vendors are likely to start designing blocks that can more efficiently compute the FIPS 203 standard (if they haven't already designed a few to begin with).

Given that the standard's expected publication is in 2024, and the 1-2 year review timeline for NIST CMVP review on FIPS modules, I wouldn't be surprised to see a FIPS 140-3 Hardware Module with ML-KEM (Kyber-etc.) by mid 2026.

> a succinct breakdown of why

The issue seems to be his statement from [1]: "However, NIST didn't give any clear end-to-end statements that Kyber-512 has N bits of security margin in scenario X for clearly specified (N,X)."

djb succinctly outlines the "scenario X" he referred to in [2], in which he only needs a yes or no answer. He is literally asking the people who should know and be able to discuss the matter, who would have the technical background to discuss this matter. He had received no response, which is why he had posted [1].

NIST's reply in [3] is a dismissal of [1] without a discussion of the security itself. The frustrating part for me to read was the second paragraph: "The email you cited (https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...), speaks for itself. NIST continues to be interested in people's opinions on whether or not our current plan to standardize Kyber512 is a good one. While reviewers are free, as a fun exercise, to attempt to "disprove what NIST _appears_ to be claiming about the security margin," the results of this exercise would not be particularly useful to the standardization process. NIST's prior assertions and their interpretation are not relevant to the question of whether people believe that it is a good idea to standardize Kyber512."

If NIST views the reviewers' claims about security to be "not particularly useful to the standardization process," (and remember: the reviewers are themselves cryptographers) then why should the public trust the standard at all?

> a smoking gun or two would be great

There wouldn't be a smoking gun because the lack of clarification is the issue at hand. If they could explain how they calculated the security strength of Kyber-512, then this would be a different issue.

The current 3rd party estimates of Kyber-512's security strength (which is a nebulous term...) puts it below the original requirements, so clarification or justification seems necessary.

[1]: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...

[2]: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...

[3]: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu...


> The current 3rd party estimates of Kyber-512's security strength (which is a nebulous term...) puts it below the original requirements

More to the point, (at least to my understanding) it puts it on par with another contender that was rejected from the NIST competition for being too weak a security construct.


If TFA were by a nobody I might agree, but TFA is by DJB and/or Tanja Lange, and they're not nobodies. These things need to be at least somewhat adversarial partly because that's what it takes to do cryptanalysis, and partly because of past shenanigans. It goes with the territory and the politics. It's unavoidable.


One can be combative and adversarial and still write succinctly and persuasively.

This text does DJB no favors. He comes across like a conspiracy theorist, based on the form of the content alone.


That's more of a diary than an article -- jargony, disorganized, running in circles, very hard to follow. But the information might be important regardless. There's a strong implication that NIST with help of the NSA intentionally standardized on a weak algorithm.

We all know that's possible.

But can someone who follows some of this stuff more closely explain what the play would be? I always assumed that weakening public cryptography in such a way is a risky bet, because you can't be sure that an attacker doesn't independently find out what you know. You can keep a secret backdoor key (that was the accusation when they released Dual_EC_DRBG), but you can't really hide mathematical results.

Why would they be willing to risk that here?


Why the overwhelming benefit of the doubt in an organization that has repeatedly failed expectations? I don't understand why this is even a conversation. We don't need them any more. Export restrictions are gone. What we need is a consortium to capture the attention of the hardware vendors and limit NIST and the NSA to participant status. Then if the government decides to adopt their backdoored standards, they're the only ones.


You're making an assumption that the NSA cares about the efficacy of cryptography for other people. Why would they care about that?


Because the NSA has equally well-funded adversaries that would love to find a back door in the NIST standards the whole of the US government uses. Even if the highest levels of the military and government use secret-squirrel super cryptography, the rest is using NIST standards. It's all the boring parts of government that deposit paychecks and run the badge readers to their offices.


> You're making an assumption that the NSA cares about the efficacy of cryptography for other people. Why would they care about that?

Hypothesis 1: because the NSA sees evidence that more efficient cryptographic algorithms are easier to crack for them.

To give some weak evidence for this: if you need brute force to crack the cipher (or hash function), a more efficient algorithm needs less computation power to crack.

Hypothesis 2: A more efficient algorithm is likely to become applied in more areas than a less efficient one (think of smartcards or microcontrollers). So if the NSA finds a weakness or is capable of introducing a backdoor in it, it can decrypt a lot more data from more areas.


it's in the national security interest of the United States to have its industries use high-quality crypto

see: colonial oil pipeline hack


It's in the national security interest of the United States to have its industries use robust security practices.

Industries with secure fences that are regularly patrolled are entirely different to industries with partial coverage by unpatrolled rusty fences and a freestanding door frame that has a titanium unpickable lock.

Passwords get compromised that's a fact.

How the single employee password that got breached was obtained is still (AFAIK) a mystery - but this will always happen ... given many employees, at least one will eventually make a mistake.

After that, the VPN had no multifactor authentication, the network had no internal honey subnets, canary accounts, sanity checks, etc.

High-quality crypto alone does not make for secure systems.

And systems can be secure with lower quality crypto if the systems are robust.


I feel that example argues the opposite.

It's not entirely known how every step of that attack went down, but "breaking low quality crypto" hasn't factored into any incident write up I've ever seen.

However, nearly all ransomware uses RSA. Therefore, in this particular case, high-quality crypto caused harm.

(To state the obvious, I'm not advocating for bad crypto, just discussing this case).


> Why would they be willing to risk that here?

Certain types of attacks basically make it so you need to have a specific private key to act as a backdoor. That's the current guess on what may be happening with the NIST ECC curves.

If so, this can be effectively a US-only backdoor for a long, long time.


I don't believe that is anybody's guess on what may be happening with the NIST ECC curves. Ordinarily, when people on HN say things like this, they're confusing Dual EC, a public key random number generator, known to be backdoored, with the NIST curve standards.


The issue with the NIST curves is that they were generated from a PRNG seeded with some kind of arbitrary value. The conspiracy theory is that the seed was selected so as to make the curve exploitable by the NSA and the NSA only. Choosing such a seed would be somewhat harder than a complete break of the hash function (IIRC SHA-2) used in the PRNG that derived the curve.

On the other hand, there are a lot of reasons to use an elliptic curve that was intentionally designed, i.e. DJB's designs. And well, in 2009 I would not have imagined that the kind of stuff DJB publishes would end up being TLS 1.3.


It's very unlikely the seeds were random, and they weren't even ostensibly generated from a PRNG, as I understand it. Rather, they were passed through SHA1 (remember: this is the 1990s) as a means to destroy any possible structure in the original seed. The actual seeds themselves aren't my story to tell, but they are a story that other people are talking about. For my part, I'll just point again to Koblitz and Menezes on the actual cryptographic problems with the NIST P-curve seed conspiracy:

https://eprint.iacr.org/2015/1018.pdf


This seems to be all that is publicly known about the seeds: https://saweis.net/posts/nist-curve-seed-origins.html


A hash function is a (CS)PRNG. It has the key property, namely being indistinguishable from randomness while being generated deterministically.


In fact, `echo "This is my seed" | openssl sha256` is not really a CSPRNG. Hash functions are the basis of many PRNGs. But I think you're abusing an ambiguity in the word "random" here. At any rate: we should be clear now on the point being made about the P-curve seeds.


That is not true. There is no such requirement for a hash function.


Thread is talking about cryptographic hash functions, given the context


Yes, they don't necessarily output random-looking things. For example, a hash function could be collision-resistant but not preimage-resistant, or vice versa. There's much more nuance in these definitions.


Yeah I've noticed people mixing them up. They happened around the same time, so I can excuse it a bit.

The problem with the NIST ECC curves is that we still do not know where the heck that seed came from and why that seed specifically.


See Koblitz and Menezes:

https://eprint.iacr.org/2015/1018.pdf


Also: if the NIST ECC curves actually are backdoored then why would the NSA need to try to push a backdoored random number generator? Just exploit the already-backdoored curves.


Redundancy, so if one backdoor is closed/fixed/avoided, you still have more.


No, it’s really not. Ask Neal Koblitz.


NSA weakened DES from 64-bit keys to 56-bit keys. The idea was that they could be ahead in breaking it, and that by the time 56-bit keys were too weak in general then something else would replace DES. Risky? Yes, but it worked out, for some value of "worked out". So I wouldn't assume something like that wouldn't happen again.
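The size of that head start is easy to quantify:

    print(2**64 / 2**56)  # 256.0: cutting 8 key bits makes brute force 256x cheaper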


They did that openly. What they did in secret was to harden it against an incredibly powerful attack (it's still a basis for block and hash cryptanalysis today) that nobody else knew about.


The general idea would be that they get a few years out of it before other nation-state actors discover it. The theory behind it is called "kleptography", because the NSA is deluded enough to think that you can steal information "securely".


It's all far too conspiratorial for me. Just show me the math as to why it's broken, I don't need a conspiratorial mind map drawing speculative lines between various topics. Do an appendix or two for that.


There's nothing conspiratorial about the post, why not read the article? The math error is described in line 2, the actual error about two screens down, highlighted in red.


Related thread from last year, with 443 comments:

https://news.ycombinator.com/item?id=32360533 ("NSA, NIST, and post-quantum crypto: my second lawsuit against the US government (cr.yp.to)")


> Discovering the secret workings of NISTPQC. I filed a FOIA request "NSA, NIST, and post-quantum cryptography" in March 2022. NIST stonewalled, in violation of the law. Civil-rights firm Loevy & Loevy filed a lawsuit on my behalf.

As much as I generally loathe djb personally, professionally he will always have my support as he’s been consistently willing to take the federal government to task in court. It brings me great joy to see he’s still at it.


Why do you dislike him personally?



Might be because he’s a bit of a “Linus” in crypto with the same ego and temper.

However, the man has done so much to advance privacy and cryptography that I think he's earned the right to be a bit snippy, especially when he's discussing something so complex that 99% of the comments are "too long to read" and "I read it but I still don't understand it".


Notwithstanding DJB's importance to cryptography, and the fact that I'm ignorant of a large number of details here, there was a point where he lost a lot of credibility with me.

Specifically, when he gets to the graphs, he says "NIST chose to deemphasize the bandwidth graph by using thinner red bars for it." That is just not proven by his evidence, and there is a very plausible explanation for it. The graph with the thinner bars is a bar chart with more data points than the other graph. Open up your favorite charting application and observe the difference between a graph with 12 data points and one with 9... of course the one with 12 data points has thinner bars! At this point, it feels quite strongly to me that he is trying to interpret every action in the most malicious way possible.

In the next bullet point, he complains that they're not using a log scale for the graph... where everything is in the same order of magnitude. That doesn't sound like a good use case for log scale, and I'm having a hard time trying to figure out why it might be justified in this case.

Knowing that DJB was involved in NTRU, it's a little hard to shake the feeling that a lot of this is DJB just being salty about losing the competition.


>At this point, it feels quite strongly to me that he is trying to interpret every action in the most malicious way possible.

Given the long and detailed history of various governments and government agencies purposefully attempting to limit the public from accessing strong cryptography, I tend to agree with the "assume malice by default" approach here. Assuming anything else, to me at least, seems pretty naive.


Eh, it goes both ways. Back in the 1970s and 1980s there was a whole lot of suspicion about changes that the NSA made to the DES S-boxes with limited explanation - was it a backdoor in some way? Then in 1989 white hats "discovered" differential cryptanalysis, and realized that the changes made to the algorithm actually protected it from a then-unknown (to the general public) cryptographic attack. Differential cryptanalysis worked beautifully on some other popular cryptosystems of the era, e.g. the FEAL-4 cipher could be broken with just 8 chosen plaintexts, while DES offered protection up to 2^47 chosen plaintexts.

The actual way that the NSA tried to limit DES was to cap its key length at 48 bits, figuring that their advantage in computing power would let them brute-force it when no one else could. (NBS, NIST's predecessor, compromised between the NSA's desire for 48 and the rest of the world's desire for 64, which is why DES had the always-bizarre 56-bit key.) So sometimes they strengthen it, sometimes they weaken it, and so I'm not sure it's appropriate to presume malice.


>So sometimes they strengthen it, sometimes they weaken it, and so I'm not sure it appropriate to presume malice.

If you had a dog that sometimes licked you and sometimes bit you, would you let it sleep with you?

Neither NSA nor NIST can be trusted. They brought this on themselves.


There's a meaningful difference between assuming an actor is malicious or untrustworthy and going out of your way to provide the maximally malicious interpretation of each of their actions. As a matter of rhetoric, the latter tends to give the impression of a personal vendetta.


DJB has lost a ton of credibility already within the non-government cryptography community for his frankly unhinged rants on the PQC mailing list.

If you read his posts there, it’s hard not to come away with the impression that he’s just upset his favourite scheme wasn’t chosen.


Stare into randomness for long enough, and you'll see something staring back. There's a reason I didn't go pure-math


Hasn't djb always been rather difficult and ranty? That's certainly always been my impression of him.


If you continue reading, you'll find that they aren't responding to requests for clarification on their hand-waving computations. Suspicion is definitely warranted.


> Knowing that DJB was involved in NTRU, it's a little hard to shake the feeling that a lot of this is DJB just being salty about losing the competition.

There aren't a lot of people in the world with the technical know-how for cryptography. It's clear that competitors in this space are going to be reviewing each other's work.


Yes, that was the premise of the competition, and was in fact what happened.


Sure, but this was just a weird thing to home in on.


FWIW, there are two NTRUs: the original one, which had no djb involvement, and NTRU Prime, which does.


Yeah. It does honestly sound like he looked at the options and decided that this one was the best, then he started contributing.


Something I've learned from a career of watching cryptographer flame wars: Don't bet against Bernstein, and don't trust NIST.




I'm not sure N(IST)SA has any credibility left. The popularity of Curve25519 over their P curves is encouraging, and it would be great to see the community continue in this direction and largely ignore them going forward. The government shouldn't be leading or deciding; it would be better organized around gathering current consensus and following when it comes to FIPS, regulation, etc.


The NIST standardization process appears to have a grey area, particularly around the selection of constants.

The skepticism around standardization, advocating instead for direct adoption from cryptographers, sheds light on potential shortcomings in the current system.

There is definitely a need for more transparent and open scrutiny in algorithm standardization to ensure security objectives are met.


Related note: Government employees (including military, intel) are just people, and worse, bureaucrats. They aren't magical wizards who can all do amazing things with mathematics and witchcraft. If they were good at what they do, they wouldn't need ever increasing funding and projects to fix things.


Cryptanalysis and encryption are somewhat of an exception to this. There are some extremely smart people who work in these areas for the government, precisely because funding and application is on a different scale.


Very few folks except the gov’t have real existential need for best in breed crypto, frankly.


My takeaway (impression) from the DJB post is that the NISTPQC evaluation does not seem to provide algorithms with a firm level of security - that the evaluation is not clear-cut and does not provide a good, conservative lower bound for the security of the algorithms selected.


"Security is supposed to be job #1. So I recommend eliminating Kyber-512."


It would be interesting to see Signal Sciences' response to this post of Bernstein's.


Signal seems to use Kyber-1024, which does meet the NIST contest's security criteria.

I wrote some more details here: https://community.signalusers.org/t/signal-blog-quantum-resi...


Who is Signal Sciences?


Actually, I meant Open Whisper, the company behind Signal.

Got my wires crossed.


nit: Open Whisper Systems (who built WhisperCore, an encryption layer; WhisperMonitor, a firewall; TextSecure, a messenger; and RedPhone, a voice call service) exited to Twitter back in the day [1]. Signal is apparently marketed and developed by Signal Messenger LLC [2].

[0] https://archive.is/IRYTQ

[1] https://news.ycombinator.com/item?id=3286530

[2] https://archive.is/ZI7G1


Minor typo. "How can NIST justify throwing NIST-509 away?" should be "How can NIST justify throwing NTRU-509 away?"


Scorpions and frogs as usual.


If you have never heard of Bernstein, this may look like mad ramblings of a proto-Unabomber railing against THE MAN trying to oppress us.

However, this man is one of the foremost cryptographers in the world, he has basically single-handedly killed US government crypto export restrictions back in the day, and (not least of all because of Snowden) we know that the NSA really is trying to sabotage cryptography.

Also, he basically founded the field of post-quantum cryptography.

Is NIST trying to derail his work by standardizing crappy algorithms with the help of the NSA? Who knows. But to me it does smell like that.

Bernstein has a history of being right, and NIST and the NSA have a history of sabotaging cryptographic standards (google Dual_EC_DRBG if you don't know the story).


This comment is factually incorrect on a number of levels.

1) single-handedly killed US government crypto export restrictions - Bernstein certainly litigated, but was not the sole actor in this fight. For example, Phil Zimmerman, the author of PGP, published the source code of PGP as a book to work around US export laws, which undoubtedly helped highlight the futility of labelling open source software as a munition: https://en.wikipedia.org/wiki/Pretty_Good_Privacy#Criminal_i...

2) Bernstein "founded" the field of post quantum cryptography: Uh. Ok. That's not how academia works. Bernstein was certainly an organiser of the first international workshop on post quantum cryptography, but that's not the same as inventing a field. Many of the primitives that are now candidates were being published long before this, McEliece being one of the oldest, but even Ajtai's lattice reductions go back to '97.

3) The Dual_EC RNG was backdoored (this previously read "was and is fishy"; poor wording on my part), but nobody at the time wanted NIST to standardize it because it was a _poor PRNG anyway_: slow and unnecessarily complicated. Here is a patent from Scott Vanstone on using Dual_EC for "key escrow", which is another way of saying "backdoor": https://patentimages.storage.googleapis.com/32/9b/73/fe5401e... - filed in 2006. In case you don't know Scott Vanstone, he's the founder of Certicom. So at least one person noticed. This was mentioned in a blog post, written after the Snowden leaks, working out how the backdoor happened: https://blog.0xbadc0de.be/archives/155

NSA have been caught in a poor attempt to sabotage a standard that nobody with half a brain would use. On the other hand NSA also designed SHA-2, which you are likely using right now, and I'm not aware of anyone with major concerns about it. When I say NSA designed it, I don't mean "input for a crypto competition" - a team from the NSA literally designed it and NIST standardized it, which is not the case for SHA-3, AES or the current PQC process.

DJB is a good cryptographer, better than me for sure. But he's not the only one - and some very smart, non-NSA, non-US-citizen cryptographers were involved in the design of Kyber, Dilithium, Falcon etc.
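To make the structure of that backdoor concrete, here is a toy Dual_EC-style sketch in Python. Everything in it is an illustrative assumption of mine - a tiny textbook curve over GF(23) instead of P-256, made-up points, a made-up trapdoor d, and no output truncation - not the actual standardized parameters:

    # Toy Dual_EC-style DRBG, to illustrate the trapdoor structure only.
    p, A, B = 23, 1, 1            # curve y^2 = x^3 + x + 1 over GF(23)

    def inv(n):                   # modular inverse via Fermat's little theorem
        return pow(n, p - 2, p)

    def add(P1, P2):              # EC point addition; None = point at infinity
        if P1 is None: return P2
        if P2 is None: return P1
        (x1, y1), (x2, y2) = P1, P2
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P1 == P2:
            lam = (3 * x1 * x1 + A) * inv(2 * y1) % p
        else:
            lam = (y2 - y1) * inv(x2 - x1) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def mul(k, pt):               # double-and-add scalar multiplication
        acc = None
        while k:
            if k & 1:
                acc = add(acc, pt)
            pt = add(pt, pt)
            k >>= 1
        return acc

    Q = (0, 1)                    # public point
    d = 2                         # the designer's secret trapdoor
    P = mul(d, Q)                 # public point, secretly P = d*Q

    def step(s):                  # one step: new state x(s*P), output x(s*Q)
        return mul(s, P)[0], mul(s, Q)[0]

    s0 = 3                        # secret seed
    s1, out1 = step(s0)
    s2, out2 = step(s1)

    # Attacker: sees only out1 and knows d. Lift out1 to a point R = +-(s0*Q)
    # (p = 3 mod 4, so a square root is pow(n, (p+1)//4, p); the sign doesn't
    # matter since x(d*R) = x(-d*R)). Then d*R = +-(s0*P), whose x-coordinate
    # is the hidden state s1, making the whole "random" stream predictable.
    y = pow((out1**3 + A * out1 + B) % p, (p + 1) // 4, p)
    s1_recovered = mul(d, (out1, y))[0]
    assert s1_recovered == s1
    _, predicted = step(s1_recovered)
    assert predicted == out2
    print("attacker predicted next output:", predicted)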


Dual EC is virtually certain to be a backdoor.

I had the same take on Dual EC prior to Snowden. The big revelation with Snowden wasn't NSA involvement in Dual EC, but rather that (1) NSA had intervened to get Dual EC defaulted-on in RSA's BSAFE library, which was in the late 1990s the commercial standard for public key crypto, and (2) that major vendors of networking equipment were --- in defiance of all reason --- using BSAFE rather than vetted open-source cryptography libraries.

DJB probably did invent the term "post-quantum cryptography". For whatever that's worth.


DualEC: agree. Wanted to note that it was a poor PRNG _anyway_, and point out that the NSA's attempt at backdooring the RNG wasn't that great - as you say, RSA BSAFE used it and it made no sense. We could also point out they went after the RNG rather than the algorithm directly, which is a less obvious strategy.

I'll believe he invented the term - I have a 2009 book so-named, for which he was an editor, surveying non-DLP/non-RSA algorithms. Still, the idea that he's "the only one who can produce the good algorithms" and that literally everyone else on the pqc list (even if we subtract all the NIST people) is wrong, is bonkers.


While I agree with a lot of what you have said,

>Still, the idea that he's "the only one who can produce the good algorithms"

The parent post did not, at all, make the claim that Bernstein is the only one.


No, true, the post did not explicitly state this. However, the post did suggest that NIST is specifically out to get him, and it took a swipe at the other candidates:

> Is NIST trying to derail his work by standardizing crappy algorithms with the help of the NSA? Who knows. But to me it does smell like that.

"Crappy" algorithms that were designed by well-regarded cryptographers, none of whom work for NIST or the NSA, many of whom are not US nationals.


The evidence seems to at least point to NIST trying to get one specific algorithm selected.

How else do you explain the after-the-fact changing of evaluation criteria (all favoring one algorithm) and the weird calculation error (which, as I understand the text, didn't come from the Kyber designers but from the evaluation committee)?

Add to that the lack of transparency (in particular, why not comply with the FOI requests?) and the much more significant involvement of NSA employees in the process (contrary to their own statement). Shouldn't that make everyone very suspicious?


It is NIST's competition. They've been open that if there are good reasons not to standardize Kyber then they won't, but, absent good reasons, they'll pick what they want.

The evaluation criteria are, naturally, under constant re-evaluation. It would make no sense to fix them in 2019 and never update them in response to research.

It seems that DJB is not necessarily representing what NIST is saying honestly with regards to security levels: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu... , https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4MBu... - It also seems that the improved dual lattice attack from MATZOV (Israeli spooks) isn't actually as practical as thought: https://eprint.iacr.org/2023/302 (this paper was published in CRYPTO, which is run purely by academia, and top-tier academia at that). What the answer actually should be depends on various cost models, but the overall conclusion seems to be that there is not a lot between Kyber and SNTRUP - on the other hand, it may be an open problem (https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/4iaf...).

Security bounds, however, are not the only reason to pick an algorithm. To take Curve25519, the prime order subgroup has order 2^252 + a bit, so it falls just short of the 128-bit security level. Should we reject it for this? Absolutely not: X25519 is excellent from an implementation perspective. This is more qualitative than quantitative, but such considerations also count.
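For the curious, that "just short of 128 bits" claim is easy to check numerically. A quick Python sanity check (l below is the published order of Curve25519's prime-order subgroup):

    import math

    # Order of the prime-order subgroup of Curve25519 (a published constant).
    l = 2**252 + 27742317777372353535851937790883648493
    print(math.log2(l))              # ~252.0: the group has about 2^252 elements
    # Generic discrete-log attacks (Pollard's rho) cost on the order of sqrt(l):
    print(math.log2(math.isqrt(l)))  # ~126.0, slightly below the 128-bit level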

Pointing out further notes by Peikert on SNTRUP, here is a detailed risk analysis: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/G0Do... with responses from others: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/G0Do... - to summarize: a) it seems the patent risk of Kyber might also apply to SNTRUP, and b) SNTRUP makes modifications over plain NTRU that are not clear. DJB also tried to argue against Kyber's performance: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/ik1p... . It is not clear that SNTRUP is necessarily a better choice either; quoting directly from NIST IR 8413 (note that the SIKE section is out of date; SIKE is definitively broken pre-quantum now):

> The current version of NTRU Prime has performance and concrete security estimates (e.g., quantitative estimates of the computational resources required for usage and cryptanalysis) that are roughly comparable to other lattice-based cryptosystems. As a result, the current version of NTRU Prime is notable more for its unusual design features, and claims that it offers higher security in a qualitative sense.

> One particular issue is the choice of the NTRU Prime ring (rather than a cyclotomic ring), which is claimed to eliminate the possibility of certain kinds of algebraic attacks. To date, most work on the cryptanalysis of algebraically structured lattices (see Appendix C) has focused on cyclotomic rings, because they are widely used and simpler to analyze. Relatively little is known about the security of cryptographic schemes that use the NTRU Prime ring.

As for the involvement of NSA employees - they show up to NIST forums on standardization and take part in the process, and if you go, it isn't exactly hard to work out who they are. NSA also has an information assurance mission. If we take what is said in https://www.youtube.com/watch?v=qq-LCyRp6bU (Richard George, an NSA targeting retrospective) to be accurate then NSA Suite-A are almost drop-in replacements for NSA Suite-B (the algorithms mandated for use across US FedGov) so the NSA team have an interest in the outcome, because future choices of cryptography suites will follow on from what is standardized.

As for FOIA: if the NSA does know about a backdoor they don't think anyone else does, don't you think they'd classify it? Wouldn't that make it exempt from any FOIA lawsuit you care to raise? If we are attributing competence to them beyond what is available publicly, then surely they wouldn't be so careless as to discuss the backdoor they'd discovered in FOIA-able channels, would they?

I'm 100% for scrutinizing the process to make sure neither the NSA (nor anyone else) can sneak in a backdoor either deliberately or by allowing it to pass through unremarked. I am not convinced by DJB's writeup: I agree that NIST have a preference for Kyber, but I do not currently see any evidence that this is an unreasonable conclusion to arrive at, or that they have substantially ignored serious flaws in the design.


Being incompetent is possible even if you’re not a US national.


And being incompetent is possible even if you’re DJB.


Competence was never the point I was making by adding nationality.

Foreign nationals don't meet the "Unquestioning loyalty to the US" criterion for security clearance, meaning it is much less likely they might collaborate willingly with the US. They're also less exposed to pressure from the US Government, especially if they are not resident/GC holders.

I personally think it is quite paranoid to assume that any US resident or national cryptographer is in cahoots with the NSA, but if we are strictly considering risks, then this process is open to scrutiny by world-wide participants, many of whom may not sit on a vulnerability at the behest of the US Gov - actually, for many of them, finding such a vulnerability would likely mean a top-tier paper at CRYPTO/EUROCRYPT, probably with best paper awards and all the plaudits that make an academic career for life.


Bernstein is often right, despite the controversy around the Gimli permutation.

In this particular case it's worth noting that neither BSI (Germany) nor NLNCSA (The Netherlands) recommends Kyber.

Unfortunately, alternative algorithms are more difficult to work with due to their large key sizes among other factors, but it's a price worth paying. At Backbone we've opted not to go down the easy route.


> If you have never heard of Bernstein, this may look like mad ramblings of a proto-Unabomber railing against THE MAN trying to oppress us.

> However, this man is one of the foremost cryptographers in the world […]

It's possible to be both (not saying Bernstein is).

Plenty of smart folks have 'jumped the shark' intellectually: Ted Kaczynski, the Unabomber, was very talented in mathematics before he went off the deep end.


> Plenty of smart folks have 'jumped the shark' intellectually: Ted Kaczynski, the Unabomber, was very talented in mathematics before he went off the deep end.

Kaczynski dropped out of society to live in a cabin alone at 29. He delivered his first bomb at 35. I'm not sure this is a reasonable comparison to invoke in any way whatsoever.

When DJB starts posting about the downfall of modern society from his remote cabin in Montana, perhaps, but as far as I know he's still an active professor working from within the University system.


While Kaczynski was clearly unhinged, and I frankly don't see how sending mail bombs did anything helpful towards solving the problems he addressed (or that his proposed solution would necessarily be better than 'the disease'), I dare anyone to read his manifesto and say he was wrong.

If DJB is unhinged but similarly insightful about a crypto algo, I think we’d all be better off. Assuming he lays off the mailbombs anyway.


There was a smart guy once who went crazy. We should assume smart people are crazy.


That's not the claim. The claim is "because we know smart people have gone crazy, we know being smart and being crazy are not mutually exclusive, so someone being smart isn't disqualified from also being crazy." Which seems obviously true.


And not useful.


Absurdly, likely due to CIA-sponsored mental abuse (MKULTRA).


>If you have never heard of Bernstein, this may look like mad ramblings of a proto-Unabomber railing against THE MAN trying to oppress us.

Can I point out that Ted Kaczynski was also actually a mathematical prodigy, having been accepted into Harvard on a scholarship at 16?


If you want, sure, but I think the reason he was mentioned with a negative connotation might be more to do with the murders he committed.


An interesting set of comments (by tptacek) from a thread in 2022 (I wonder if they still hold the same opinion in light of this latest post on NIST-PQC by djb):

> The point isn't that NIST is trustworthy. The point is that the PQC finalist teams are comprised of academic cryptographers from around the world with unimpeachable reputations, and it's ludicrous to suggest that NSA could have compromised them. The whole point of the competition structure is that you don't simply have to trust NIST; the competitors (and cryptographers who aren't even entrants in the contest) are peer reviewing each other, and NIST is refereeing.

> What Bernstein is counting on here is that his cheering section doesn't know the names of any cryptographers besides "djb", Bruce Schneier, and maybe, just maybe, Joan Daemen. If they knew anything about who the PQC team members were, they'd shoot milk out their nose at the suggestion that NSA had suborned backdoors from them. What's upsetting is that he knows this, and he knows you don't know this, and he's exploiting that.

---

> I spent almost 2 decades as a Daniel Bernstein ultra-fan --- he's a hometown hero, and also someone whose work was extremely important to me professionally in the 1990s, and, to me at least, he has always been kind and cheerful... I know what it's like to be in the situation of (a) deeply admiring Bernstein and (b) only really paying attention to one cryptographer in the world (Bernstein).

> But talk to a bunch of other cryptographers --- and, also, learn about the work a lot of other cryptographers are doing --- and you're going to hear stories. I'm not going to say Bernstein has a bad reputation; for one thing, I'm not qualified to say that, and for another I don't think "bad" is the right word. So I'll put it this way: Bernstein has a fucked up reputation in his field. I am not at all happy to say that, but it's true.

---

> What's annoying is that [Bernstein is] usually right, and sometimes even right in important new ways. But he runs the ball way past the end zone. Almost everybody in the field agrees with the core things he's saying, but almost nobody wants to get on board with his wild-eyed theories of how the suboptimal status quo is actually a product of the Lizard People.

(https://news.ycombinator.com/item?id=32365259, https://news.ycombinator.com/item?id=32368598, https://news.ycombinator.com/item?id=32365679)


I don't think the "these finalist teams are trustworthy" argument is completely watertight. If the US wanted to make the world completely trust and embrace subtly-broken cryptography, a pretty solid way to do that would be to hold a competition where a whole bunch of great, independent teams of cryptography researchers can submit their algorithms, then have a team of excellent NSA cryptographers analyze them and pick an algorithm with a subtle flaw that others haven't discovered. Alternatively, NIST or the NSA would just need to plant one person on one of the teams, and I'm sure they could figure out some clever way to subtly break their team's algorithm in a way that's really hard to notice. With the first option, no participant in the competition has to know that there's any foul play. In the second, only a single participant has to know.

Of course I'm not saying that either of those things happened, nor that they would be easy to accomplish. Hell, maybe they're literally impossible and I just don't understand enough cryptography to know why. Maybe the NIST truly has our best interest at heart this time. I'm just saying that, to me, it doesn't seem impossible for the NIST to ensure that the winner of their cryptography contests is an algorithm that's subtly broken. And given that there's even a slight possibility, maybe distrusting the NIST recommendations isn't a bad idea. They do after all have a history of trying to make the world adopt subtly broken cryptography.


If the NSA has back-pocketed exploits on the LWE submission from the CRYSTALS authors, it's not likely that a purely academic competition would have fared better. The CRYSTALS authors are extraordinarily well-regarded. This is quite a bank-shot theory of OPSEC from NSA.


It's true that nothing is 100% safe. And to some degree, that makes the argument problematic; regardless of what happened, one could construct a way for US government to mess with things. If you had competition of the world's leading academic cryptographers with a winner selected by popular vote among peers, how do you know that the US hasn't just influenced enough cryptographers to push a subtly broken algorithm?

But we must also recognize a difference in degree. In a competition where the US has no official influence over the result, there has to be a huge conspiracy to affect which algorithm is chosen. But in the competition which actually happened, they may potentially just need a single plant on one of the strong teams, and if that plant is successful in introducing subtle brokenness into the algorithm without anyone noticing, the NIST can just declare that team's algorithm as the winner.

I think it's perfectly reasonable to dismiss this possibility. I also think it's reasonable to recognize the extreme untrustworthiness of the NIST and decide to not trust them if there's even a conceivable way that they might've messed with the outcome of their competition. I really can't know what the right choice is.


That's an argument that would prove too much. If you believe NSA can corrupt academic cryptographers, then you might as well give up on all of cryptography; whatever construction you settle on as trustworthy, they could have sabotaged through the authors. Who's to say they didn't do that to Bernstein directly? If I'd been suborned by NSA, I'd be writing posts like this too!


You're still not recognizing the difference between corrupting a single academic cryptographer and corrupting a whole bunch of academic cryptographers. This isn't so black and white.

For what it's worth, I do think the US government could corrupt academic cryptographers. If I was an academic cryptographer, and someone from the US government told me to do something immoral or else they would, say, kill my family, and they gave me reason to believe the threat was genuine, I'm not so sure I wouldn't have done what they told me. And I know this sounds like spy movie shit, but this is the US government.

One last thing though, if you're giving me the black and white choice between blindly trusting the outcome of a US government cryptography standard competition or distrusting the field of cryptography altogether, I choose the latter.


As long as we're clear that your concern involves spy movie shit, and not mathematics or computer science, I'm pretty comfortable with where we've landed.


If your argument is: “assuming the US government wouldn’t be able to make someone act against their will and stay silent about it, the NIST recommendation is trustworthy”, I’m certainly more inclined to distrust this recommendation than I was before this conversation.

Note that the “forcing someone to comply” thing was just meant as one possibility among many, I don’t see why you completely dismiss the idea of someone who’s good at cryptography being in on the US’s mission to intercept people’s communications. I mean the NSA seems to be full of those kinds of people. You also dismiss the possibility that they just … picked the algorithm that they thought they could break after analysing it, with no participant being in on anything. But I get the feeling that you’re not really interested in engaging with this topic anymore, so I’ll leave it at that. It’s already late here.


Why would you use mathematics or computer science to ascertain whether someone has been corrupted by a government agency?


It's an interesting thought, but then you would need those cryptographers to not only stay quiet about it, but also spend a good chunk of the next part of their lives selling the lie.

Secrets are hard to keep at scale. Trying to do it with coercion, to a group of people whose entire field of study is covert communication, seems like an unenviable prospect.


This of course means we should ignore reasonable criticism of the contestants in this contest.


No, it doesn't.


I hope he finds all sorts of crazy documents from his FOIA thing. FOIA lawsuits are a very normal part of the process (I've had the same lawyers pry loose stuff from my local municipality). I would bet real money against the prospect of him finding anything that shakes the confidence of practicing cryptography engineers in these standards. Many of the CRYSTALS team members are quite well regarded.


> actually a product of the Lizard People

Nobody says that (not that I've seen).

My reading is that he's a combative academic railing against a standards body that refuses to say how it works and that has a deserved reputation for dishonesty and shenanigans.


I'm pretty sure that was a humorous exaggeration, and just means "conspiratorial bent". I don't think anyone really believes in Lizard People except David Icke.


This also skips his pioneering work into microservice architecture, as exemplified by the structure of qmail, djbdns, and daemontools.


Bernstein did not “found” the field of PQC. He wasn’t even doing cryptography when this field was founded!

The schemes he's railing against are also the work of top cryptographers in the space.


Love the narrative style of this piece, second-guessing the erroneous thought processes. Are they deceptive? Who knows.

What worries me is that it's neither malice nor incompetence, but that a new, darker force has entered our world, even at the tables with the highest stakes: dispassion and indifference.

It's hard to get good people these days. A lot of people stopped caring. Even amongst the young and eager. Whether it's climate change, the world economic situation, declining education, post pandemic brain-fog, defeat in the face of AI, chemicals in the water.... everywhere I sense a shrug of slacking off, lying low, soft quitting, and generally fewer fucks are given all round.

Maybe that's just my own fatigue, but in security we have to be vigilant all the time, and there's only so much energy humans can bring to that. That's why I worry that we will lose against AI. Not because it's smarter, but because it doesn't have to _care_, whereas we do.


Bad systems beat good people.

There are a lot of symptoms to distract yourself with. Focus on the game instead.

A society full of good people will sort out the rest.


This apathy is an interesting phenomenon, let's not ignore it. The Internet has brought us a wealth of knowledge but it has also shown us how truly chaotic the world really is. And negativity is a profitable way to drive engagement, so damn near everyone can see how problematic our society is. And when the algorithm finds something you care to be sad about, it will show you more, more, and ever more all the way into depression.

This is the lasting legacy of the Internet, now. Not freedom for all to seek and learn, but freedom for the negativity engines to seek out your brain and suck you into personal obliteration.

A society of good people? Nobody really cares any more. And I do agree with the gp; if you look, you can see it everywhere. What is this going to become? Collective helplessness as we eke out what little bits of personal fulfillment we can get in between endless tragedy and tantalizing promise?


Bad systems beat good people.

Everything you listed is valid (through one lens), but they are symptoms. Distract yourself with symptoms and you'll never solve the problem.

Application Service Providers, as they exist today, are bad systems.

They provide tremendous value; that's why they exist. But they also carry tremendous cost. So far, nobody has solved the cost without compromising the value.

If you want to fix the web, moving us closer to a free and open web, stay hyper focused on solving the cost without compromising the value.

First past the post voting is a bad system.

In the U.S. you don't solve politics by voting for candidates; that's treating symptoms.

If you are a staunch republican, vote republican. If you are a staunch democrat, vote democrat.

Everyone else should be hyper focused on one thing: ballot reform.

If you want to solve the problem, focus on the game and solve it. A society full of good, but currently defeated, people will do the rest.


Do we have evidence that ballot reform actually can improve political outcomes?


Not just evidence; we also have well-established models for evaluating voting systems.

First-past-the-post is one of the objectively worst, seemingly sensible, voting systems.

https://en.wikipedia.org/wiki/First-past-the-post_voting#Vot...
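To make the vote-splitting failure concrete, here is a toy illustration in Python with made-up numbers: 60% of voters prefer an A-type candidate, yet B wins under first-past-the-post:

    from collections import Counter

    # 60 of 100 voters prefer an A-type candidate, but split between A1 and A2.
    ballots = ["A1"] * 35 + ["A2"] * 25 + ["B"] * 40
    winner, votes = Counter(ballots).most_common(1)[0]
    print(winner, votes)  # B wins with 40 votes, against a 60-voter majority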


The car is on fire and there is no driver at the wheel.


Bollocks to the car. It's the rest of us innocent pedestrians who need to take cover. :)


Unfortunately, the NSA & NIST are most likely recommending a quantum-proof scheme that they've developed cryptanalysis against, either through high q-bit proprietary technology or specialized de-latticing algorithms.

The NSA is very good at math, so I'd be thoroughly surprised if this analysis was an error by mistake rather than an error by intent.


The NSA also has a mission-based interest in _breaking_ other people's crypto, which is generally known.

So I'm surprised by your argument: even if the NSA knows more than they are telling us, that doesn't make most of us feel less worried, as their ends may not be strengthening the public's cryptography!


Yes: https://en.wikipedia.org/wiki/Dual_EC_DRBG

Also, we still to this day do not know where the seed for P256 and P384 came from. And we're using that everywhere. There is a non-zero chance that the NSA basically has a backdoor for all NIST ECC curves, and no one actually seems to care.


The NIST P-256 curve seed came from the X9.62 specification drafted in 1997. It was provided by an NSA employee, Jerry Solinas, as an example seed among many other seeds, including those provided by Certicom. Read this for more details: https://eprint.iacr.org/2015/1018


Or you find it somewhat credible but still use them, because fending off the NSA is not something you want to spend energy on, and you are confident that the NSA thinks no one else can find the backdoor.


Isn't that what the person you're replying to said?


It's clear to me now that it is! Either I misread it, or maybe they edited it to make it more clear!


I just find it sad that it's things like these that make it impossible for the layman to figure out what is going on with, for example, Mochizuki's new stuff.

I have no reason to doubt that a lot of math has been made more difficult than necessary just because it is known to give a subtle military advantage in some cases, but this isn't new.


[flagged]


what do you have against dodo birds


"High q-bit proprietary technology" and "specialized de-latticing algorithms" are made up terms that nobody uses.


I'm stuck on trying to work out what it would mean to de-lattice something. Would that transform a lattice basis into a standard vector space basis in R or something, or, like MOV, would it send the whole lattice to an element of some prime extension field?

In my mind's eye, it's cooler: it's like, you render the ciphertext as a raster image, and then "de-lattice" it to reveal the underlying plaintext, scanline by scanline.


I'm still working on understanding lattices better.

But I can imagine, based on my own ignorance, creativity, and lack of correct understanding, that it would be some kind of factorization.

As I try to get a better sense of what a lattice is, I imagine it like a coordinate pair, but instead of each coordinate existing on a line, they exist on a binary tree (or some other directed graph explored from a root outwards without cycles).

Which means you have two such binary trees (not necessarily binary, but they just seem easier to work with).

And then you combine these into ONE lattice. So then, to de-lattice means to recover the binary trees.

But when I say binary tree, I'm thinking about rational numbers (because of Stern-Brocot trees).


A lattice is like a vector space, but with exclusively integer coefficients. It's not a coordinate pair. If you think of vectors as coordinate pairs, a vector space is a (possibly unbounded) set of coordinate pairs. If you haven't done any linear algebra, a decent intuition would be mathematical objects like "the even numbers" or "the odd numbers", but substituting vectors (fixed-size tuples of numbers) for scalars.
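Here is a minimal sketch of that intuition in Python (the basis vectors are arbitrary examples, not anything canonical):

    # All integer combinations a*b1 + c*b2 of two basis vectors form a lattice.
    b1, b2 = (2, 1), (0, 3)
    points = {(a * b1[0] + c * b2[0], a * b1[1] + c * b2[1])
              for a in range(-2, 3) for c in range(-2, 3)}
    print(sorted(points))
    # (1, 0.5) = 0.5*b1 is NOT a lattice point: 0.5 is not an integer,
    # which is exactly how a lattice differs from a vector space.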


Just bounce a graviton particle beam off the main deflector dish.


> through high q-bit proprietary technology

Somebody would leak or steal that as it would be a GIGANTIC leap forward in our engineering skill at the quantum level.

Getting more than a handful of qubits to stay coherent and not collapse into noise is a huge research problem right now, and progress has been practically non-existent for almost a decade.


"Specialized de-latticing algorithms"?


Assuming djb is correct and the current process is broken... is trying to expose it and then fix it through FOIA requests really the best approach?

If your codebase is hairy enough, and the problem to be solved is fundamentally fairly simple, sometimes it's better to rewrite than refactor. Doubly so if you believe a clever adversary has attempted to insert a subtle backdoor or bugdoor.

What would a better crypto selection process look like? I like the idea of incorporating "skin in the game" somehow... for example, the cryptographer who designs the scheme could wager some cash that it won't be broken within a particular timeframe. Perhaps a philanthropist could offer a large cash prize to anyone who's able to break the winning algorithm. Etc.


Taking money from the cryptographers offers the exact opposite of the incentive you want: your NSA black budget slush fund has orders of magnitude more spending power than anybody honest could hope to acquire.



