The heart of his argument is this, with which I absolutely agree:
"The moral is the need for cryptographic agility. It’s not enough to implement a single standard; it’s vital that our systems be able to easily swap in new algorithms when required. We’ve learned the hard way how algorithms can get so entrenched in systems that it can take many years to update them: in the transition from DES to AES, and the transition from MD4 and MD5 to SHA, SHA-1, and then SHA-3."
Although, personally, I am more supportive of the OpenVPN model (many standards to choose from, including older algos; maybe too much choice) than the Wireguard model (one set of well-thought-out defaults, no choice), one has to ask -- aren't they both wrong? Isn't the correct model high flexibility, combined with relentlessly deprecating and removing older standards and, maybe, a clear nudge towards sensible default choices ("X recommends the following algos in 2022...")?
Obviously crypto is super hard. But the 'problem of agility' seems like a software engineering problem, not a hard crypto-theoretic or implementation issue.
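For concreteness, here is a minimal sketch of that "flexibility plus relentless deprecation" model: a registry that offers several algorithms but refuses anything past its sunset date and points callers at a dated recommendation. The algorithm names and dates are invented for illustration.

    from datetime import date

    # Illustrative registry: algorithm name -> sunset date
    # (None = no planned sunset). Names and dates are made up.
    REGISTRY = {
        "aes-256-gcm":       None,
        "chacha20-poly1305": None,
        "aes-128-cbc":       date(2023, 1, 1),  # deprecated
        "3des-cbc":          date(2020, 1, 1),  # removed long ago
    }

    RECOMMENDED = ["chacha20-poly1305", "aes-256-gcm"]  # "X recommends..."

    def select_cipher(requested, today=None):
        today = today or date.today()
        sunset = REGISTRY.get(requested, date.min)  # unknown algos are refused
        if sunset is not None and sunset <= today:
            raise ValueError(f"{requested} is deprecated or unknown; "
                             f"recommended: {RECOMMENDED}")
        return requested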
If the software supports multiple cryptographic primitives, how do you stop a downgrade attack once one of those primitives becomes obsolete?
Doing so securely would require cryptographic primitives that cannot become weak and obsolete, at which point why not just use those for everything? It's a chicken-and-egg problem.
So then what stops a downgrade attack? The attack would no longer leak encrypted data, sure, but now it acts as a DoS attack. In reality the user would fall back to another method, one that is probably easier to break than the latest and best crypto.
Without secure crypto we can't securely establish which primitives to use.
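To make the circularity concrete: the standard mitigation (TLS 1.3 does a version of this) is to MAC the entire negotiation transcript, every algorithm offered plus the one chosen, so a MITM that strips the strong options gets caught. But notice the detection itself leans on a hash and a MAC staying strong, which is exactly the chicken-and-egg point. A rough sketch, with key establishment elided:

    import hashlib
    import hmac

    def transcript_mac(key, offered, chosen):
        # MAC over everything that was offered, not just what was picked,
        # so silently removing strong options changes the MAC.
        transcript = ("|".join(offered) + "->" + chosen).encode()
        return hmac.new(key, transcript, hashlib.sha256).digest()

    def verify_no_downgrade(key, offered, chosen, peer_mac):
        # Only sound while SHA-256/HMAC themselves remain unbroken.
        return hmac.compare_digest(
            transcript_mac(key, offered, chosen), peer_mac)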
I guess I'm wondering what you're getting at. Are you saying a more inflexible/less agile model is a better model? Doesn't that have a similar problem? When, say, Wireguard's choices become obsolete, doesn't that operate like a DoS as well? What's your point?
> Are you saying a more inflexible/less agile model is a better model?
No, I am saying there is no secure answer. Both choices result in the same thing: use one primitive, and when it becomes broken the protocol is broken; or negotiate a primitive, and face downgrade attacks that make the protocol just as broken.
There is no way to stop a MITM attack at the protocol level; it has to come from the cryptography, at which point what happens when that cryptography is broken? The whole thing is a chicken-and-egg problem. There is no secure answer.
It has to be taken as a given that all encryption deployed today will get “weaker” over time, so that’s already accounted for. The solution is that you just have to live with some risk from older algorithms, and make sure the replacement schedule is reasonably fast.
You only need the absolute highest levels of security for a very few things, and if those are so critical then cutting off access is probably the right choice if the best algorithms aren’t available. Presumably those situations are in well-controlled environments and everyone is aware and can manage that type of coordination.
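A fail-closed policy check makes that tradeoff explicit: if nothing the peer offers meets today's bar, refuse the connection rather than negotiate down. Algorithm names here are illustrative only.

    # Deliberately trading availability for security: abort rather than
    # fall back. APPROVED is this year's minimum bar, illustrative names.
    APPROVED = {"aes-256-gcm", "chacha20-poly1305"}

    def negotiate(peer_offers):
        usable = APPROVED & set(peer_offers)
        if not usable:
            raise ConnectionError("no approved algorithm in common; refusing")
        return sorted(usable)[0]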
> Current quantum computers are still toy prototypes, and the engineering advances required to build a functionally useful quantum computer are somewhere between a few years away and impossible.
We don't know what quantum computers exist in the hands of powerful adversaries like state actors, since they do not openly share this information. Schneier even admits this later on.
> This represents the first time a national intelligence organization has published a cryptanalysis result in the open literature.
Given how much control the NSA has over NIST when defining standards, the NSA's and NIST's consistent history of intentionally weakening their standards, and the current secrecy around NIST-NSA collaboration (especially when it comes to the NIST PQ competition), there is very good reason to believe that these cryptographic primitives are all compromised.
I would really hope that, for the next two decades at least, no one deploys anything using post-quantum crypto without wrapping it in a layer of traditional crypto. If your threat model includes quantum computers, it definitely also includes new mathematical attacks on a relatively poorly studied algorithm.
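A sketch of that wrapping: run a classical exchange and a post-quantum KEM, then derive the session key from both shared secrets, so an attacker has to break both primitives. Only the combining step is shown; the X25519 and PQ KEM calls that produce the two secrets are assumed to happen elsewhere.

    import hashlib

    def derive_hybrid_key(classical_ss, pq_ss):
        # classical_ss: shared secret from e.g. X25519; pq_ss: shared
        # secret from a PQ KEM (e.g. ML-KEM). If the PQ math falls to
        # new cryptanalysis, the classical secret still protects the
        # key, and vice versa against a quantum attack.
        return hashlib.sha256(b"hybrid-v1" + classical_ss + pq_ss).digest()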
"The moral is the need for cryptographic agility. It’s not enough to implement a single standard; it’s vital that our systems be able to easily swap in new algorithms when required. We’ve learned the hard way how algorithms can get so entrenched in systems that it can take many years to update them: in the transition from DES to AES, and the transition from MD4 and MD5 to SHA, SHA-1, and then SHA-3."
Although, personally, I am more supportive of the OpenVPN model (many standards to choose from, including older algos, maybe too much choice) compared to the Wireguard model (one set of well thought of defaults, no choice), one has to ask -- aren't they both wrong? Isn't the correct model high flexibility, while relentlessly deprecating and removing older standards, and, maybe, a clear nudge towards sensible default choices ("X recommends the following algos in 2022...").
Obviously crypto is super hard. But the 'problem of agility' seems like a software engineering problem not a hard crypto theoretical or implementation issue.