The article basically says that it's bad to support multiple choices (crypto primitives) in a protocol.
> If you need to enumerate the options you support, it means that not only you support multiple ones, which is already bad, but you need to communicate the choices in the protocol itself, meaning you do runtime negotiation. Do you want bugs? Because that’s how you get bugs.
The author then says that we should just bump the protocol version each time we change the ciphers. This assumes that, at any given time, there is exactly one algorithm that governments, standards bodies, cryptographers, companies, banks, hardware manufacturers and others can all agree on.
Good luck with that
And besides, how will you update our map that says that protocol "age" 1 uses rsa, "age" 2 uses ed25519? That's right, with "Registries". Unless you convince the whole world to just instantly switch to the new version at the same time, so that you can also drop the old version.
> The industry now understands updating and patching software quickly is critical
Not my experience with cellphones, IoT, PCs, or even servers running any OS.
I might be in a bad mood, but I don't feel the article provides any practical option or that the author has thought this out much
> And besides, how will you update our map that says that protocol "age" 1 uses rsa, "age" 2 uses ed25519?
With protocol versions, which don't require a registry for the primitives. Like, there is no number that means RSA and no number that means Ed25519, there is just age v1 and age v2, and each of them uses only the one specified primitive. If you want to call version numbers registries, sure, but I think you get what I mean.
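To make that concrete, here is a minimal sketch (in Go; the function names and version strings are mine, not age's actual wire format): dispatch happens on the version alone, each branch hard-codes its primitives, and there is no algorithm identifier to look up in any registry.

```go
package main

import "fmt"

// Hypothetical stubs standing in for complete, fixed implementations.
func decryptV1(payload []byte) ([]byte, error) { return payload, nil } // v1: one fixed suite, e.g. X25519 + ChaCha20-Poly1305
func decryptV2(payload []byte) ([]byte, error) { return payload, nil } // v2: another, equally fixed suite

// decrypt dispatches on the protocol version alone. The message carries no
// algorithm field; knowing the version tells you everything about the crypto.
func decrypt(version string, payload []byte) ([]byte, error) {
	switch version {
	case "age-v1":
		return decryptV1(payload)
	case "age-v2":
		return decryptV2(payload)
	default:
		return nil, fmt.Errorf("unknown protocol version %q", version)
	}
}

func main() {
	out, err := decrypt("age-v1", []byte("ciphertext"))
	fmt.Println(out, err)
}
```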
People bring up "what about AES hardware" a lot, but they also seem to like WireGuard, which also happens to be the fastest VPN option for most deployments. Well, WireGuard just silently went and did what I'm advocating, including only supporting ChaCha20-Poly1305.
So instead of having "primitive1 = rsa" you have "age1 = rsa".
It's the same thing. You are just bumping the protocol version faster, and confusing people because protocol 42 will be just another crypto change, but 43 will be a complete protocol rewrite.
Meanwhile each protocol version needs to be standardized and accepted by the whole world. That stuff is not fast
Interesting. I was about to comment on how well known and widespread everything in the article is, and welcome the author to the last decade... so reading your disagreement is really unexpected.
Crypto algorithm negotiation is an artifact of the '90s. Crypto is now much more stable than communication protocols, so there is no need to negotiate algorithms, and the negotiations are vulnerable to attack so one just can not keep support for broken clients, however practical it is to update them.
And, anyway, protocol version negotiation is still pretty much a thing.
Challenges for new standards are underway, AEAD standardization is relatively new, ECC is not quantum-proof, we still have no definitive picture on lattices, and so on.
Even ChaCha has multiple evolutions.
DJB, a famous and influential cryptographer, worked on MinimaLT, a protocol that uses only one primitive and has no negotiation. Go check how widespread it is, then think about your ability to convince the world of how good your choice is.
> one just can not keep support for broken clients, however practical it is to update them
In an ideal world I would agree. Not in this one
BTW, I see no difference between version negotiation and primitive negotiation. You can't say "primitive negotiation is bad!" and "protocol negotiation is not bad!". Especially since negotiating protocol versions without crypto primitives is just screaming for downgrade attacks.
If done correctly, it's the same mechanism. If not, both can have the same flaws.
This distinction between protocol version support and primitive support reeks of a lack of knowledge in the field (both crypto and protocols) and of insecurity of the product, and it sounds like an impractical philosophy (especially given the article's renaming of many common terms of the field).
Code reuse is a (big) thing. If a piece of software supports multiple "ages", those will be handled with a "registry".
Evolution is another. People have taken huge pains to extend the features and capabilities of existing protocols. QUIC is winning probably not only due to Google, but also due to its internal versioning of various features.
TLS history showed us that upgrading the cipher list and deprecating old ciphers is far faster than trying to bump the TLS protocol version. I can see no reason why anyone would want to go back to the days of un-upgradeable protocols and their nightmares.
> BTW, I see no difference between version negotiation and primitive negotiation. You can't say "primitive negotiation is bad!" and "protocol negotiation is not bad!". Especially since negotiating protocol versions without crypto primitives is just screaming for downgrade attacks.
Primitive negotiation involves primitives being decided at runtime. This means you can, for example, tell a system that normally does RSA verification to switch to HMAC-SHA256 instead (but using the RSA public key as an HMAC shared key). This is what bit many JWT implementations.
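For anyone who hasn't seen that class of bug, here is a rough sketch of the flawed dispatch (the names are mine and real JWT libraries differ in the details): the attacker controls the alg header, so they can steer verification onto HMAC using the server's well-known RSA public key bytes as the "secret".

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// verifyRSA is a stub standing in for real RS256 signature verification.
func verifyRSA(pubPEM, signingInput, sig []byte) bool { return false }

// verify shows the flawed pattern: the token's own header picks the primitive
// at runtime. keyMaterial is whatever the server has on file; in an RS256
// deployment that's the PEM bytes of an RSA *public* key.
func verify(alg string, keyMaterial, signingInput, sig []byte) bool {
	switch alg {
	case "RS256":
		return verifyRSA(keyMaterial, signingInput, sig) // intended path
	case "HS256":
		// Attacker-chosen path: the public key bytes become an HMAC secret,
		// so anyone who knows the public key can forge "valid" tokens.
		mac := hmac.New(sha256.New, keyMaterial)
		mac.Write(signingInput)
		return hmac.Equal(mac.Sum(nil), sig)
	}
	return false
}

func main() {
	pub := []byte("-----BEGIN PUBLIC KEY-----...") // known to everyone
	fmt.Println(verify("HS256", pub, []byte("header.payload"), []byte("forged")))
}
```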
Protocol negotiation involves primitives being decided at the time that a protocol version is defined, and then you can only select the version to use. You don't get to tell the server "use this encryption algorithm in this mode", you can only say "use v2" or "use v3". There is no mix-n-match potential.
You can't use v2 age with RSA.
This isn't a subtle difference. Most cryptography vulnerabilities occur at the joinery between primitives.
That’s why cryptographic systems where “governments, standard bodies, cryptographers, companies, banks, hw manufacturers and others” need to agree on things don’t tend to be very secure.
> And besides, how will you update our map that says that protocol "age" 1 uses rsa, "age" 2 uses ed25519? That's right, with "Registries". Unless you convince the whole world to just instantly switch to the new version at the same time, so that you can also drop the old version.
There is a not so subtle difference here.
Rule: When more than one algorithm is implemented in a protocol, negotiation is unavoidable. What you can choose is to do one of two things:
1. Run-Time Negotiation -- Within a given version of a protocol, you can have multiple simultaneous in-flight algorithms, modes, key sizes, etc. at play.
2. Compile-Time Negotiation -- Version 1 only does Algorithm ABC with config XYZ, Version 2 does Algorithm DEF with config VXW. You cannot get version 1 with another algorithm.
(I use "Compile-Time" somewhat hand-wavy here; some programming languages are interpreted instead of compiled.)
It should be plainly obvious why option 1 is more dangerous and harder to prove secure than option 2.
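One way to see the difference is in what has to go on the wire. A rough sketch (the field names are mine, not from any real protocol): option 1 puts lists of algorithm identifiers in the message itself, option 2 puts only a version number there.

```go
package negotiation

// Option 1, run-time negotiation: the wire format carries algorithm choices,
// so a peer must parse, validate, and safely combine every permutation.
type HelloRuntime struct {
	CipherSuites []uint16 // registry IDs, e.g. AES-GCM, ChaCha20-Poly1305, ...
	Groups       []uint16 // key-exchange groups, also registry IDs
	SigAlgs      []uint16 // signature algorithms, also registry IDs
}

// Option 2, compile-time negotiation: the wire format carries only a version;
// every primitive is implied by it and fixed in the implementation.
type HelloVersioned struct {
	Version uint8 // v1 = one fixed suite, v2 = another fixed suite
}
```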
> how will you update our map that says that protocol "age" 1 uses rsa, "age" 2 uses ed25519?
I'd imagine most (but not all) cipher suite changes would also include meaningful protocol changes, so a registry that tells you about cipher suites wouldn't help. And, if this is for open consumption, the client has to deal with servers that haven't updated too, so you need version negotiation unless the old cipher is super bad. My personal experience is more with centralized servers, where I could guarantee protocol changes were fully distributed to servers, so clients did not have to negotiate. And we had to support old protocols until the last client that used them had expired.
Just out of curiosity, don't you think that in strictly versioned software, i.e. software where everyone has to be running the same version at runtime for it to work, you could enforce such versioning of primitives?
I think there's a world where forcing users to have as little choice as possible about how they use a cryptographic system keeps them safer, don't you?
> software that everyone has to be running the same version at runtime for it to work
Have you ever attempted a flag day? Even at a small-scale organization it's a nightmare - imagine trying to do it at the scale of a credit card processor (tens of millions of devices) or Google (billions of devices).
Anybody paying attention knows I disagree with Filippo about the actual cryptographic content here. In practice, systems have different constraints. AES, for example, is pretty nice if you have hardware support. Some systems do and some don't. By offering AES as a choice you allow people who do have hardware support to prefer AES, while those who don't can have, say, ChaCha20. And so you end up needing such a registry.
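To be concrete about what that choice looks like locally, here's a rough sketch of the preference logic (the CPU feature fields are from golang.org/x/sys/cpu; the policy itself is illustrative, roughly what TLS stacks do internally):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

// preferredAEAD picks AES-GCM where the CPU has dedicated AES instructions and
// ChaCha20-Poly1305 where it doesn't (constant-time software AES is slow and
// easy to get wrong). A negotiated protocol lets each peer express this.
func preferredAEAD() string {
	if cpu.X86.HasAES || cpu.ARM64.HasAES {
		return "AES-256-GCM"
	}
	return "ChaCha20-Poly1305"
}

func main() {
	fmt.Println("would prefer:", preferredAEAD())
}
```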
So instead I will focus on how I agree with his choice of movies. Before Sunrise in particular is really under-appreciated.
You'll be lucky. People keep calling their weak attempts at taking down ideas they disagree with variants on "A Modest Proposal", and Swift died in 1745.
>...but you need to communicate the choices in the protocol itself, meaning you do runtime negotiation.
But then the author goes on to use age as an example, which encrypts data at rest so that no negotiation is possible. Sort of a low-key way to weaken your own point.
This is completely the wrong place for this, but you don't list an email address and I don't want a Twitter account
In Dispatch #4 you complain (speaking of FIDO for OpenSSH) that "As far as I can tell, that's true when the key handle is indeed an encrypted private key, but there's nothing in the spec requiring that". In the WebAuthn spec, which I've actually read, unlike the various other specifications, there are two alternatives offered:
1. At least 16 bytes that include at least 100 bits of entropy, OR
2. The public key credential source, without its Credential ID or mutable items, encrypted so only its managing authenticator can decrypt it. This form allows the authenticator to be nearly stateless, by having the Relying Party store any necessary state.
(The Reply-To of the Dispatches goes to my inbox!)
Nice! Both 1 and 2 should provide the security properties. Now I do wonder how the WebAuthn spec reflects the ecosystem of tokens compatible with OpenSSH. Honestly, I don't even know if OpenSSH implements WebAuthn or if there's a different umbrella spec?
I can't imagine there's actually a meaningful market for products that implement the hardware protocols described in FIDO (which use byte-conservative CBOR because it's imagined that you'd run them over some ratty wireless protocol with no bandwidth) but aren't intended to be used with WebAuthn (which is full of JSON mapped to that CBOR because it's imagined you're a full-fat modern web site displayed on a modern graphical web browser like Chromium or Firefox), but they are separate specifications.
But OpenSSH doesn't need WebAuthn; it just needs FIDO.
You'd presumably hate both specifications, because they drag in (and rely upon) registries for a bunch of technically unnecessary stuff, and not even for the practical engineering reason I've excused in my sister posts to this thread. It's pure politics: you need everybody on the bus, so if between them they insist upon stupid ideas A, B and C, your options are to veto A, B and C, in which case nobody gets on the bus and the exercise was futile, or to allow A, B and C with your face buried in your hands and plan for the inevitable negative security consequences.
Microsoft really wanted to do RSA for example. So, even though you wouldn't do RSA and I'm sure none of the Google people working on this love it either, the WebAuthn specification makes RSA an option and Microsoft shipped code that does RSA with WebAuthn.
I believe agl wanted the standards to rip out U2F's counters. Some security people love counters (does somebody have a war story where counters actually saved the day? I want to hear that war story) and wanted them kept, so they are still in FIDO2/WebAuthn even though they're a potential privacy hazard.
I suppose you can make any at-rest encryption scheme negotiate (even PGP) if you build a negotiation system that uses the scheme as a primitive. I was only referring to the way such things are normally used: you make an encrypted file, and after that the only way to decrypt it is the way it was encrypted.
Looks like an oracle attack against malleable encryption. It does something clever to enhance the effectiveness of that attack but nothing there could reasonably be considered negotiation.