> Protocols are very sensitive to errors, possibly more than primitives. If you screw up the internals of a primitive, the results will be different (and visibly so), but it stands a good chance at still being secure.
I don't know if I agree with this. You can easily write an implementation of a primitive that even creates the correct output bytes while leaking secrets via every side channel, or via the one side channel you didn't realize existed.
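The classic example of the side channel you didn't realize existed is a short-circuiting comparison. A minimal Python sketch (function names are mine, made up for illustration):

```python
import hmac

def naive_mac_check(received: bytes, expected: bytes) -> bool:
    # == bails out at the first differing byte, so the time taken
    # leaks how long a matching prefix the attacker has guessed.
    # The output bytes are "correct", but the timing is a side channel.
    return received == expected

def constant_time_mac_check(received: bytes, expected: bytes) -> bool:
    # hmac.compare_digest takes time independent of where
    # (or whether) the inputs differ.
    return hmac.compare_digest(received, expected)
```

Both functions return the same answers; only the naive one tells the attacker how close their guess was.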
I also think that "protocol" is too wide to be a useful category. TLS is a protocol, right? But what about HTTPS? Your site's API? There is always going to be cryptography at the bottom, but at some point you have to draw a line or "don't roll your own crypto" becomes "don't write your own software" because everything is in scope.
Or maybe you can't draw that line because the upper layer stuff still has implications. Think about the compression oracle attacks. The upper layer has secret data, compresses it and then shovels it through a "secure" protocol but has already leaked the secret contents through the content size difference due to the compression. But if that means everything is in scope, what then?
> or via the one side channel you didn't realize existed.
Can't do much about that one. Gotta have someone telling you, or (worst case) the very state of the art advancing under your feet.
> at some point you have to draw a line or "don't roll your own crypto" becomes "don't write your own software" because everything is in scope.
Yes, there is a point beyond which you don't have a choice. The natural (and utterly impractical) line to draw is untrusted input. Talking over the internet is the obvious case, but merely playing a video exposes you to untrusted data that might take over your program and wreak havoc.
> Can't do much about that one. Gotta have someone telling you, or (worst case) the very state of the art advancing under your feet.
But that's kind of the point. Primitives aren't easy either. Even one thought to be secure yesterday might not be today, which makes it easy to get wrong merely by relying on literature from a year ago rather than today.
> The natural (and utterly impractical) line to draw is untrusted input.
I think you're right about that, both as to where the line really is and as to how impractical that is if you don't want to just end up with essentially everything being in scope.
> Compression is a tough one. I'd personally try padding.
I'm not sure you can fix it strictly in the lower layers. The general attack works like this: The attacker can supply some data, the victim compresses that data and some secret data together and then sends the combination encrypted over a channel where the attacker can observe the size. If the attacker-supplied data matches the secret data, it can be compressed more so it gets smaller.
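You can watch this happen with a few lines of Python; zlib stands in for the victim's compressor and the secret is made up:

```python
import zlib

SECRET = b"sessionid=s3cr3tt0ken"  # hypothetical secret held by the victim

def observed_length(attacker_data: bytes) -> int:
    # The victim compresses the attacker's data together with the secret
    # and sends it over an encrypted channel; the attacker can't read the
    # bytes, but can observe the size.
    return len(zlib.compress(attacker_data + SECRET))

# A guess sharing a long prefix with the secret lets the compressor
# replace that prefix with a back-reference, so the output is smaller
# than for an equally long non-matching guess.
matching = observed_length(b"sessionid=s3cr3t")
wrong = observed_length(b"sessionid=QWJKZV")
```

The attacker never decrypts anything; comparing `matching` and `wrong` is the whole oracle.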
The attacker's job is easiest when supplying one more non-matching byte causes the observed length to grow by one byte, but fixed padding only changes the bookkeeping: the attacker finds the padding boundary by supplying increasing amounts of pseudorandom (i.e. incompressible) data until the output steps up to the next size, then swaps in different bytes until some of them match the secret and the output drops back down thanks to the compression. Random padding just requires more samples to average out the randomness statistically.

In theory you could fix it by making all messages the same length, but then supporting large messages means every message becomes large, which could be unreasonably inefficient. And the whole point of the compression was to do the opposite of this.
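That boundary-hunting step can be sketched concretely. Everything here is a toy and hypothetical (zlib as the compressor, 16-byte padding blocks, made-up secret), but the shape of the attack is real:

```python
import zlib

SECRET = b"token=s3cr3tvalue"   # hypothetical secret
BLOCK = 16                      # hypothetical fixed padding granularity

def padded_length(attacker_data: bytes) -> int:
    raw = len(zlib.compress(attacker_data + SECRET))
    return (raw + BLOCK - 1) // BLOCK * BLOCK  # round up to the pad boundary

def filler(n: int) -> bytes:
    # Distinct printable bytes: no repeated 3-byte substrings, so zlib
    # finds no back-references and the filler stays incompressible.
    return bytes(33 + (i * 11) % 94 for i in range(n))

matching_guess = b"token=s3cr3t"   # shares a 12-byte prefix with the secret
wrong_guess = b"XQZVJWNBHGFD"      # same length, nothing to match

# Slide the filler length until the two guesses straddle a padding
# boundary: the matching guess compresses away and stays in the lower
# block while the wrong one tips over into the next.
k = next(n for n in range(90)
         if padded_length(filler(n) + matching_guess)
          < padded_length(filler(n) + wrong_guess))
```

The padding only quantizes the oracle; it doesn't remove it, it just costs the attacker a search for `k`.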
The real solution is not compressing attacker-supplied data together with secret data to begin with, but that means the upper layer has to know not to do that.
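One way to sketch that separation (a made-up helper, not anyone's real API): give the attacker-controlled part and the secret part independent compression contexts, so neither can back-reference the other.

```python
import zlib

def compress_separately(attacker_data: bytes, secret: bytes) -> bytes:
    # Two independent zlib streams: a guess in attacker_data can never
    # back-reference bytes of the secret, so the output size no longer
    # depends on how well the guess matches.
    return zlib.compress(attacker_data) + zlib.compress(secret)

# Equal-length guesses now yield equal-length output, match or no match.
matching_len = len(compress_separately(b"sessionid=s3cr3t", b"sessionid=s3cr3tt0ken"))
wrong_len = len(compress_separately(b"sessionid=QWJKZV", b"sessionid=s3cr3tt0ken"))
```

Of course you give up cross-part compression gains, and, as said above, the upper layer is the only thing that knows where the attacker/secret boundary is.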
> Or maybe you can't draw that line because the upper layer stuff still has implications.
Yes. You gave the example of compression. One of the things you'll find inside HTTP/3 (not QUIC, and not even TLS, even though QUIC sits underneath HTTP/3 and TLS provides the cryptography for QUIC) is an explicit design choice to compress each header separately because of attacks like CRIME and BREACH.
Arguably the whole web stretches this point. Cryptologists commonly assume an adversary who can force participants to send messages of the adversary's choosing and then observe the results - in many consumer applications that attack class sounds pretty fanciful. But because of Javascript it's actually tremendously easy on the Web, and we must either prepare against it or expect the bad guys to always win.
> if that means everything is in scope, what then?
Training. Your programming teams need appropriate skills and training to cope with the implications for their environment. You likely already train employees about what to do if there's a fire, or if a would-be supplier offers them Super Bowl tickets, or their boss asks them for a blowjob, or the new head of marketing wants to email the user database to an ad exec, or plenty of other things.
Most likely they also need a trustworthy expert they can escalate any hard questions to. That could be somebody in-house at a big organisation, or it could be outsourced, especially at smaller firms. Any questions they ask can help shape future training.