<Proceeds to describe Cryptography in a way that confirms that it is exactly as complex as I set it out in my head>
e.g. from just the first paragraph:
> " Never ignore timings, they're part of most threat models."
> On most CPUs, the timing side channel can be eliminated by removing all secret dependent branches and all secret dependent indices.
> Some CPUs also have variable time arithmetic operations.
> Watch out for multiplications and shifts by variable amounts in particular.
Yeah dude, stuff like this is EXACTLY what most people don't want to think about, and shouldn't have to think about, and which is why the guidance is "don't roll your own".
I reject his premise as well that this guidance prevents good people from pursuing Crypto as a field of study - as far as I can tell it's not discouraging anyone with actual interest in it.
Think Go (the board game). The rules are much simpler than Chess, and the game itself arguably deeper (for instance, effective Go AIs appeared much later than effective Chess AIs).
> Yeah dude, stuff like this is EXACTLY what most people don't want to think about, and shouldn't have to think about
Selecting yourself out is fine. And apparently you're doing it for all the right reasons: too much investment, not worth your time.
One of my goals was to address the "how hard can it be?" wide-eyed would-be cryptographer. Well, this hard. More or less.
> I reject his premise as well that this guidance prevents good people from pursuing Crypto as a field of study - as far as I can tell it's not discouraging anyone with actual interest in it.
I confess I'm not quite sure about this one. I'll just note that we've seen people bullied out of some fields. (I recall stories of women being driven out of competitive gaming that way.) A constant stream of "don't roll your own crypto" is not exactly bullying, but I can see it being a tad discouraging. To give you an example, here's the kind of mockery I have to face, even now.
Arguably, that was because until recently our Chess and Go machines relied too heavily on extensive deep search. For much of the game Go has a branching factor at least an order of magnitude higher than Chess, and a Go game typically lasts many more moves than a Chess game, and the consequences of a bad move in Go can take a lot longer to become apparent than in Chess.
When DeepMind came along with an approach that was not as heavily reliant on extensive deep search, their machines didn't seem to have much more difficulty with Go than with Chess.
I think that this is extremely overrated. As long as you are using C (rather than some weird language), avoid branches with secrets, avoid indexing arrays with secrets, and avoid *, division, and mod with secrets it should be fine.
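For readers unfamiliar with the technique, branchless selection is the canonical building block for "avoid branches with secrets". A minimal sketch (illustrative, not taken from any particular library):

```c
#include <stdint.h>

/* Pick x or y depending on the secret bit b, with no branch:
 * the same instructions execute regardless of b's value.
 * mask is all-ones when b == 1, all-zeros when b == 0. */
uint32_t ct_select(uint32_t x, uint32_t y, uint32_t b)
{
    uint32_t mask = (uint32_t)0 - (b & 1);  /* 0xFFFFFFFF or 0x00000000 */
    return (x & mask) | (y & ~mask);
}
```

An `if (b) return x; else return y;` would be functionally identical, but its execution time and branch-predictor state would depend on the secret bit.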
Low quality posts like that, which encourage dunking on people rather than discussion, are what made me stop using Twitter. Extremely disgusting on his part; I am sorry that you have to deal with this sort of bullying. (I also did not find any signed shift, despite the claim of the person responding.)
The other side channels, however, I gave up on: only custom silicon can meaningfully squash the energy consumption side channel, for instance. Software approaches are, in my opinion, brittle mitigations at best.
About Twitter, I may have overplayed it: I don't use it, so I mostly don't see these things, which in reality are really infrequent. The worst I got was when I disclosed the signature vulnerability. It was like a dozen tweets, and only a couple were openly mocking (for the record, I only saw those tweets a year later). In any case, I don't give them much weight: writing this kind of drivel requires some degree of ignorance about my work.
The things mentioned here are what frustrate me with crypto implementations in pure Rust: the attempts at constant time operations aren't that solid, and everyone is going to war with the compiler simply to get basic functionality. Replacing pointer arithmetic with array indexing where it's needed stands out the most, but there are other issues.
Honestly, I think just using C bindings and calling it a day is the best way for anything going into production.
What situations do you run into where array indexing is not an acceptable substitute for pointer arithmetic?
The one borderline case I can cite is wiping memory. I go by sizeof() and access each byte.
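For illustration, a common shape for that kind of wipe (a sketch of the general technique, not necessarily Monocypher's actual code) goes through a volatile pointer, so the compiler cannot prove the stores are dead and delete them:

```c
#include <stddef.h>

/* Wipe a buffer byte by byte through a volatile pointer. A plain
 * memset() right before free() or return is famously optimised away
 * as a dead store; the volatile qualifier forbids that. */
void wipe(void *buf, size_t size)
{
    volatile unsigned char *p = buf;
    for (size_t i = 0; i < size; i++) {
        p[i] = 0;
    }
}
```

Typical use: `wipe(&secret_key, sizeof secret_key);`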
I'd even go a step further and say a lot of the problem is that people don't even know this is a thing to consider. Unknown unknowns are the dangerous bit. Does this article cover every aspect of the things you need to consider? I know enough about it to say both that I don't know, and that I highly doubt it.
To me, the biggest thing about Spectre et al. wasn't the proof that CPU uArch can negatively impact security. Rather, it's the proof of information leakage across processes and privilege modes, which is a drastically more serious vector than leaks within cryptographic code alone.
 https://ts.data61.csiro.au/projects/TS/cachebleed/
 On Subnormal Floating Point and Abnormal Timing https://cseweb.ucsd.edu/~dkohlbre/papers/subnormal.pdf
 Predicting Secret Keys via Branch Prediction https://eprint.iacr.org/2006/288.pdf
 Exploiting I-Cache https://eprint.iacr.org/2007/164.pdf
This is by definition not simple if the very tools you're using can cause what you think is correct to be wrong and dangerous!
GCM on the other hand took me a gazillion tries, and even though I ended up more or less copying a reference implementation I still got weird edge case errors.
US7949129B2 expired because he didn't pay the patent fees. US8321675B2 claims priority to a patent from 2001. Claiming priority is a way to broaden an existing patent by saying "it's this, but improved"; however, this also means inheriting the expiry date of the patent it claims priority to. So yeah, that one's expiring next year already. Holy cow, thanks for making me check.
Another, related patent that he notes in the FAQ is 8,107,620, which won't expire until 2029 unless IBM forgets to pay patent fees. Hard to tell if it really applies to OCB3 though.
So, 2021 might be the year!
Edit: I was incorrect about the Jutla patent. It seems to only apply to modes that are CBC- or ECB-like. GCM should be fine.
> <Proceeds to describe Cryptography in a way that confirms that it is exactly as complex as I set it out in my head>
This. Soooooo much this.
Of course someone can roll their own crypto, if they have a willingness to study and internalize the concepts, a commitment to doing it right, and the time to do things like "Make it bug free. Test, test, test. Prove what you can. Be extra-rigorous".
The whole point of that common advice is that the overwhelming majority of developers have none of those things and it would behoove them to lean on a library instead.
My other hope is that it would help newcomers focus their learning. Had I read this article 4 years ago, it would have taken me less time to design a reliable enough test suite.
Small tangent, I feel the same way about car servicing: I watch YouTube videos on how to do it, then I pay a mechanic to do it.
Besides not needing to invest time in learning a non-core competence -- most devs need to deliver a user experience and features -- not rolling your own crypto also transfers risk. I doubt you would like to hear your bank is using a custom IV because they thought it would be cool. :)
In a follow-up, it would be cool if you wrote about SRP authentication.
First time I hear of this. Looks interesting, but as PAKEs go I know B-SPEKE, AuCPACE, and OPAQUE better. I'm trying to determine which one I want right now, possibly even design my own, but I found those protocols are significantly harder to get right than authenticated key exchange. I also don't know them well enough to competently write about them just yet.
If you are going to try to create a new crypto library, release it to the world and let them beat the crap out of it, until it is battle hardened then have at it.
I think that is the major point of saying don't roll your own: someone's weekend hack-together for logins is going to get broken if someone really wants to get in. But if we did not have people trying out new ideas in cryptography, we would never have seen Blowfish or ECDSA.
I think people should absolutely try writing a crypto library, I don't think they should use it to try to secure anything of importance.
I remember back in the BBS days people would actually play a crypto war game: they would write a library, encrypt a message, put it up on the BBS, and get people to try to hack it. The message usually contained contact info, with a request to please contact the author and explain how they broke it, so the author could learn from their mistakes.
The hard part is all the invisible things like the side channels. These are the pitfalls that you will fall into because you must know about them to avoid them.
What this article talks about somewhat translates to the larger infosec community.
A lot of the presumptions I had about working in infosec were false:
- You need to be good at and understand software exploitation well
- You need to be a good programmer
- You need to know how to code (I do fwiw)
- Your soft-skills should be great (not more than any regular office job)
- You should know offensive techniques well, including breaking crypto
- You need to go to cons and do heavy infosec social networking
- You need to be good at math
- You need to master every single IT discipline
- How can you work in infosec if you never hacked a gibson? (joke)
And many more.
I can tell you, this type of elitist gate-keeping is why infosec always complains about a "skills shortage". I am nowhere near exhausting my mental/skill capacity with my day-to-day work, and I do fairly well. I meet people all the time who have never coded before and never heard of an elliptic curve, yet do well and impress me in very technical infosec disciplines.
My suggestion to anyone considering infosec: if you have a strong interest in the subject and you enjoy its very technical aspects (even if you don't understand some things well), I say go for it regardless of what you lack, so long as you don't lack the motivation and free time to pursue your studies. There are plenty of jobs that need people with passion in infosec, and you need to be an elite hacker about as much as a sports team needs every player to be an egotistical superstar.
"Trithemius' most famous work, Steganographia (written c. 1499; published Frankfurt, 1606), was placed on the Index Librorum Prohibitorum in 1609 and removed in 1900. This book is in three volumes, and appears to be about magic—specifically, about using spirits to communicate over long distances. However, since the publication of a decryption key to the first two volumes in 1606, they have been known to be actually concerned with cryptography and steganography. Until recently, the third volume was widely still believed to be solely about magic, but the "magical" formulae have now been shown to be covertexts for yet more cryptographic content."
The issue is not so much that there might be a flaw in their implementation -- indeed you should use reviewed libraries instead of rolling your own to avoid flaws -- but this is not specific to crypto; it's just as true for IO libraries or memory management libraries as it is for crypto.
But what makes crypto unique is that cryptographic algorithms have very strict, well-defined, limited behaviors, and there is generally a big gap between what people want to accomplish ("don't let an attacker see this file") and what crypto will actually do for them ("encrypt the file"), so very often the use of crypto doesn't end up creating much value.
Here, people do think cryptography is some kind of dark magic, where they can "secure" something just by encrypting it, and it's incredibly frustrating to have to implement cargo cult crypto that doesn't add much in the way of real security just because a PO views encryption as an end in itself -- e.g. as a feature.
The only thing from that article that I agree with is that gate keeping is bad. Crypto is so hard that we need more cryptographers to help us, not less.
For my part, I think I can evaluate people who know less than I do. Those who are more competent than I am however, I could not tell by how much. I'd have to rely on reputation or past achievements.
I think there is a conflation here of "cryptographer" in the sense of "person who studies, and maybe invents cryptographic algorithms" v/s "someone who needs a broad understanding of cryptographic algorithms in their day-to-day software work, studies them in some depth, but likely doesn't invent new ones".
The former is the kind you mean, but likely most people (including the OP, I think) are referring to the latter when they say "cryptographers". An unscientific test of this can be done by searching jobs.lever.co for "cryptographer" v/s "cryptography" – the former yields one position, whereas the latter yields ten pages.
And maybe some of this is just due to the term being imprecise. For example, would you term this position as being a "cryptographer"? https://jobs.lever.co/protocol/9afbc1c9-8b3b-4c03-856d-6b0cb.... It's certainly full time.
I don't know. Maybe cryptocurrency startups change the equation. I know that prior to crypto-mania, good jobs doing pure cryptography engineering were not especially easy to come by.
We'd also have more working Elligator2 reverse map implementations. If DJB et al. themselves didn't provide a working reference implementation over Curve25519, that suggests a serious labour shortage, at least locally.
And it's a gross mess. It's actually called NaCl.
TLDR: Libsodium is over 10 times bigger than Monocypher, it needs an autotools build system, and most of all, it doesn't support embedded targets. Monocypher is closer to TweetNaCl, except it's much more complete, and an order of magnitude faster.
Also recall that LibHydrogen was released not long after Monocypher. It appears Frank Denis felt the same kind of void I did. If Monocypher were as unneeded as you claim, so would be LibHydrogen (which, by the way, is slower across the board than Monocypher).
As I said above, Libsodium's support of embedded targets is limited, to the point it wasn't even considered in some benchmarks, while Monocypher was. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8725488 (Note that since that benchmark, I have reduced the stack usage of signature verification by half, at no noticeable loss of performance.)
Monocypher's portability comes from 3 factors beyond sticking to standard C99: the size of its binary (least important), the size of the stack (more important), and the lack of dependencies (most important). Monocypher uses no system calls and does not depend on libc. That makes it usable on targets without a kernel.
The second difference is simplicity. When I wrote about Libsodium being 10 times bigger, I didn't refer to the size of the binary. I was referring to the size of the source code. And the number of exposed symbols, to a lesser extent. Such a massive difference in degree is close to a difference in kind. Audits take 10 times more time and money (where Monocypher cost $7K and 7 days, Libsodium would cost $70K and 14 weeks). Building, porting, and deploying takes 10 times more effort, and sometimes is not possible at all, when the environment is constrained enough.
Simplicity matters, even on server farms. DJB wrote TweetNaCl for this very purpose, if I recall correctly. He wasn't even targeting embedded devices.
One important thing to clarify: small-footprint embedded cryptography and server farm cryptography can be the same thing in the IoT world: that connected object has to connect somewhere, generally to the vendor's servers. While embedded devices are often very constrained, the servers can be a bottleneck (especially if there's no subscription to pay for the servers). You might want to optimise for the server side, to help scaling, even if it puts more strain on the embedded side.
That, and there's an advantage to using the most popular primitives: less room for new cryptanalysis, more compatibility. That's how you get Ed25519 signature verification even on tiny 16-bit embedded processors. You'd like to use an extension field to make a faster, more lightweight curve, but the literature on prime fields (and therefore Edwards25519) is more stable, making Ed25519 the safer choice.
I also acknowledge that Monocypher is not ideal for tiny embedded devices: Blake2b instead of Blake2s, 64-bit multiplication in Poly1305 and Curve25519… 64-bit multiplication is particularly problematic on 8/16-bit processors: it ends up inflating the binary size to prohibitive proportions. For those tiny targets, C25519 is a much better fit. I have redirected a user to it once, though they ended up using a slightly bigger processor, and kept Monocypher because of its speed.
On small 32-bit processors however, Monocypher is king. As far as I know, only custom assembly beats it.
Breaking news: TweetNaCl has not been fully verified. Two instances of undefined behaviour (negative left shifts), at lines 281 and 685, remain to this date. They're easily found with UBSan (yay for verification!), but for some reason DJB has yet to correct them.
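(For context: in C, left-shifting a negative value is undefined behaviour. A hedged sketch of the usual fix, illustrative rather than TweetNaCl's actual code:)

```c
#include <stdint.h>

/* UB: if c is negative, (c << 16) is undefined behaviour in C99/C11.
 * Two well-defined alternatives: multiply by the power of two, or do
 * the shift in unsigned arithmetic and convert back. */
int64_t shl16(int64_t c)
{
    return c * 65536;                      /* defined whenever the result fits */
    /* or: return (int64_t)((uint64_t)c << 16); */
}
```

On any sane compiler both forms produce the same single shift instruction, so the fix costs nothing.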
The original paper verified 2 specific memory safety properties (no out-of-bounds accesses, no uninitialised memory accesses). Monocypher's test suite has done the same (and more) on a systematic basis since before version 1.0. I use Valgrind, all the sanitizers, and the TIS interpreter. The test suite covers all code & data paths, in large part thanks to the code being constant time.
So not only has Monocypher been built with verifiability in mind, it has been pretty thoroughly verified. You would know that if the time you took to discredit Monocypher had been spent looking at it instead. It's all there in tests/test.sh, referenced in the README.
About that vulnerability 2 years ago: as shocking as it may be, I learned from it. The looming threat of something similar happening again tends to do that. I've paid my dues since, and learned a ton. The audit gives no cause to fear another error of that kind. The test suite was deemed adequate, and they found no bug, however minor.
That old bug is irrelevant now. Give me a break.
One of the nice things about the crypto designed by djb is the effort to make it easy to implement safely. For example, as mentioned, Chacha20 is designed to avoid timing attacks. Curve25519 is designed so every 32 byte key is a valid public key.
Just like programming languages are shifting from the C-like view that it is solely the programmer's responsibility to avoid screwing up, to languages like Rust which emphasize safety and make it harder to have an inadvertent memory safety issue, so our crypto algorithms should ideally be designed so that a competent general software engineer can implement them without screwing up.
That's a dangerous statement on its own. Making proper use of primitives is not at all a simple matter. Developers can absolutely undermine their systems with poor choices and mistakes.
I wrote a blog post on a very high-level screw-up with type conversions, to show just the very surface of how you can screw up while using solid crypto primitives. Time allowing, I want to do more entries on topics within the crypto realm itself: IV reuse, etc.
I'm not sure how best to say it.
Implementing primitives & protocols requires little cryptographic knowledge. It does however require significant knowledge about program correctness: testing methods, proofs, and if side channels are important, the characteristics of your platform, and an accurate enough idea how your compiler or interpreter works.
Likewise, to implement an energy-constant implementation of ChaCha20 in silicon, you don't need a cryptographer, you need a hardware designer. The only thing you need a cryptographer for is telling the hardware designer to make it constant energy — or convincing the higher-ups why the extra cost is justified.
The blog post you link (which I love by the way) seems to confirm my view: many problems are ones of correctness. I believe most such bugs would be caught by corrupting the inputs, as I alluded to. Here, corrupting the password would fail to abort, and you'd catch the bug.
Using an example of IV reuse in AES-GCM:
The weaknesses resulting from this wouldn't be discoverable with a test like corrupting the password from the first example. If the developer wasn't aware that IV reuse introduced that weakness then they would be using strong primitives but in a way that dramatically undermines the actual encryption.
Not to put words in your mouth, but I assume your answer would be to say that this would be a matter of correctness. If yes, then where I'm coming from is that the majority of devs don't have the skillset to be correct and sometimes wouldn't dive deep enough to discover these kinds of pitfalls.
Yes, that one wouldn't be caught by corrupting inputs. You need to make sure you don't reuse the IV in the first place. And that's indeed a cryptography related bug.
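To make the stakes concrete, here is a toy demonstration (with a made-up stand-in keystream) of why nonce reuse in a stream-cipher mode such as GCM's CTR core is fatal: XORing the two ciphertexts cancels the keystream entirely, leaking the XOR of the plaintexts without the attacker ever touching the key.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy model of stream encryption: ciphertext = plaintext XOR keystream.
 * Returns 1 if XORing two ciphertexts made with the SAME keystream
 * reveals the XOR of the plaintexts (it always does). */
int keystream_cancels(const uint8_t *p1, const uint8_t *p2,
                      const uint8_t *ks, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint8_t c1 = p1[i] ^ ks[i];  /* message 1, nonce N */
        uint8_t c2 = p2[i] ^ ks[i];  /* message 2, same nonce N: same keystream */
        if ((c1 ^ c2) != (p1[i] ^ p2[i]))
            return 0;
    }
    return 1;  /* c1 ^ c2 == p1 ^ p2 everywhere: the keystream is gone */
}
```

From p1 ^ p2, classic crib-dragging recovers both messages; and in GCM specifically, nonce reuse additionally leaks the authentication subkey.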
> I assume your answer would be to say this would be a matter of correctness
It would be.
> where I'm coming from is that the majority of devs don't have the skillset to be correct
Unfortunately, I can believe that. Correctness is hard. Or expensive. Let's try with this example.
If you're designing an AEAD yourself, you can notice the error by trying (and failing) to prove IND-CCA2. If you're implementing or using an AEAD, code review should catch the problem… unless it's a bug like the one you've shown before. A tough one to spot.
One way to avoid the IV bug with a reasonable degree of certainty would be to use a counter or a ratchet. Don't send the IV over the network, let it be implicit. Then write two implementations in two very different languages. It is very unlikely that both happen to repeat the same hard coded IV.
If we still want to use random IVs, we probably need to mitigate replay attacks: have the receiver store the last few IVs of the messages it received this session, and have it compare any new IVs with this set. Won't stop all replay attacks (the attacker could wait until old IVs are forgotten), but it will at least catch the accidental reuse.
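The implicit counter idea can be sketched like this, assuming a 12-byte little-endian nonce (sizes and byte order vary by cipher and protocol):

```c
#include <stddef.h>
#include <stdint.h>

/* Treat the 12-byte nonce as a little-endian counter and bump it after
 * each message. It is never sent on the wire: both sides keep their own
 * copy in lockstep, so accidental reuse requires a counter bug rather
 * than a bad RNG or a hard-coded value. */
void nonce_increment(uint8_t nonce[12])
{
    for (size_t i = 0; i < 12; i++) {
        if (++nonce[i] != 0)
            break;  /* stop propagating once there is no carry */
    }
}
```

With 96 bits of counter, wraparound is not a practical concern; the remaining risk is state loss (e.g. reusing a counter after a crash), which needs its own handling.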
Wouldn't a more strongly-typed language have prevented that bug at compile time?
The audience is devs without any real experience or knowledge of crypto who might go into it too casually.
You know, this starts to sound like anti-vaxxers
>and started to teach myself cryptography 4 years ago
Yeaaaaah, I definitely see the parallels with anti-vaxxers.
Look, it's not about not-studying stuff. It's that you will fail and you will have side-channel attacks. You do not have the resources or the skills that multiple hundreds or thousands of people have in a) developing crypto and b) breaking it.
I've built my "novel" crypto scheme (any hash algorithm can be used to create a symmetric cipher), and I like it! But I wouldn't think about using it on an actual product where there's really something at stake.
You were talking about failure, but this doesn't look like failure to me: https://cure53.de/pentest-report_monocypher.pdf
You were talking about side channels, but I have the feeling you didn't even read the part of the essay that addresses this exact point.
Do you support gate-keeping?
> Yeaaaaah, I definitely see the parallers with anti-vaxxers.
They did not educate themselves via Facebook posts or YouTube videos.
Regardless, are you saying that self-teaching is something done only by charlatans? The only difference between being self-taught and being formally educated is that you do not have a mentor and you do not get a degree at the end. You are going to use the same resources (books, handouts, presentations, etc.) to learn from.
> It's that you will fail and you will have side-channel attacks
In addition to that loup intentionally implemented only DJB's algorithms (or algorithms based on DJB's primitives).
> You do not have the resources or the skills that multiple hundreds or thousands of people have
Yet their record is much better than that of other similar libraries that have the support of "multiple hundreds or thousands of people". *looks at openssl*
> a) developing crypto and b) breaking it.
> I've built my "novel" crypto scheme (any hash algorithm can be used to create a symmetric cipher), and I like it!
Loup implemented DJB's algorithms, not their own.
But if you never attempt to roll your own, how will you learn to make a basic version? And then, how will you learn to make the next great encryption algorithm or program?
"The slightest error may throw cryptographic guarantees out the window, so we cannot tolerate errors. Your code must be bug free, period. It's not easy, but it is simple: it's all about tests and proofs."
(Please, I don't intend this as a flamebait, only asking on what is this belief founded?)
(1) Protocols are very sensitive to errors, possibly more than primitives. If you screw up the internals of a primitive, the results will be different (and visibly so), but it stands a good chance of still being secure. Protocols, however, tend to be very tightly designed. Modifications that have a working happy path are more likely to have significant holes: a missing check, failure to authenticate part of the transcript, loss of forward secrecy… No real justification there; it just has been my experience dealing with modern primitives and protocols.
(2) Correctness subsumes security. A program is correct if it fulfils its requirements. A program is secure if it fulfils its security requirements, which by definition are a subset of all requirements. That said, while immunity to relevant side channels is definitely part of the security requirements, it helps to separate it from the correctness of end results.
(3) Bug free code is possible. It's not easy, but it can be done. Constant time implementations of modern primitives are among the easiest code to test, ever: since code paths only depend on the lengths of parameters, testing them all against a reference is trivial. That's not just "100% code coverage", it's 100% path coverage. As for the proofs, while they may not be easy to produce, the good ones are fairly easy to follow (though extremely tedious), and the great ones can be checked by a machine, making them trivial to verify.
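To illustrate that point with something self-contained: here is a constant-time equality check exercised at every input length against memcmp. One run per length really is full path coverage, because no branch depends on the data (the function and harness are illustrative, not Monocypher's actual suite):

```c
#include <stdint.h>
#include <string.h>

/* Constant-time equality: the only branch depends on len, never the data. */
static int ct_eq(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t d = 0;
    for (size_t i = 0; i < len; i++) d |= a[i] ^ b[i];
    return d == 0;
}

/* Exercise every code path: one run per length suffices. Compare with
 * memcmp semantics on equal inputs, then corrupt one byte and check the
 * difference is detected. Returns 1 on full agreement. */
int test_all_lengths(size_t max_len)
{
    uint8_t a[256], b[256];
    if (max_len > sizeof a) return 0;
    for (size_t i = 0; i < max_len; i++) a[i] = b[i] = (uint8_t)(i * 7 + 3);

    for (size_t len = 0; len <= max_len; len++) {
        if (ct_eq(a, b, len) != (memcmp(a, b, len) == 0)) return 0;
        if (len > 0) {
            b[len - 1] ^= 1;               /* corrupt one byte */
            if (ct_eq(a, b, len) != 0) return 0;
            b[len - 1] ^= 1;               /* restore */
        }
    }
    return 1;
}
```

Contrast with branchy code, where the number of paths can explode combinatorially and "100% line coverage" says little.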
The main value from protocol level proofs is that since you need to tell the machine your assumptions before it spits out a proof, a careful proof development process can discover unstated assumptions.
The TLS Selfie attack is an example. In principle this attack could have been found during proof generation for TLS 1.3, but in practice the proofs generated during TLS 1.3 development smuggle in an unstated assumption that means Tamarin rules out Selfie even though in some cases it would be a viable attack.
[Selfie goes like this: Alice and Bob have a PSK for authentication, Mallory doesn't know the PSK, but Mallory can interfere with the network between Alice and Bob. Alice intends to ask Bob, "Do you have the car?". Mallory can't read this question or write an answer Alice will accept because they don't know the PSK. However, Mallory just redirects the question back to Alice. "Do you have the car?" and Alice doesn't have the car, so she answers "No" and she knows the PSK so her answer is proper. Now Alice gets an answer to her question, "No" and so she concludes Bob doesn't have the car. But actually Bob was never asked!]
Knowing about this, it can be repaired. Alice and Bob simply address the intended recipient in each message, "Bob, do you have the car?" "No Alice, I don't" - and check for their own name in messages they receive. Or they use a separate PSK for each direction, not just one per pair of participants in their system. Or they can choose only to either be a TLS client or a server and never both. But all these steps aren't obvious if there's nowhere stated the assumption that you did one of these three things. Intuitively it seems as though since Mallory doesn't know the PSK and the Tamarin prover says this protocol works you're fine.
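The first repair can be sketched as a framing check; the names and message format below are illustrative, not the actual TLS 1.3 wire format:

```c
#include <stdio.h>
#include <string.h>

/* Bind the direction into the data the PSK authenticates: every message
 * starts with "sender->receiver:". A message Mallory reflects back to
 * its author then fails the check, because its direction string reads
 * "alice->bob" when Alice expects "bob->alice". Returns 1 if the
 * message is addressed from peer to me, 0 otherwise. */
int accept_message(const char *msg, const char *me, const char *peer)
{
    char expected[64];
    snprintf(expected, sizeof expected, "%s->%s:", peer, me);
    return strncmp(msg, expected, strlen(expected)) == 0;
}
```

In a real protocol the direction tag would be covered by the MAC or the key derivation, not merely prefixed; the point is only that the context must be authenticated, not assumed.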
Having worked on protocols, I have found again and again that making the context explicit by cryptographically binding intent to the message is paramount.
I don't know if I agree with this. You can easily write an implementation of a primitive that even creates the correct output bytes while leaking secrets via every side channel, or via the one side channel you didn't realize existed.
I also think that "protocol" is too wide to be a useful category. TLS is a protocol, right? But what about HTTPS? Your site's API? There is always going to be cryptography at the bottom, but at some point you have to draw a line or "don't roll your own crypto" becomes "don't write your own software" because everything is in scope.
Or maybe you can't draw that line because the upper layer stuff still has implications. Think about the compression oracle attacks. The upper layer has secret data, compresses it and then shovels it through a "secure" protocol but has already leaked the secret contents through the content size difference due to the compression. But if that means everything is in scope, what then?
Can't do much about that one. Gotta have someone telling you, or (worst case) the very state of the art advancing under your feet.
> at some point you have to draw a line or "don't roll your own crypto" becomes "don't write your own software" because everything is in scope.
Yes, there is a point beyond which you don't have a choice. The natural (and utterly impractical) line to draw is untrusted input. Talking over the internet is the obvious one, but merely playing a video exposes you to untrusted data that might take over your program and wreak havoc.
Compression is a tough one. I'd personally try padding. Something like PADME should work well in many cases. https://en.wikipedia.org/wiki/PURB_(cryptography)
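For reference, here is my reading of the PADME rounding rule from the PURB paper (treat it as a sketch and check it against the paper before relying on it): with E = floor(log2 L), keep only the top floor(log2 E) + 1 bits of the length's precision and round up.

```c
#include <stdint.h>

/* floor(log2(x)) for x >= 1, i.e. the index of the highest set bit. */
static unsigned floor_log2(uint64_t x)
{
    unsigned n = 0;
    while (x >>= 1) n++;
    return n;
}

/* PADME padded length: zero out the low (E - S) bits of the length,
 * rounding up, where E = floor(log2 L) and S = floor(log2 E) + 1.
 * Overhead shrinks as lengths grow (at most ~12%, ~1.6% at 1 MB). */
uint64_t padme(uint64_t len)
{
    if (len < 2) return len;            /* nothing to round */
    unsigned e    = floor_log2(len);    /* bit length minus one */
    unsigned s    = floor_log2(e) + 1;  /* bits of precision to keep */
    unsigned last = e - s;              /* low bits forced to zero */
    uint64_t mask = ((uint64_t)1 << last) - 1;
    return (len + mask) & ~mask;        /* round up to the next bucket */
}
```

The attraction over fixed-size padding is that leakage stays logarithmically bounded without inflating small messages.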
But that's kind of the point. Primitives aren't easy either. Even one thought to be secure yesterday might not be today, which makes it easy to get wrong merely by relying on literature from a year ago rather than today.
> The natural (and utterly impractical) line to draw is untrusted input.
I think you're right about that, both as to where the line really is and as to how impractical that is if you don't want to just end up with essentially everything being in scope.
> Compression is a tough one. I'd personally try padding.
I'm not sure you can fix it strictly in the lower layers. The general attack works like this: The attacker can supply some data, the victim compresses that data and some secret data together and then sends the combination encrypted over a channel where the attacker can observe the size. If the attacker-supplied data matches the secret data, it can be compressed more so it gets smaller.
The attacker's job is really easy if supplying one more byte of data that doesn't match the secrets causes the observed length to increase by one byte. But fixed padding just requires the attacker to find the padding boundary: supply increasing amounts of pseudorandom (i.e. incompressible) data until the threshold for the next output size is reached, then swap in different bytes until some of them match the secret and the size falls below that threshold due to the compression. And random padding just requires more samples, to account statistically for the randomness. In theory you might be able to fix it by making all messages the same length, but then if you want to support large messages, all messages become large, which could be unreasonably inefficient. And the whole point of the compression was to do the opposite of this.
The real solution is not compressing attacker-supplied data together with secret data to begin with, but that means the upper layer has to know not to do that.
Yes. You gave the example of compression. One of the things you'll find inside HTTP/3 (not QUIC and not even TLS even though QUIC is underneath HTTP/3 and TLS provides the cryptography for QUIC) is an explicit design choice to compress each header separately because of BREACH.
> if that means everything is in scope, what then?
Training. Your programming teams need appropriate skills and training to cope with the implications for their environment. You likely already train employees about what to do if there's a fire, or if a would-be supplier offers them Superbowl tickets, or their boss asks them for a blowjob, or the new head of marketing wants to email the user database to an ad exec or plenty of other things.
Most likely as a result they also need a trustworthy expert they can escalate any hard questions to. That could be somebody in-house at a big organisation or it could be out-sourced especially at smaller firms. Any questions they ask can help shape future training.
The Magic comes more from things like games and computer graphics and deep learning.
"Don't roll your own crypto because its basically magic and you'll screw it up unless you're a cryptographer".