> Sure, it's complex and you have to get all the details right
What's really dangerous about cryptography (someone on HN pointed this out a few weeks ago) is that you get very little feedback on whether you got those details right. It's very easy to take reliable primitives and build a broken system. When I say "broken" I don't mean "theoretically vulnerable" - I mean "game over".
I get paid to look for these bugs, and they are legion.
> so for developers, I recommend a more modern approach to cryptography — which means studying the theory and designing systems which you can prove are secure.
The average application developer does not have the time or the need to learn the theory at a deep level. And as before, it's very difficult to say with confidence "this design has no bugs".
The best advice I can give developers is to play very conservatively. Our default recommendation is PGP for data at rest and TLS/SSL for data in transit. Or some high-level library like KeyCzar, NaCl, etc.
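To make that concrete, here's a minimal sketch of what I mean by a high-level library, using PyNaCl (the Python binding for NaCl; assumes `pip install pynacl`). The point is that key sizes, nonce handling, and authentication are all decided for you:

    # Authenticated secret-key encryption with PyNaCl (NaCl/libsodium).
    import nacl.secret
    import nacl.utils

    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)  # 32 random bytes
    box = nacl.secret.SecretBox(key)

    # encrypt() picks a fresh random nonce and appends a Poly1305 MAC,
    # so any tampering is detected on decryption.
    ciphertext = box.encrypt(b"attack at dawn")
    plaintext = box.decrypt(ciphertext)  # raises CryptoError if tampered with
    assert plaintext == b"attack at dawn"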
Are there any easy-to-implement tests for this sort of thing? For example
* For mathematical functions, I can trivially test for an expected value, a too low value, a too high value, and not a number input.
* For input/output sanitization, I can pump in crazy Unicode characters or system codes.
* For encryption, I can ... open up tcpdump and see that it's ostensibly garbage-looking?
It's pretty easy to eyeball your ciphertext and think it's sufficiently garbage-looking, but it's difficult to predict what an attacker can do with access to your running system. Seemingly innocuous flaws can lead to complete plaintext recovery or the ability to forge arbitrary ciphertext. (Here's a fun way to get a taste: http://www.matasano.com/articles/crypto-challenges/)
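There is one cheap automated check, in the spirit of those challenges: ECB-encrypted data repeats blocks wherever the plaintext repeats, and that's detectable even when the bytes "look like garbage". A toy sketch in Python:

    # "Looks random" passes the eyeball test; repeated 16-byte blocks don't.
    def has_repeated_blocks(ciphertext: bytes, block_size: int = 16) -> bool:
        blocks = [ciphertext[i:i + block_size]
                  for i in range(0, len(ciphertext), block_size)]
        return len(blocks) != len(set(blocks))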
@cperciva is right that cryptographic security is something that needs to be proven. Since I'm not smart enough to do that, I avoid designing my own crypto systems, and I recommend clients do the same. Lots of risk, little or no reward.
Last year I was lucky enough to engage Colin on some work I was doing. His interest was piqued by my (erroneous) claim about the security properties of a protocol I had designed.
Here is a great real-world example from last month. Synergy added "encryption support":
(scroll to the bottom)
"Stryker, we actually use the Crypto++ library and do not "code the encryption" as you put it. If you are not happy using Crypto++, then please disable encryption and use SSH tunnelling instead. The trend seems to be that most users do not know how or do not want to use SSH tunnelling, and would prefer for this to be built into Synergy itself.
Discussing this further is a waste of time. Patches welcome."
It also illustrates a really key point about crypto: because it looks simple (oh, just run the bytes through that function/hash/send them over SSL), people assume that it is simple and that they know enough to hack together a decently secure system.
At the very least, a healthy respect for crypto theory is called for. In my experience most developers do not have this healthy respect and see crypto as a magic black box that makes data unreadable.
I find attacks on cryptosystems illustrative for the "oh CRAP" moment. Oh CRAP, a salted hash alone is a terrible way to store passwords. Oh CRAP, you can pad a hash to make a remote system accept "signed" data. The more I learn and the older I get, the more cautious I am.
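For anyone who hasn't had the length-extension "oh CRAP" moment yet, here's the shape of it in Python. With a Merkle-Damgard hash like SHA-256, sha256(key + msg) can be extended by an attacker who never learns the key; HMAC is the standard construction that closes the hole:

    import hashlib, hmac

    key, msg = b"secret", b"user=alice"

    # Extendable: an attacker who knows len(key) can compute
    # sha256(key + msg + padding + suffix) without knowing the key.
    naive_tag = hashlib.sha256(key + msg).hexdigest()

    # Not extendable: use this instead.
    safe_tag = hmac.new(key, msg, hashlib.sha256).hexdigest()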
Crypto is complex enough that more people will land in the destructive camp; it is by far the simpler path to fame and riches.
But that does not mean that a 'breaker' can automatically move to 'builder' status; testimony to that is how few people we think are capable of constructing solid cryptographic systems. Colin is one of the people who has definitely achieved 'builder' status: he's not scared to publish his work and has made meaningful advances in the field.
If you feel like it, you too can be famous: analyse Tarsnap and see if you can find a bug. Colin will be happy to advance your stature, see:
Meanwhile, take Colin's advice if you ever want to advance from 'breaker' to 'builder'. Any idiot can break a window; it takes a lot of expertise to make a flat piece of glass and to set it properly, far more than breaking a window does.
Anybody can create a block cipher. Anybody can create a new encrypted transport. It's far easier to cough up a crappy new design than it is to break even a mediocre one.
But, more importantly, "breaking" a cryptosystem isn't "destructive". It's the most productive thing you can do to crypto. You can't prove a negative. All you can do is spot the designs we know we shouldn't be using.
"Any idiot can break a window". Sheesh. Have you ever coded a crypto attack? Would you like me to send you a model problem, Jacques?
Coppersmith always comes into my head because of that thing Schneier wrote about him being the world's cleverest cryptanalyst in _Applied Cryptography_. But you're obviously right.
I can't believe you'd write something this awful as a direct personal attack.
Like I'm anyone to talk: https://news.ycombinator.com/item?id=5894298
You simply love laying into me every chance you get, from questioning my 'credentials' (for the record, I did the NIBE course on the subject matter in that particular thread, but I really don't see what you were trying to achieve with that) to claiming, in a perfectly passive-aggressive move, that you 'like me' in the same breath as being challenged about your behaviour. And that's not the first time that has happened, either.
If you can't take it, don't give it, and if you don't like me or feel like bullying me then at least be clear about it.
I'm fine with you being annoying but precise about your subject matter, but your personal attacks are getting a bit much lately.
If you think there's some personal problem between us, you should know that I'm pretty easy to get ahold of; my contact information is in my profile. You might find that I'm a lot easier to argue with when I don't feel like I have to stick up for my whole profession in public.
(You definitely sent me some mail; you're not crazy.)
Actually, in cryptography, breaking is hard, too.
The science part of it will continue to provide methods of greater robustness and security, laden with increasingly better and broader security proofs and properties.
The engineering side of it will continue to seek and cling onto any slightest toehold afforded by mis-steps in design, implementation, protocol, all the way up to UI. Much of this is not, at least pre-emptively, subject to mathematical analysis.
This seems to have been a recurring theme for as long as cryptography has been around, rather than something specific to '90s vs. '10s crypto.
To me the discussion highlights one interesting point: the provability of programs. In particular, Colin says: "I recommend a more modern approach to cryptography — which means studying the theory and designing systems which you can prove are secure."
Now, I remember back in the days of structured programming, when we were all reading Dijkstra and thinking about proving programs. We concluded that it was pretty expensive to do so. In all my decades of programming, I haven't made a serious attempt to do it. The best I have done is to (sometimes) write programs that are hopefully provable.
The possibility of writing provable programs is, to me, confounded by the work done by John Regehr, who seems to be the current champion of fuzzing compilers. His techniques find legions of bugs in compilers, including 17 errors in one research compiler that was _proved correct_.
Nonetheless, my question for Colin is how many programmers do you feel are capable of proving programs correct, be they compilers or cryptography libraries? For those of you who haven't given it a shot, take a look at some of the links on the side of Colin's blog post, the "Software Development Final Exam". Not very many of us did very well with that. (My feeble excuse is that there weren't any computer science departments when I went to Engineering School.)
I guess that Colin would agree that anyone who does not do well on those exams might not do well in writing and proving programs correct.
And on the other hand, Thomas explodes stuff that is written by a range of programmers covering the whole spectrum of talent. He sees, on a day-by-day basis, the results when the work of confident, talented programmers is subjected to excruciating punishment.
I do enjoy the discussion between these two.
Unfortunately, I don't know many programmers who would take kindly to rubber hose pentesting.
> ...it is like planning a gravity-assisted interplanetary trajectory. Sure, it's complex and you have to get all the details right — but once you start moving, the only way you will fail to reach your destination is if the laws of physics (or mathematics) change.
Other long-range space probes come under the influence of the many small objects in the solar system on their cold voyages, leading to course corrections during the mission. Estimating the position of these spacecraft or asteroids leads to an uncertainty ellipsoid in space, representing N% probability that the object is inside it.
So, yes, astrodynamics does work really well over short distances, but just like other engineering arenas, there is no Exact Answer.
As for the rest of the article, I'm not qualified to agree/disagree. The engineer in me just wants to use secure tools to make secure services and protect private data.
I think crypto, being a digital phenomenon, is way more on the science side than orbital determination.
Now, crypto on satellites poses a unique challenge, due to the interaction of solar radiation with electronics.
I think people who follow both me and Colin on HN know that I have a lot of respect both for him and for Tarsnap, the service he runs, which is the only encrypted backup service I have ever recommended to anyone and which is to this day my go-to recommendation for people looking to safely store data in the cloud. Colin has built one of the very few modern cryptosystems I actually trust.
First, let me dodge Colin's whole post. My Twitter post was:
If you’re not learning crypto by coding attacks, you might not actually be learning crypto.
(I was cheerleading people doing our crypto challenges [http://www.matasano.com/articles/crypto-challenges/] and didn't think much of my tweet; I didn't exactly try to nail it to the door of the All Saints Church).
Note the word "might". I specifically chose the word "might" thinking "COLIN PERCIVAL MIGHT READ THIS". Colin, "might" means "unless you're Colin".
Anyways: I think the point Colin is making is valid, but is much more subtle than he thinks it is.
Here's what's challenging about understanding Colin's point: in the real world, there are two different kinds of practical cryptography: cryptographic design and software design. Colin happens to work on both levels. But most people work on one or the other.
In the world of cryptographic design, Colin's point about attacks being irrelevant to understanding modern crypto is clearly valid. Modern cryptosystems were designed not just to account for prior attacks but, as much as possible, to moot them entirely. A modern 2010's-era cryptosystem might for instance be designed to minimize dependencies on randomness, to assume the whole system is encrypt-then-MAC integrity checked, to provide forward secrecy, to avoid leaking innocuous-seeming details like lengths, &c.
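For readers who haven't seen "encrypt-then-MAC" spelled out, here's a rough sketch of the shape (using Python's hmac plus the `cryptography` package; my choices for illustration, not anything anyone here endorses): MAC everything you send, and verify the MAC before the ciphertext ever touches the decryptor.

    import hmac, hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
        ct = enc.update(plaintext) + enc.finalize()
        tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
        return nonce + ct + tag

    def check_then_decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
        nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
        want = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, want):
            raise ValueError("MAC check failed")  # reject before decrypting
        dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
        return dec.update(ct) + dec.finalize()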
While I think it's helpful to understand the attacks on 1990's-era crypto so you can grok what motivates the features of a 2010's-era ("Crypto 3.0") system, Colin is right to point out that no well-designed modern system is going to be vulnerable to a (say) padding oracle, or an RSA padding attack (modern cryptosystems avoid RSA anyways), or a hash length extension.
In this sense, learning how to implement a padding oracle attack (which depends both on a side channel leak of error information and on the failure to appropriately authenticate ciphertext, which would never happen in a competent modern design) is a little like learning how to fix a stuck carburetor with a pencil shaft.
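If the carburetor analogy feels abstract, here's roughly what the attack does, as a toy: the "server" leaks only a valid/invalid padding bit, and that alone recovers plaintext. A sketch (using the `cryptography` package) that recovers just the last byte of a block; the real attack iterates this for every byte:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives import padding

    KEY = os.urandom(16)

    def oracle(iv: bytes, ct: bytes) -> bool:
        """The vulnerable server: reveals only whether the padding was valid."""
        dec = Cipher(algorithms.AES(KEY), modes.CBC(iv)).decryptor()
        pt = dec.update(ct) + dec.finalize()
        unpadder = padding.PKCS7(128).unpadder()
        try:
            unpadder.update(pt) + unpadder.finalize()
            return True
        except ValueError:
            return False

    def recover_last_byte(prev_block: bytes, target_block: bytes) -> int:
        for guess in range(256):
            forged = bytearray(prev_block)
            forged[-1] ^= guess ^ 0x01   # right guess -> plaintext ends in \x01
            if oracle(bytes(forged), target_block):
                # (A full attack also rules out rare false positives, e.g.
                # \x02\x02 padding, by perturbing the second-to-last byte.)
                return guess
        raise RuntimeError("oracle never accepted; not a padding oracle?")

    # Demo: recover the last byte of a one-block secret.
    iv, secret = os.urandom(16), b"YELLOW SUBMARINE"
    padder = padding.PKCS7(128).padder()
    enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
    ct = enc.update(padder.update(secret) + padder.finalize()) + enc.finalize()
    assert recover_last_byte(iv, ct[:16]) == secret[-1]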
The deceptive subtlety of Colin's point comes when you see how cryptography is implemented in the real world. In reality, very few people have Colin's qualifications. I don't simply mean that they're unlike Colin in not being able to design their own crypto constructions (although they can't, and Colin can). I mean that they don't have access to the modern algorithms and constructions Colin is working with; in fact, they don't even have intellectual access to those things.
Instead, modern cryptographic software developers work from a grab-bag of '80s-'90s-era primitives. A new cryptosystem implemented in 2013 is, sorry to say, more likely to use ECB mode AES than it is to use an authenticated encryption construction. Most new crypto software doesn't even attempt to authenticate ciphertext; cryptographic software developers share a pervasive misapprehension that encryption provides a form of authentication (because tampering with the ciphertext irretrievably garbles the output).
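That misapprehension is worth demolishing with ten lines of code, so here's a hedged demo (again with the `cryptography` package): in CTR mode, tampering doesn't garble anything; flipping a ciphertext bit flips exactly the corresponding plaintext bit, no key required.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(16), os.urandom(16)

    def aes_ctr(data: bytes) -> bytes:  # CTR encrypt and decrypt are the same op
        c = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return c.update(data) + c.finalize()

    ct = bytearray(aes_ctr(b"pay alice $100"))
    ct[11] ^= ord("1") ^ ord("9")       # surgically edit the ciphertext
    assert aes_ctr(bytes(ct)) == b"pay alice $900"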
I think it's telling that Colin breaks this out into '90s-crypto and 2010's-crypto. For instance:
This is an AES CTR nonce reuse bug in Colin's software from 2011. Colin knew about this class of bug long before he wrote Tarsnap, but, like all bugs, it took time for him to become aware of it. Perhaps he'd have learned about it sooner had more people learned how cryptography actually works, by coding attacks, rather than reading books and coding crypto tools; after all, Colin circulates the code to Tarsnap so people can find exactly these kinds of bugs. Unfortunately, the population of people who can spot bugs like this in 2010's-era crypto code is very limited, because, again, people don't learn how to implement attacks.
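For anyone wondering why nonce reuse in CTR is in the "game over" class rather than the "theoretical" one: two messages encrypted under the same key and nonce XOR to the XOR of their plaintexts, so knowing (or guessing) either plaintext reveals the other. A self-contained sketch with a stand-in keystream (real AES-CTR behaves identically, since ct = pt XOR keystream(key, nonce)):

    import hashlib

    def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
        out, counter = b"", 0
        while len(out) < n:   # stand-in keystream; imagine AES-CTR here
            out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    key, nonce = b"k" * 16, b"n" * 16      # the bug: the nonce never varies
    p1, p2 = b"attack at dawn", b"retreat now!!!"
    c1 = xor(p1, keystream(key, nonce, len(p1)))
    c2 = xor(p2, keystream(key, nonce, len(p2)))

    # The attacker sees only c1 and c2, and knows (or guesses) p1:
    assert xor(xor(c1, c2), p1) == p2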
But I'll push my argument further, on two fronts.
First: Colin should account for the fact that there's a significant set of practical attacks that his approach to cryptography doesn't address: side channels. All the proofs in the world don't help you if the branch target buffer on the CPU you share with 10 other anonymous EC2 users is effectively recording traces of your key information.
Second: Colin should account for the new frontiers in implementation attacks. It's easy for Colin to rely on the resilience of "modern" 2010's-era crypto when all he has to consider is AES-CTR, a random number generator, and SHA3. But what about signature systems and public key? Is Colin so sure that the proofs he has available to him account for all the mistakes he could make with elliptic curve? Because 10 years from now, that's what everyone's going to be using to key AES.
So, I disagree with Colin. I think it's easy for him to suggest that attacks aren't worth knowing because (a) he happens to know them all already and (b) he happens to be close enough to the literature to know which constructions have the best theoretical safety margin and (c) he has the luxury of building his own systems from scratch that deliberately minimize his exposure to new crypto attacks, which isn't true of (for instance) anyone using ECC.
But more importantly, I think most people who "learn crypto" aren't Colin. To them, "learning crypto" means understanding what the acronyms mean well enough to get a Java application working that produces ciphertext that looks random and decrypts to plaintext that they can read. Those people, the people designing systems based on what they read in _Applied Cryptography_, badly need to understand crypto attacks before they put code based on their own crypto decisions into production.
As I wrote in my blog post, I have a lot of respect for Thomas. He's who I usually point people at when they want their code audited. I really hate reading other people's code and I trust Thomas (well, Matasano) will do a good job.
> two different kinds of practical cryptography: cryptographic design and software design
> Colin happens to work on both levels. But most people work on one or the other.
I'm generally writing for an audience of people who already know how to write software, but want to know something about crypto. So I take one as given and focus on the other.
> modern cryptographic software developers work from a grab-bag of '80s-'90s-era primitives
Right, and that's exactly what I'm trying to change through blog posts and conference talks. We know how to do crypto properly now!
> This is an AES CTR nonce reuse bug in Colin's software from 2011. Colin knew about this class of bug long before he wrote Tarsnap, but, like all bugs, it took time for him to become aware of it.
To be fair, that was not a crypto bug in the sense of "got the crypto wrong" -- you can see that in earlier versions of the code I had it right. It was a dumb software bug introduced by refactoring, with catastrophic consequences -- but not inherently different from accidentally zeroing a password buffer before being finished with it, or failing to check for errors when reading entropy from /dev/random. Any software developer could have compared the two relevant versions of the Tarsnap code and said "hey, this refactoring changed behaviour", and any software developer could have looked at the vulnerable version and said "hey, this variable doesn't vary", without needing to know anything about cryptography -- and certainly without knowing how to implement attacks.
> Unfortunately, the population of people who can spot bugs like this in 2010's-era crypto code is very limited, because, again, people don't learn how to implement attacks.
Taking my personal bug out of the picture and talking about nonce-reuse bugs generally: You still don't need to learn how to implement attacks to catch them. What you need is to know the theory -- CTR mode provides privacy assuming a strong block cipher is used and nonces are unique -- and then verify that the preconditions are satisfied.
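One way to "verify the precondition" structurally rather than by inspection (a sketch of my own, not Tarsnap's design): give the encryptor sole ownership of the nonce counter, so a refactoring can't quietly stop it from varying.

    import os, threading

    class NonceSource:
        """Yields unique 16-byte nonces: 8 random prefix bytes + 64-bit counter."""
        def __init__(self):
            self._prefix = os.urandom(8)
            self._counter = 0
            self._lock = threading.Lock()

        def next_nonce(self) -> bytes:
            with self._lock:
                n = self._counter
                if n >= 2**64:
                    raise RuntimeError("nonce space exhausted; rotate the key")
                self._counter += 1
            return self._prefix + n.to_bytes(8, "big")

    # Callers never construct or pass nonces; they can only ask for the next one.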
Isn't that the entire basis of 'tptacek's argument, though? That even you, as an expert in both software development and cryptography, accidentally got something wrong? An engineering fault occurred, to an expert practitioner. This seems to suggest this sort of thing is not just a function of pure science.
EDIT: On a more serious note, isn't crypto both science and engineering? We have the theoretical aspects, etc... Then we have the practical aspects of implementing these systems in production within an ecosystem that is constantly fighting entropy. I declare a draw.
Implementing attacks is a good way to internalize the idea that "Oh shit, this isn't just a theoretical attack, I better be super careful when doing X, Y, and Z."
I think Fravia said something similar. He was talking about copy-protection dongles. He respected the cryptography provided by some of the hardware manufacturers, but was dismissive of the way software vendors implemented that crypto in a broken way.
> But more importantly, I think most people who "learn crypto" aren't Colin. To them, "learning crypto" means understanding what the acronyms mean well enough to get a Java application working that produces ciphertext that looks random and decrypts to plaintext that they can read. Those people, the people designing systems based on what they read in _Applied Cryptography_, badly need to understand crypto attacks before they put code based on their own crypto decisions into production.
Oh god yes.
These people need to understand that when someone says "This is broken for a whole slew of reasons. No, I'm not going to code a proof of concept crack." it probably means that the crypto is very broken, and should not be pushed out to production, and certainly should not be promoted as safe and unbreakable and suitable for use by political dissidents in oppressive regimes.
It doesn't mean "We know we can break it faster than we can brute force it, even if there's no practical attack yet".
Well, it is not practical, but you can prove that an algorithm has no side channels; e.g., you can use the construction of Pippenger and Fischer to create an oblivious version of any algorithm. To put it another way, if you could not prove that there were no side channels in an algorithm, you could never prove the security of something like FHE. Even if we assumed a perfect world where implementations were never wrong, practical concerns would still be a drag on the value of security proofs. We do not use AES because we can prove it is secure; we use it because it is fast and "secure enough."
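The everyday version of "oblivious" code, for anyone who hasn't seen it: no branches that depend on secret data. E.g., the classic constant-time comparison (in practice you'd call hmac.compare_digest rather than rolling your own):

    def constant_time_eq(a: bytes, b: bytes) -> bool:
        """Touches every byte; == returns at the first mismatch and leaks timing."""
        if len(a) != len(b):
            return False
        diff = 0
        for x, y in zip(a, b):
            diff |= x ^ y   # accumulate differences without branching on secrets
        return diff == 0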
"Those people, the people designing systems based on what they read in _Applied Cryptography_, badly need to understand crypto attacks before they put code based on their own crypto decisions into production."
I am not sure understanding the particular attacks we know so far is really important here. More than anything, I think people need to understand that attacks in general occur where abstractions fail. The closer you stick to the abstraction assumed in a security proof, the more secure your system will be (ignoring implementation bugs). If ever there was a place where premature optimization is a bad idea, it is in the implementation of cryptosystems.
The second point though, I think the opposite is true. You need to understand in your gut that parameters to number-theoretic crypto can be proposed specifically to make your math fail; you need to understand that even if flipping a single bit in your ciphertext garbles the output, that attackers can do useful things with that property; you need to understand that being able to coerce a system into producing the same ciphertext block for the same plaintext block admits terrible attacks; you need to understand where systems "want" randomness versus where they absolutely require it.
It's not enough to know that errors "happen". You have to be able to predict them.
I disagree. I think there is so much code, including so much crypto code, out there in open source projects that you either have to wait for someone explicitly reviewing the code, or for someone particularly interested in the project, to find a bug like this.
Otherwise, on your other points, I don't disagree with your argument, but I think it's more important, in that order, to 1- know the mathematical aspects behind what you implement, 2- know how to implement crypto (by having studied different open source projects), 3- know all the main attacks at the code level. Ideally one should have a good knowledge of these 3 points before feeling confident in one's code.
It is exactly this 1-2-3 approach to learning that I was thinking about when I wrote the fateful tweet. How do you evaluate whether software is making proper choices? Why do you assume popular open source packages are secure? They often aren't; in fact, they're broken in meaningful ways more often than not.
It's the engineering equivalent of a game of telephone; you copy the errors of the systems you crib from, which are multifarious, and at the same time introduce new ones because human nature has you working hard only so long as there's a payoff, and 99.999% of the payoff in this approach happens once your system round-trips properly; you miss all the subtleties that happen after round-tripping works.
So yeah, don't do it like this.
Neither Colin nor I were suggesting that you could hope to learn how to build secure cryptography by cribbing code from open source projects. Colin isn't just saying "understand the math"; he's saying, "build provable systems, then prove them".
I realize this was a side remark in your post, but should I understand this as that in your opinion (maybe the consensus, even?), Applied Cryptography is outdated? Or just that when somebody needs AC to implement their crypto, they don't understand crypto enough to do it well? Or something else entirely?
(Asking because although I don't use crypto much, I do still use AC to get a handle on the high-level concepts; it was _the_ recommended book when I bought it in the late 1990's)
In the late 90's, the book that academics in the field recommended to me was the Handbook of Applied Cryptography. It's a lot more academic and mathematical, and not as mass-market friendly as Applied Cryptography, but it is also a lot more accurate for people wanting a fundamental mathematical and theoretical grounding in what's going on.
Can you elaborate on this? What is the most common scenario where somebody gets this wrong?
Adding an HMAC/CMAC/GMAC authentication code lets the receiver detect tampering.
Newer block cipher modes like CCM, GCM, OCB, and others roll both confidentiality and authentication into one, making it much easier to use AES correctly.
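What "rolled into one" looks like in code, a sketch using AES-GCM from the Python `cryptography` package (my example for illustration): one call encrypts and authenticates, and decryption fails loudly if anything was altered.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aead = AESGCM(key)
    nonce = os.urandom(12)   # 96-bit nonce; must never repeat under one key

    ct = aead.encrypt(nonce, b"attack at dawn", b"header")  # header authenticated too
    pt = aead.decrypt(nonce, ct, b"header")  # raises InvalidTag on any tampering
    assert pt == b"attack at dawn"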
Even within the academic community there has been recent upheaval over the value of proofs. Ostensibly, proofs are good in the sense that they restrict the focus of cryptanalysis. Without a security proof of a mode (say CTR or HAIFA), cryptanalysis couldn't focus on distinguishing the core function from random in some sense; it would have to target everything. Note that coding attacks doesn't help much here. It also seems that the role of proofs is often misunderstood, and assumed to mean much more than it does.
Then there are the practical aspects of the trade: cryptographic engineering. This involves avoiding information leaks (timing, power, etc), knowing what to do with IVs and keys and nonces, and the list goes on. This is a much more hands-on task, and often much less documented ("the implementation is left as an exercise to the reader"). Experience on building and attacking these is often the best way to learn how to do it, and not by reading a book about it.
A good chunk of bad cryptography was done by people who thought "eh, I can't see any attack against this" -- or read Applied Cryptography and thought that was enough. Hopefully the Matasano crypto challenges don't have the same effect.
First, even if you say that proofs are enough, you've got to know what you're proving. The problem is that, AFAIK, most security properties are actually defined as the absence of a particular attack (or a class thereof). Thus knowing the properties you want to achieve is equivalent to knowing the attacks you want to avoid. In other words, even if I build my system to have property A, I might not know that I also need to attain property B (thus securing it against the complement of B).
Second, if you do want actual proofs, well, good luck. You start off with the indistinguishability stuff, which is not easy in itself. Toss in the distributed aspects of Internet applications and you've got yourself a proper mess. Case distinctions abound, and this stuff slowly crosses into the domain of the intractable. If you look at the game-based security proofs, well, for anything non-trivial, who can really be confident that the proofs are correct? Machine verification would help, but our tools are both still too weak and still the domain of a select few specialists.
Third, even if you do get your proofs, more likely than not they are going to be based on a simplified model which sweeps a lot of stuff under the rug. E.g., I don't know of work that addresses things like timing attacks in anything of even remotely practical value. And there's a bunch of other stuff: key distribution, key management, etc.
Lastly, as a lot of other commenters have pointed out, you are also more likely than not to deal with existing codebases at some point, where you might end up plugging holes rather than constructing.
The Pythagorean theorem doesn't get outdated! Cryptography does.
Cryptography reminds me of cellular automata in that both are made up concepts that you can have lots of fun with, if you enjoy that kind of thing. I prefer CA because of its visual nature.
In the ongoing whack-a-mole of TLS vulnerabilities, RC4 was considered the best option. I am the opposite of an expert, so I have no idea if that was true then, or if it's still true now.
I'm not sure I agree. The RC4 keystream bias problem is really bad, and it's baked into TLS just like MAC-then-encrypt is. In a nutshell: there are byte offsets into an RC4 stream that are simply predictable. Bernstein and Paterson have an attack that recovers plaintext from it at multiple byte positions. But for the first couple biases, anyone can see how easy it is to recover a byte or two of plaintext from RC4. Clever attackers can shift the plaintext in TLS around to make that byte position more valuable.
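You can see the first couple of biases yourself in a few lines. The second keystream byte of RC4 is 0 with probability about 1/128, twice what a random stream would give (the Mantin-Shamir bias). A pure-Python sketch:

    import os

    def rc4_keystream(key: bytes, n: int) -> bytes:
        S = list(range(256))
        j = 0
        for i in range(256):   # key scheduling
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        out, i, j = [], 0, 0
        for _ in range(n):     # keystream generation
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    trials = 100_000
    hits = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
    print(hits / trials)  # ~0.0078 (1/128) vs 0.0039 (1/256) for a random stream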
That willingness to break what you have made, to take on an attacker's mindset and say "What can go wrong with this?"
That is exactly what coding attacks teaches.
Most attacks on secure systems involve attacking the engineering - the implementation of the system, rather than even attempting to break the crypto, even if it is only DES.
What about not using authenticated encryption? Crypto or implementation?
Storing SHA512 of passwords? Using non-cryptographic random number generators?
Care to explain the distinction between a 'cryptographic' random number generator and a 'noncryptographic' random number generator?
A random number generator is a random number generator. Some are worse than others under various metrics. Arguably, random number generators that are ill-equipped to generate high-quality random numbers shouldn't be used at all. For anything, cryptography included.
There are high-quality RNGs that aren't good CSPRNGs. CSPRNGs need to be fast, ready to deliver results after a cold boot, unbiased, and unpredictable even to an attacker who has seen earlier output. What you're probably thinking of as a "high-quality" source of random numbers is just part of a CSPRNG (the entropy source).
Conversely, CSPRNGs aren't actually suitable for all random number needs in programming; for instance, in software testing, you want an RNG you can seed and retrieve deterministic results from. If you can do that with your RNG, it's very unlikely that it's suitable for cryptography.
But I mean, specifically, using a CSPRNG (such as the algorithm behind /dev/urandom) vs. a pseudorandom number generator such as rand() from the C library.
You can read more about them on Wikipedia:
For modeling you might want very large quantities of random data. It's handy if you can reproduce those same numbers at will. This is a bad quality for a cryptographic random number generator.
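In Python terms, since the two jobs really are different:

    import random, secrets

    sim = random.Random(42)   # Mersenne Twister: seedable and reproducible,
    print([sim.randrange(100) for _ in range(3)])   # same output every run;
    # great for simulations and tests, and trivially predictable, so never
    # use it for keys, tokens, or nonces.

    print(secrets.token_bytes(16).hex())   # OS CSPRNG: unpredictable,
    # different every run, and deliberately not reproducible.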
"Good enough" really is good enough for many uses. Obviously, that's dangerous for crypto where we want "actually good".
Rather, I think the most reasonable definition of a science is very simple: a discipline whose primary technique for gathering new results is the scientific method.
Anyway, given either definition, I don't see how Cryptography is a science.