As is often the case in standards bodies when there is disagreement, neither side won, and we left things unspecified. The RFC doesn't constrain how the IV should be chosen, so you can initialize the IV either way.
There was language indicating that the IV should never repeat, as well as an additional method of generating the IV by using a Linear Feedback Shift Register (LFSR), which also eliminates the potential problem of related plaintext being fed to the crypto algorithm. So although the RFC doesn't say this, I would recommend using either an LFSR or the last cipher block of the previous packet encrypted under that key, and avoiding a simple counter for the Initialization Vector (IV) in each packet in your IPSEC implementation.
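For concreteness, here is a minimal sketch of what an LFSR-based IV generator might look like. The 64-bit register size and the tap positions are my own illustrative choices, not anything the RFC specifies; the point is only to avoid the low-Hamming-distance, related-IV problem of a plain counter. Note that the output is still completely predictable to anyone who knows the taps.

```python
import os

class LFSR64:
    """Toy 64-bit Fibonacci LFSR for generating 8-byte CBC IVs.
    Tap set (64, 63, 61, 60) gives a maximal period of 2**64 - 1."""
    TAPS = (63, 62, 60, 59)  # 0-indexed bit positions

    def __init__(self, seed=None):
        # A zero state would lock the LFSR, so guard against it.
        self.state = int.from_bytes(seed or os.urandom(8), "big") or 1

    def next_iv(self):
        iv = self.state.to_bytes(8, "big")
        for _ in range(64):          # clock once per output bit
            fb = 0
            for t in self.TAPS:
                fb ^= (self.state >> t) & 1
            self.state = ((self.state << 1) | fb) & (2**64 - 1)
        return iv

gen = LFSR64()
assert gen.next_iv() != gen.next_iv()   # successive IVs differ
```

Because gcd(64, 2**64 - 1) = 1, sampling the maximal-period state every 64 clocks still never repeats within the period.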
So to be fair, in this particular case the person with "known associations with the NSA" was actually trying to make things better, although he did feel constrained in not explaining all of his reasons in the open wg meeting, which no doubt tripped Gilmore's paranoia, and I can't blame him for that either.
If this person were really NSA-qualified to recommend this stuff, he would have known that known plaintext was the least of your problems.
I'm sorry, tytso. It sounds like you all got expertly socially engineered.
Then it's just a matter of mounting an (adaptive) chosen plaintext attack against the system knowing what the next IV will be. This is something you can't do against AES CBC mode if the IV is actually random.
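To make the predictable-IV problem concrete, here's a toy sketch (a hash-based PRF stands in for the block cipher, and the counter IV and oracle are illustrative, not any real protocol). If the attacker can predict the next IV, a single chosen-plaintext query confirms or refutes a guess at an earlier plaintext block:

```python
import hashlib
import os

BLOCK = 16

def prf(key, block):
    # Toy stand-in for one-block encryption with a block cipher.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class Oracle:
    """Encrypts single CBC blocks for the attacker; the IV is a
    predictable counter, which is the whole problem."""
    def __init__(self):
        self.key = os.urandom(BLOCK)
        self.ctr = 0

    def current_iv(self):               # attacker can compute this too
        return self.ctr.to_bytes(BLOCK, "big")

    def encrypt_block(self, p):
        c = prf(self.key, xor(p, self.current_iv()))
        self.ctr += 1
        return c

oracle = Oracle()
iv_secret = oracle.current_iv()
secret = b"attack at dawn!!"            # unknown to the attacker
c_secret = oracle.encrypt_block(secret)

def check(guess):
    # Submitting guess XOR iv_secret XOR iv_next makes the new IV
    # cancel out, so the ciphertext equals c_secret iff the guess is right.
    iv_next = oracle.current_iv()
    return oracle.encrypt_block(xor(xor(guess, iv_secret), iv_next)) == c_secret

assert not check(b"retreat at dusk!")
assert check(b"attack at dawn!!")
```

With a truly random IV, the attacker can't construct that cancelling input, which is exactly why the guess-checking attack fails against random-IV CBC.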
It fails in the same way if you use the last packet's cipher-text as the IV and there are pauses between packets.
(This is aside from the fact that we're talking about an extremely elaborate way to construct a CPA attack against a protocol that does not automatically give attackers chosen plaintext).
A slight digression: asking how you'd actually figure that out, in the context of arguing for the security of the system, is the wrong question and a flawed approach to designing cryptographic standards. The same question could be asked about the last-packet IV issue Rogaway raised. No one knew how to exploit it at the time, but it was still a bad idea, as time has proved. "I can't see the attack" is not the right design philosophy for a cryptographic system. To anyone reading this: please don't use it. It usually leads to broken shit, and if some of the people proposing the standards are malicious, it certainly will if they are cleverer than you (and the NSA probably is).
Now, if you are just curious how to break it: all you have to do is get a few IVs. Suppose the system does encrypt-then-MAC. Let's assume you can get the system to decrypt its own messages and give you the result (this is a weaker version of a chosen-ciphertext attack, and authenticated AES CBC is still safe even with this issue provided IVs are random). Simply replay a given message a few times. The IV changes each time, and hence you get the result of m xor IV_1, m xor IV_2, m xor IV_3, etc. Minimally, this gives you the XORs of the various IVs with each other, which may be enough to calculate the LFSR state. If you happen to know m as well, then you have the IVs. Game over.
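The XOR bookkeeping in that replay scenario is easy to make concrete (the decryptor class below is a deliberately simplified model of the situation described, not a real CBC implementation):

```python
import os

BLOCK = 8

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class LeakyDecryptor:
    """Models the scenario above: the first CBC plaintext block is
    P1 = D(C1) xor IV, so replaying a fixed ciphertext block under a
    fresh IV each time hands the attacker m xor IV_i."""
    def __init__(self, ivs):
        self.ivs = iter(ivs)

    def decrypt_first_block(self, m):
        # m plays the role of D(C1), which is fixed across replays.
        return xor(m, next(self.ivs))

ivs = [os.urandom(BLOCK) for _ in range(3)]    # IV_1, IV_2, IV_3
m = b"ABCDEFGH"                                # fixed D(C1)
dec = LeakyDecryptor(ivs)
r1, r2, r3 = (dec.decrypt_first_block(m) for _ in range(3))

# Without knowing m, the attacker still learns relations between IVs:
assert xor(r1, r2) == xor(ivs[0], ivs[1])
assert xor(r2, r3) == xor(ivs[1], ivs[2])
# Knowing m (a replayed known message) yields the IVs themselves:
assert xor(r1, m) == ivs[0]
```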
I don't think LFSR CBC IVs are a good idea (though again, note that they work in CTR), but they seem like an awfully dumb way to backdoor a standard, and it's straightforward to see how a well-intentioned suggestion could have introduced them.
1) AES CTR needs a nonce (number used once), not an IV. So a maximal-length LFSR works right up until it repeats. People call it an IV, but strictly speaking IVs also need to be random.
2) I have no idea if you need the replays in a very specific order or not. If you know the m being decrypted, then you always get the IVs. I suspect that given random parts of the output sequence of an LFSR and the distances between them, it's still easy to recover the seed.
Brings up an interesting point. Going forward, should it be disqualifying to be currently working for the NSA to be on one of these committees? To have formerly worked for them? To have done a lot of consulting? To have done any consulting? To have ever had a security clearance?
Witchhunts have a deservedly negative connotation, but what if you are actually surrounded by witches?
I'd say the more important thing is that we not trust any one single person (or corporation). Everything needs to be open, and everything needs to be studied and verified by independent authorities.
This also applies to how you implement your security. You want two person controls for everything, and separation of roles --- those people who can bypass file access controls had better not also be able to wipe log records. The NSA is relearning that lesson the hard way as they try to figure out what records Snowden might have leaked --- apparently Snowden had the ability to impersonate other users, as well as wipe the log records which were _hopefully_ being made when person A uses an "su"-like command to impersonate person B.
After all, when someone stands up to give a suggestion at a standards meeting, or when someone goes through the hiring process to be one of your sysadmins, they won't be wearing a sign saying, "I do a huge amount of consultant work with the NSA and I have Top Secret/SCI/POLY clearance". Neither will job applicants have signs saying, "I'm getting paid by the Chinese MSS", or "I'm here to steal your SSL private keys".
Thanks for all the details on the process tytso, not to mention your work for all these years.
While the argument that in recent years the pendulum has swung way over toward subverting is valid, this has not always been the case. They simply cannot make the USG secure without giving some of that security to everyone. And they cannot weaken everyone's security without endangering themselves.
Of course, for now the US doesn't have a global adversary (that they know of). But with the world moving from a sole superpower toward multiple power centers, this may not be the case for long. And then it will swing in the other direction again.
Just wait for a real cyber attack through one of their backdoors to happen, and then you will have them working full steam ahead toward security.
Btw, one of the other things about the extremely potent NSA that isn't mentioned at all: if you are a smart adversary, you should just infiltrate the NSA, let them do the heavy lifting, and get the information you need on the cheap.
The NSA have shown that they, unsurprisingly, have some ability in this area. The trick is surely to discern constructive and destructive contributions.
That assumption was not taken for granted back in the '90s. It was only after years of cryptanalysis of AES (which had resistance to known plaintext as an explicit design goal) that designers felt comfortable with things like CTR mode.
In practice, we've learned that the block ciphers are by far the strongest part of the system, and it's the block cipher modes that provide the weakness for attackers.
IMHO this is the type of poisoned knowledge that the US spies should be blamed for.
We should be paranoid, but mostly that we might be not worrying about the right things.
1. There's this proof that a trapdoored symmetric cipher is equivalent to a public key system (there's a link in the metzdowd thread somewhere). No one seems to have figured out how to build a public key system with anything close to the efficiency of AES.
2. The specification of AES itself isn't that large. It's not minimal, but neither does it seem to have large arbitrary-looking constants. https://en.wikipedia.org/wiki/Kolmogorov_complexity https://en.wikipedia.org/wiki/Nothing_up_my_sleeve_number
The NIST Dual-EC DRBG fails on both counts: 1. it is based on an obvious public key system, and 2. it recommends the use of large arbitrary constants that could easily be a trapdoor.
Now is it possible that NSA occasionally intentionally fails to backdoor a NIST standard in order to help us underestimate how often they succeed? Can we ever develop a coherent threat model with the possibility of this kind of adversary?
But really we need to ask some of the folks older than I am. It would be a good question for the randombit or metzdowd cryptography lists. Also, I'll tweet at Bellovin.
BTW, I was wrong, the comments about not using a counter did make it into an RFC. See RFC 2405, which does state that a counter or some other low-Hamming-distance source should be avoided.
Note that there are dangers with using a random IV --- if you do this, the RNG used to generate your IVs had better be independent of the RNG used to generate your session keys, or else you might potentially leak information about your RNG state that might be yet another wedge into attacking the crypto. (In particular, I would NOT recommend using RDRAND in bare fashion --- see comments on my G+ post #ThinkLikeTheNSA about what fun and games you might have if you could compromise the RDRAND instruction.)
If you really want to be paranoid, generate a separate key (different from the key used for encryption and for generation of the MAC) and use that key as the basis of a Cryptographic RNG, and then use the output of that CRNG for your IV.
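One way to realize that suggestion is a simple HMAC-counter generator keyed with the dedicated IV key. This exact construction is my own illustrative sketch, not a vetted DRBG design --- in practice you would reach for a standardized construction such as HMAC_DRBG:

```python
import hashlib
import hmac
import os

class IVGenerator:
    """A dedicated key, independent of the encryption and MAC keys,
    drives a simple HMAC-counter CRNG whose output is used only for
    IVs. Illustrative sketch, not a vetted DRBG."""
    def __init__(self, iv_key, iv_len=16):
        self.key = iv_key
        self.iv_len = iv_len
        self.counter = 0

    def next_iv(self):
        self.counter += 1
        block = hmac.new(self.key, self.counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        return block[:self.iv_len]

# Three independent keys: compromise of the IV stream reveals nothing
# about the encryption or MAC keys, and vice versa.
enc_key, mac_key, iv_key = (os.urandom(32) for _ in range(3))
gen = IVGenerator(iv_key)
assert gen.next_iv() != gen.next_iv()
```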
In the context of network stacks rather than browsers, inducing the target to encrypt chosen plaintext is often possible - ICMP ECHO, for example. Of course it seems rather difficult to arrange for your chosen plaintext to be right at the start of an IPSEC packet, since there will usually be a TCP or UDP header - but perhaps it is possible (and even if it isn't, that looks like rather flimsy protection).
Either way, IPSEC was formulated in the dark ages of crypto. Almost nobody (except Phil Rogaway) knew this stuff about CBC back when the standard was being drafted.
I don't think BEAST was a product of Bard. If we had only read Bard's papers, we probably never would have come up with BEAST. Bard wanted to apply Rogaway-Dai's attack to SSL, but he didn't understand how browsers worked. His papers didn't target cookies - in fact he didn't even mention that word - but he wanted to decrypt a short PIN sent in POST requests, which is impossible as far as I can tell. The reason we cited Bard was that his English is much better than ours, so we thought we could re-use his explanation of Rogaway-Dai's attack in the context of SSL. In other words, we both attempted to exploit the same vulnerability, but we used different approaches and targeted different secrets.
Another way to look at this: imagine one reads Bard's papers. Do you think he or she could use any of Bard's ideas to invent new BEAST-like attacks, e.g., CRIME, Lucky 13, the RC4 attack, <your attack here>?
But here, we're talking about IPSEC, not HTTPS/TLS, and the fundamental cryptographic design weakness we're talking about is, I think, Bard's. I could be wrong, though.
No, it wasn't Bard's. It's Rogaway and Dai's.
Lots of smart people have involved themselves with IETF, but those among them that I have talked to about it seem universally and profoundly frustrated by the whole process.
LFSRs are also sometimes used in counter mode.
Again: obviously, a simple LFSR isn't cryptographically secure, and anyone who would deploy one in a cryptosystem probably already knows that. The question is: how are you mounting the attack you're talking about?
http://patents.justia.com/patent/6546487 - if that looks complex and twisted, it is. The original version wasn't "compliant."
(I'm quite interested in '90s crypto/political interaction, and had thought that by '99, the NSA and State/Export restrictions had been mostly lifted, but I'd love to learn more from the inside).
If you have a source that contradicts this, I'm all ears. I'd just always believed (based on what I've read) that the period from '96-'99 opened up arbitrarily strong crypto (excluding embargos/restrictions to states like North Korea).
I think we're pretty much in agreement, considering that the time period between Sep. '99 and Jan '00 is about the length of time for new policy to fully propagate. Regardless, the latter portion of '99 and early '00 was when NSA and State/Export restrictions had been lifted.
(most of this criticism applies to DNSSEC too)
There are more reasons than just current events to avoid using SSL. But if this NSA hoopla is what it takes to make people aware of how poor SSL is as a solution, then let the conspiracy theories flourish.
Who knows, maybe he was paranoid of others because he was incompetent. He could look at your code and its complexity would astound him, therefore you're trying to confuse him and mislead him into accepting your code without knowing what it was doing. He may have already known that there were NSA employees in kernel development.
Does anyone know what I am referring to?