(g) I believe that NETSEC was probably contracted to write backdoors as alleged.
(h) If those were written, I don't believe they made it into our tree.
And that the next version of OpenBSD will probably have a funny song about this on the end of CD #2
What's the history there?
Personally, I jibe with these personality types and think Theo is a fantastic fit for OpenBSD.
That is what you experience when you agree with him. Try to disagree and argue with him someday.
In my dozen or so years of experience with Theo, we've disagreed and
argued many times and he's always been reasonable. For the record, if I
had bothered to keep track of our debates, he's ended up being "right"
far more often than I have. Yep, I've been wrong, quite often, and he's
been kind enough to drop-kick me in a better direction. He's a friend
and I've learned a lot from him over the years.
Your misconception comes from people showing up on the OpenBSD mailing
lists with a prideful, hostile attitude, intending to prove how much they
know, when in fact, they haven't done their homework correctly or
completely. Things often go sideways in a hurry if people aren't willing
to look at their own opinions critically and do the additional work to
see the other sides. Of course, if you walk into someone else's home and
shit on their couch, you get what you deserve.
There are edgy debates between developers, particularly in code reviews
and proposed patches, but the reason for them is simple: everyone wants
to get to the "best" and "most correct" answer. In other words, the goal
is the same, but the opinions vary.
Being friends with the person/people on the other side of a pointed
debate is important, particularly across (human) language barriers. Jake
Meuser recently wrote about this on undeadly.org and I can't express it
any better than he has:
It's a worthy read.
I can believe this happens, but De Raadt wasn't asked to resign from the NetBSD core team by new users who hadn't done their homework correctly or completely; it was a consensus decision of some of the world's most experienced BSD developers who had worked with him extensively. (Of course, it's possible some/much of the fault lies with them.)
I think not always agreeing, not having perfectly aligned goals with others, and not always communicating well are just part of being human. But going our separate ways and doing our own thing can open up opportunities for changes (good and/or bad).
BTW, I never discussed BSD with him. I was never a BSD user and do not recall entering a BSD list.
Yes, working together as a happy family in the land of rainbows and candy canes is a nice ideal. But the reality is that not a lot of code is written by unicorns :)
Truth by consensus.
I might have hoped to see him stick by his guy.
But then, that's Theo. You take the good with the not-so-good.
But I'm not talking about rhetorical purity here. I'm talking about a comment that implies Theo gave a reasonable (maybe even classy) response to the situation. Put yourself back in the position of being a lead dev at Bear. Imagine someone accused one of your better developers of, say, fraud. What's the standard you set for yourself for sticking by your people? From your demeanor, my guess is your standard for loyalty to your own people is even higher than mine.
And that's my point here. How well is Theo acquitting himself here? He's not doing terribly. He's not necessarily doing great.
But hey, either way, I'm out here with my name signed to my posts. If it turns out Theo was justified in questioning how Jason handled this, everyone can look this post up and hoist me with it down the road. I would concede up front that this is kind of not a whole lot my business; I have a minor, historical, emotional attachment to it and nothing more.
You brought it up, I responded. There you go.
Mind you, this whole thing is built on the idea that NETSEC was, in fact, developing backdoors.
There are people who are smart, who do not.
There are people who are smart, give you ideas, and let you take all the credit.
And there are people who are smart, and subtly guide you until you discover "your" idea on your own.
I suspect that there's some positive feedback in this; if you are successful, you tend towards the latter, more cooperative end of the spectrum, and then people, seeing that you're kind, think more highly of you, and give you more respect and on balance you end up more successful. The converse is also true (I'm battling a little envy and frustration myself).
There's a saying in politics that "nobody cares how much you know until they know how much you care", and it's both corny and tainted by association with politics, but there's some truth in it; high intelligence is value-neutral - it can make you a more effective dick just as easily as it can make you a constructive force.
First, the tl;dr: this flaw is not as bad as a CBC padding oracle. It allows an attacker with continuous online access to a vulnerable target to conduct a slow, online dictionary attack against all previous ciphertext blocks. Given, say, a dictionary of passwords, and incurring packet I/O for each guess, you could discover whether a given ciphertext block encrypts any entry in the dictionary.
CBC is a way ("the" way, really) to take something like AES that works in 128-bit blocks and apply it across an arbitrarily long stream of data.
The naive way to do this (called "ECB") is simply to apply AES over and over in 16-byte increments. This sucks for two reasons: first, blocks can be shuffled around or replayed, because each is encrypted in isolation, and second, the same plaintext always encrypts to the same ciphertext --- so if you know the plaintext for a specific block, you can see if that block has ever occurred anywhere else.
CBC fixes that by XORing each plaintext block with the previous ciphertext block; it "chains" all the blocks together. This neatly addresses both those problems; you can't simply look for repeating patterns, because the same plaintext block encrypts to many different ciphertext blocks. Go CBC!
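The contrast can be sketched with a toy keyed block function standing in for AES (a minimal illustration only -- the function below is neither invertible nor secure, and all names are made up for the demo):

```python
import hashlib
import os

BS = 16  # block size in bytes

def E(key, block):
    # Toy keyed block function standing in for AES -- deterministic,
    # NOT invertible, NOT secure; just enough to compare block equality.
    return hashlib.sha256(key + block).digest()[:BS]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(16)
pt = b"YELLOW SUBMARINE" * 2          # the same 16-byte block, twice

# ECB: each block encrypted in isolation -- repetition leaks through.
ecb = [E(key, pt[i:i + BS]) for i in range(0, len(pt), BS)]
print(ecb[0] == ecb[1])               # True: the repeat is visible

# CBC: each plaintext block is XORed with the previous ciphertext block.
cbc, prev = [], os.urandom(BS)        # prev starts out as a random IV
for i in range(0, len(pt), BS):
    c = E(key, xor(pt[i:i + BS], prev))
    cbc.append(c)
    prev = c
print(cbc[0] == cbc[1])               # False: chaining hides the repeat
```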
The one tricky thing about CBC is what you do with the first plaintext block. There's no previous ciphertext block to XOR it with. The answer is what's called an "initialization vector" (IV), which is one of the worst names in crypto. All the IV is is a fictitious zeroth ciphertext block. It's a parameter to the CBC construction; when you use a library, it'll ask for the cipher ("AES"), the key size (128 bits), the mode ("CBC"), and the IV.
Here's the thing about the IV: it's conceptually ciphertext. It's a public value. Anybody can see it. The only thing it's not supposed to be is predictable or repeatable.
The way you're supposed to do the IV is very simple: you generate 16 random bytes from your CSPRNG, use them as the IV, and then tack those 16 random bytes to the front of your message so that the recipient can strip them off and use them as the IV.
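That convention can be sketched as follows (the `wrap_message`/`unwrap_message` names and the XOR "cipher" used for the round-trip demo are illustrative, not any real library API):

```python
import hashlib
import os

BS = 16  # IV length matches the cipher block size

def wrap_message(cbc_encrypt, key, plaintext):
    iv = os.urandom(BS)                          # fresh IV from the OS CSPRNG
    return iv + cbc_encrypt(key, iv, plaintext)  # IV rides in the clear

def unwrap_message(cbc_decrypt, key, wire):
    iv, ct = wire[:BS], wire[BS:]                # recipient strips the IV off
    return cbc_decrypt(key, iv, ct)

# Demo with a trivial stand-in "cipher" (XOR against a keyed pad -- NOT
# secure), just to show the framing round-trips.
def toy_enc(key, iv, data):
    pad = hashlib.sha256(key + iv).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, pad))

toy_dec = toy_enc  # XOR is its own inverse

wire = wrap_message(toy_enc, b"k" * 16, b"attack at dawn")
print(unwrap_message(toy_dec, b"k" * 16, wire))  # b'attack at dawn'
```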
Here's what OpenBSD (and SSH2, back before 2004) did instead: each time it encrypted a message, it used the last ciphertext block as the IV for the next message.
This makes perfect conceptual sense! Think about it: what they're basically doing is encrypting in one long, unending CBC stream. It's hard to see how that could even be a flaw, because a real ciphertext block seems stronger than an artificial one.
Turns out, no. Here's where that goes wrong: the attacker sees a message go across the wire, so the attacker has the last ciphertext block from that message, and therefore knows the IV for the next message before that message is encrypted. Here's what can happen: if the attacker can get the target to encrypt something for it†, the attacker can submit, as that block, known-iv XOR cipherblock-prior-to-the-target-block XOR guessed-plaintext-of-target-block. The target is going to XOR the known-iv into that block (it's the IV, after all), so they cancel each other out. The block prior to the target block cancels out the CBC XOR for the target block. So what you're left with is the ECB-style encryption of the target block; if you feed iv XOR previous-cipherblock XOR "YELLOW SUBMARINE" to the target, you can tell if the target block was "YELLOW SUBMARINE", because the ciphertexts will match.
The "oracle" here is the condition that the server will encrypt a block of plaintext for the attacker, thus revealing the associated ciphertext. The "vulnerability" is that by pre-disclosing the IV before using it, that oracle can be used to attack any previous cipherblock.
This is a neat attack, but also a total pain in the ass to use, and certainly not an FBI backdoor in OpenBSD IPSEC.
A nit about Theo's message:
This is not a complete summary of what's been discovered about OpenBSD's IPSEC stack. Marsh Ray also discovered that sometime back in 2003, OpenBSD discovered a serious vulnerability in its IPSEC stack: they weren't verifying ESP authenticators! This was due to a straightforward bug in Angelos' original code to handle crypto accelerator hardware, and it was fixed in 2003, and that's presumably why Theo didn't mention it.
But Theo is missing the point. We're not simply interested in whether OpenBSD is vulnerable today. We want to know if there's any evidence that the IPSEC stack was ever tampered with, and particularly around the time frame that Greg Perry suggested that it was. Worse still, OpenBSD had what appeared to be†† a very serious security flaw, and they fixed it without telling users. OpenBSD users do have a right to ask the question, "hey, what gives?".
† It turns out that this is very often easy to do! For instance, the attacker might be able to invoke a web app on the target and get it to send a database query across an IPSEC link; by playing with the lengths of the input, the attacker can coerce the target into sticking a specific piece of attacker-controlled data at the beginning of an IPSEC packet. Just one example.
†† We don't know if any particular configuration of OpenBSD with or without hardware accelerators in any particular release of OpenBSD had this problem exploitably, although it sure looks like they did.
> But Theo is missing the point. We're not simply interested in whether OpenBSD is vulnerable today. We want to know if there's any evidence that the IPSEC stack was ever tampered with, and particularly around the time frame that Greg Perry suggested that it was. Worse still, OpenBSD had what appeared to be†† a very serious security flaw, and they fixed it without telling users. OpenBSD users do have a right to ask the question, "hey, what gives?".
> †† We don't know if any particular configuration of OpenBSD with or without hardware accelerators in any particular release of OpenBSD had this problem exploitably, although it sure looks like they did.
If you do not care enough to do the work to prove or disprove your
allegations, then there's really no point in making or reiterating
allegations of tampering or exploitable releases. Unlike most people,
I believe you have the skill and experience necessary to do it, but
without doing the work, you're doing more harm than good.
If you were falsely accused of tampering, you'd be pretty upset with me
if I kept on yammering about it without providing a shred of evidence.
And rightfully so.
I think it's extraordinarily unlikely NETSEC even built a private version of that code with a backdoor of any sort in it, even though to have done so would be no more controversial than writing "ssldump".
I've been saying that for over a week now. Could I possibly be clearer about the fact that I don't think OpenBSD was backdoored? If so, I'm sorry.
What I see now is Theo refusing to put this to bed.
I think Theo should have told Greg Perry where to shove this story, then written a message saying that someone with zero credibility made a claim and that they were going to look at the code "just in case".
Some context, since you don't follow HN:
The specifics everyone should understand are as follows...
1.) All of the bugs found so far look like unintentional mistakes. Of
course, there's always some wise-ass who will say that a perfect
backdoor should look like an unintentional mistake, so proving intent
is impossible.
2.) No one has done the work necessary to prove the bugs found so far
are actually exploitable. Publicly speculating and debating whether or
not a bug is exploitable is harmful and disingenuous.
3.) Due to complexity, completely proving the code is perfect and free
of all exploitable bugs is intractable. The very best anyone can ever
say is, "I personally didn't find any bugs during my audit."
Given the above, ANY accusation of intentionally putting a backdoor
into code is indefensible, and hence, it is nothing more than vicious
rhetorical defamation. Even if such an accusation is true, it is still
fallacious and must be discarded.
I hope you don't mind if I pilfer a wonderfully descriptive phrase from
you, but I feel Gregory Perry's accusations qualify him as a
"mendacious kook." I'm not omniscient, so I would never say there's
"zero chance" of a backdoor being placed in anything. Nonetheless, in
this situation, we basically agree. I believe it is exceedingly unlikely
a backdoor ever made it into the tree.
The real problem is Perry made some very serious and damaging
allegations. If Theo had just ignored this kook, he would have been
taken to task for not divulging and addressing them.
Theo did exactly what you suggested in his initial Dec 14th message to
the security-announce@openbsd list:
> The mail came in privately from a person I have not talked to for nearly 10 years. I refuse to become part of such a conspiracy, and will not be talking to Gregory Perry about this. Therefore I am making it public so that (a) those who use the code can audit it for these problems, (b) those that are angry at the story can take other actions, (c) if it is not true, those who are being accused can defend themselves.
Essentially, you asked for it to continue. The same is true for many
others, so you are certainly not alone. And yes, even my discussing this
with you publicly on HN means I'm also at fault for the continuation.
The accusations made against Jason Wright and Angelos Keromytis are
indefensible, so I cannot defend them. You cannot defend them. Theo
cannot defend them. No one can defend them, and they cannot defend
themselves. The one thing all of us should clearly and loudly say is,
"The accusations are indefensible, fallacious, and should be discarded,
but we should still look at the code again to see if there are any
exploitable bugs."
OpenBSD being trolled by some kook is not newsworthy. It happens all the
time. All the articles on HN and elsewhere are just whoring a
fallacious and most likely falsified controversy, and by doing so,
defaming two people who gave their time and effort to develop open
source software.
I am angry. After making great contributions to open source, two great
hackers, Jason Wright and Angelos Keromytis, are being defamed and I am
unable to prove they are innocent because no one can prove they are
innocent of indefensible accusations. It's frustrating.
Out of respect for Jason and Angelos, I'm done talking about it.
The tough question is, why does it take an overly verbose village idiot
like me to clearly state the obvious?
In OpenSSH we increased the frequency of rekeying to make it even less feasible. More recently as a result of the CBC attacks found by the researchers at Royal Holloway (http://www.openssh.org/txt/cbc.adv) we switched to preferring CTR mode ciphers (and RC4) which are immune to both these attacks.
The IV chaining is obviously broken, but it doesn't seem like it would make a good backdoor.
RFC 2405 (1998-11) says "The IV MUST be a random value."
RFC 3602 (2003-08) says "The IV MUST be chosen at random, and MUST be unpredictable." So the protocol community knew about the issue by that time.
So it was a learning experience for the protocol developers at that time, not just OpenBSD.
If there is a smoking-gun type backdoor found, I suspect it will have to do with unexpected or out-of-order IP headers or ipcomp.
Is CTR mode considered insecure, or is it just not appropriate here?
An attacker who can get a target to repeat a CTR nonce can often coerce that target to repeat an entire keystream, which is devastating.
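A toy CTR construction shows why nonce reuse is so devastating (the hash-based keystream below stands in for AES-CTR, and the messages are made up; the failure mode is the same either way):

```python
import hashlib
from itertools import count

def ctr_keystream(key, nonce, nbytes):
    # Toy CTR keystream: hash(key || nonce || counter) blocks, standing
    # in for AES-CTR. Same nonce + same key => same keystream.
    out = b""
    for ctr in count():
        if len(out) >= nbytes:
            break
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
    return out[:nbytes]

def ctr_encrypt(key, nonce, pt):
    ks = ctr_keystream(key, nonce, len(pt))
    return bytes(a ^ b for a, b in zip(pt, ks))

key, nonce = b"k" * 16, b"\x00" * 8   # the nonce is (wrongly) reused!
c1 = ctr_encrypt(key, nonce, b"transfer $100 to bob")
c2 = ctr_encrypt(key, nonce, b"transfer $999 to eve")

# The shared keystream cancels: c1 XOR c2 == p1 XOR p2, so knowing one
# plaintext reveals the other outright -- no key recovery needed.
x = bytes(a ^ b for a, b in zip(c1, c2))
p2 = bytes(a ^ b for a, b in zip(x, b"transfer $100 to bob"))
print(p2)                              # b'transfer $999 to eve'
```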
I'm pretty certain 3.0 and 3.1 had this bug, regardless of hardware accelerator. At the bottom of this post http://extendedsubset.com/?p=41 there's a rough description of the code path.
BTW, the cryptography @ metzdowd list was mentioned elsewhere as a decent source. It kinda goes in spurts, and Perry Metzger (the moderator) has pretty high standards for what is of sufficient quality and refuses to read more than one email a day from anyone; the @ randombit.net list is unmoderated.
PS: I have a free book: http://www.subspacefield.org/security/security_concepts.html
> ciphertext. It's a public value. Anybody can see it.
> The only thing it's not supposed to be is predictable
> or repeatable.
You don't need the IV to be unpredictable; as long as it's a different string every time, it's all OK. The outcome of the encryption also depends on the first block of real plaintext, so your IV can just be a few random bytes with a decent distribution: no need for a cryptographically secure random number generator.
Also, answer my email! =)
When a block cipher in CBC mode is used, an initialization vector (IV) is maintained for each key. This field is first initialized by the SSL handshake protocol. Thereafter the final ciphertext block from each record is preserved for use with the following record.
Paul Kocher co-wrote this spec. A decent illustration of why people shouldn't design their own crypto protocols; some of the most skilled people in the world manage to make mistakes. At least this mistake was tiny.