Schneier's take on the alleged backdoor in OpenBSD (schneier.com)
41 points by cosgroveb on Dec 17, 2010 | 36 comments


I'm where he is. Interesting discussion, though -- and it really highlights the limits of "with many eyes all bugs are shallow".


He didn't add anything new to the discussion beyond his opinion. Crypto scholars are excellent at cryptography and security theory, but when it comes to actually implementing and securing systems (crypto algorithms excepted), they are poor at it. For example, he says it would be better to just find an existing vulnerability than to plant an FBI backdoor in the OpenBSD code: good luck, Schneier; obviously you don't know that much about OpenBSD security culture and history. Plus the NSA has a history of putting backdoors into solutions. This is just my opinion, from experience.


Bruce Schneier isn't some random academic. He's extremely well respected, and is the Chief Security Technology Officer of BT Communications. He has tons of experience with securing systems in the real world, and to say he "obviously [doesn't] know that much about OpenBSD security culture and history" is crazy.


I'll go out on a limb and say this right now: Bruce Schneier almost certainly doesn't know much about OpenBSD security culture and history.

He is, as I am fond of saying lately, "many good things", but.


Sorry, I didn't mean to appear to disrespect Bruce Schneier. I've met him, gone to his book signings, own all his books; I even buy his books as gifts for my friends. I'm a huge fan of his work. We need people like him who have done highly advanced study in the security field; he is the best, and an amazingly lucid writer. I never said he was a random academic or that his overall research should be disregarded. I really don't think you understood my original comment. Theory is not practice; they are two separate things. I guess I should have just said that. Do you see my point now?


I am not so much a Schneier fan, so if you feel like you need cover for leveling any kind of criticism against anything he says, don't worry too much. Are you sure you believe Schneier would know anything about the code quality of a specific IPSEC implementation --- or really, about the code quality of any IPSEC implementation?


Alright, I will bite. What are your reasons for not liking Bruce?

(I can't wait for this)



Oh, is that all? An entire generation was taught cryptography on the back of Applied Cryptography, so he's definitely the widest-read crypto dev.


He is definitely the widest-read crypto dev. There can be no question of that.


Out of curiosity, who else is out there writing essay-length pieces on crypto and security in general?


One of the better recent sites has been RSnake's:

http://ha.ckers.org/

But he just retired from blogging about netsec and is done with the industry, I think (a lot of people get sick of it; I left the security industry 10+ years ago and never looked back).


rsnake doesn't do any crypto work. Like, at all. A fine guy to go to for XSS or SQLI.


I highly respect Bruce Schneier, but he obviously has no clue about the current subject, like almost anybody else.


Plus the NSA has a history of putting backdoors into solutions.

Have there been proven (or at least credibly shown probable) to be NSA backdoors into shipping products?


The Clipper Chip[1] immediately comes to mind as the most publicized case of the NSA wanting a backdoor in consumer products. There are also recent stories of the US Government wanting similar encryption-disabling mechanisms in other technology[2]. Coupled with the Patriot Act letting the NSA eavesdrop on communications, that sets a precedent. A quote I am reminded of is "If you are on the internet, you aren't being paranoid enough".

[1] http://en.wikipedia.org/wiki/Clipper_chip

[2] http://www.wired.com/threatlevel/2010/09/fbi-backdoors/


Yeah, but 'wanted' doesn't equal 'did', as the OP claims.


I had always considered the Clipper-Chip incident to hint at the tip of an iceberg.

Do you really think that was an isolated one-time event?


The Clipper Chip was introduced in the open. They tried to push it through legislation. It's not like the NSA blackmailed Intel executives to include the capabilities secretly in their Pentium Processors without notifying their customers.

Same with the new proposed legislation. But all that demonstrates is that the NSA has an interest in being able to (legally) monitor encrypted communications. Which everyone already knows.

If someone had 'busted' the NSA trying to do something sneaky and/or covert and/or illegal, then you could argue that it's the tip of some iceberg of nefarious activity. But like I said this was all done out in the open.

You might as well say that because we know the FBI wiretaps phones through legally obtained court orders, that's the tip of the iceberg that points to millions of illegal wiretaps. It's a bad inference.


The whole purpose of the Clipper Chip was escrowed encryption; that is a big difference from a backdoor, especially from a concealed backdoor.


Yeah, the NSA modified the DES S-boxes during its development, they made the final tweaks to the GSM A5/1 algorithm, another commenter points out the Clipper Chip, etc. You are clueless if you didn't know these things; do you think the NSA just sits on their butt?


The NSA is believed to have strengthened DES by making its substitutions more resilient against differential cryptanalysis. Careful with words like "clueless".

On this point I'm inclined to agree with Schneier: why inject backdoors into things, leaving fingerprints and betraying both opsec and tradecraft, when you can just sit back and watch the software companies build the backdoors for you? NSA has as much as come out and said this at keynote speeches already, but it seems pretty obvious from my vantage point.


"The NSA is believed to have strengthened DES" No they didn't they cut the key size in half, and where can I find on a source that shows me that the s-box changes were intended to make DES stronger?


Alan Konheim (one of the designers of DES) commented, "We sent the S-boxes off to Washington. They came back and were all different."

[...]

Some of the suspicions about hidden weaknesses in the S-boxes were allayed in 1990, with the independent discovery and open publication by Eli Biham and Adi Shamir of differential cryptanalysis, a general method for breaking block ciphers. The S-boxes of DES were much more resistant to the attack than if they had been chosen at random, strongly suggesting that IBM knew about the technique back in the 1970s.
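To make "resistant to differential cryptanalysis" concrete: the attack counts how often each input XOR difference maps to each output difference through an S-box, and exploits the largest counts. A minimal Python sketch of that difference distribution table, using a toy 4-bit S-box rather than an actual DES S-box, might look like this:

  # Build the difference distribution table (DDT) for a toy 4-bit S-box.
  # Differential cryptanalysis exploits entries much larger than average;
  # a "resistant" S-box keeps the maximum entry small.
  TOY_SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
              0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

  def ddt(sbox):
      n = len(sbox)
      table = [[0] * n for _ in range(n)]
      for x in range(n):
          for dx in range(n):
              dy = sbox[x] ^ sbox[x ^ dx]
              table[dx][dy] += 1
      return table

  # Ignore the trivial dx = 0 row; the largest remaining count bounds the
  # best single-round differential an attacker can use.
  worst = max(max(row) for row in ddt(TOY_SBOX)[1:])
  print("max DDT entry (excluding dx=0):", worst)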

I'm done bickering about trivia, though. If you'd like the last word, as long as you don't say anything overtly stupid, I'm not going to respond. Happy holidays!


He's just pointing out that a big project will have bugs, and he's right. That's not a matter of opinion. Not much fuzz testing has been done on OpenBSD since the early 2000s. When Theo did fuzz testing back then, he found bugs. He claims to have found two in the crypto code just now while doing the audit. Code has bugs. Large projects have many bugs.


In the early 2000s I was still speaking to Theo, and I don't believe that during that time period he ever did systematic fuzz testing on OpenBSD. SPIKE wasn't even released until 2002.

Also: while we use fuzzers to probe for specific kinds of crypto flaws, the kind of fuzzing being done then (and for the most part today) does not identify crypto flaws.

We are, let's be clear, talking about a project that appears to have managed to ship IPSEC code that didn't verify packet authenticators for something like a year.
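For anyone wondering what "verify packet authenticators" means in practice: each ESP packet carries an HMAC-based integrity check value that the receiver has to recompute and compare before trusting the packet. A rough Python sketch of that check (names and layout are illustrative, not OpenBSD's actual code):

  # Per-packet authenticator check, roughly what ESP does with HMAC-SHA1-96.
  import hmac
  import hashlib

  ICV_LEN = 12  # HMAC-SHA1-96 truncates the tag to 96 bits

  def verify_packet(auth_key: bytes, packet: bytes) -> bytes:
      """Split off the trailing ICV, recompute it, and reject on mismatch."""
      if len(packet) <= ICV_LEN:
          raise ValueError("packet too short")
      body, received_icv = packet[:-ICV_LEN], packet[-ICV_LEN:]
      expected_icv = hmac.new(auth_key, body, hashlib.sha1).digest()[:ICV_LEN]
      # compare_digest avoids leaking the mismatch position via timing
      if not hmac.compare_digest(received_icv, expected_icv):
          raise ValueError("authenticator mismatch; drop the packet")
      return body  # only now is it safe to decrypt and process

  # The class of bug described above is a code path that skips this check
  # (or ignores its result), so forged or tampered packets are accepted.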


Here is what he said about his fuzz testing in 2000:

http://lwn.net/2000/0803/a/openbsdfuzz.php3


He's talking about the academic research project fuzzing was named for. All it did was fuzz command line arguments.
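That style of fuzzing is simple enough to sketch: feed a program random garbage as arguments and on stdin, and watch for crashes. A minimal Python version (the target path is a placeholder, not any particular OpenBSD binary):

  # Argv/stdin fuzzing in the spirit of the original "fuzz" project.
  import os
  import random
  import subprocess

  TARGET = "/usr/bin/target"  # hypothetical binary under test

  def random_blob(max_len=256):
      # Random non-NUL bytes; real fuzzers also try empty, huge, and
      # format-string-looking inputs.
      return bytes(random.randint(1, 255)
                   for _ in range(random.randint(1, max_len)))

  for i in range(1000):
      try:
          proc = subprocess.run([TARGET, random_blob()], input=os.urandom(512),
                                capture_output=True, timeout=5)
      except subprocess.TimeoutExpired:
          print(f"iteration {i}: hang (timeout)")
          continue
      # A negative return code means the child died from a signal
      # (e.g. -11 is SIGSEGV); that's the sort of crash this kind of fuzzing finds.
      if proc.returncode < 0:
          print(f"iteration {i}: crashed with signal {-proc.returncode}")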


Or pointing out how hard it would be to get a "bug" inserted into such a codebase. He's not saying that either endeavor is easy. His only point was that, given a budget of $X and an embarrassment cost of (abstract) $Y for getting caught attempting to degrade the security of OpenBSD [my interpretation of "risky"], spending those $X on the identification of existing exploits would be the more rational course of action.


Given the past feats of Theo de Raadt, my guess is that this is a nice stunt to get a free, thorough code audit :)


Too many stupid people are saying too many bad things about the OpenBSD project for this to be a net positive for him.

After all, he doesn't really profit from a free audit, and all the auditing I've seen so far has been done by the OpenBSD team itself.


Easy way to prove it isn't true:

Has there ever been a criminal case prosecuted in the USA where the FBI entered or revealed intercepted VPN data as evidence?


That's faulty logic: this way you can only prove that the backdoor exists, not that it doesn't.


What I meant to say was that it's a reason why it may not be true. I started typing the response with one thing in mind and ended with another.

Point still applies though. No cases where prosecution has cited intercepted VPN traffic.


Ah, but perhaps this is why it's so important that Gitmo detainees, et al., are not granted a trial?


Not sure about that, but if the FBI were conducting any investigations where they were able to exploit VPN traffic because of a hole in OpenBSD, it would show up in a trial somewhere (and if the prosecution does not present how they got the data, the defense can find out through discovery).



