Speculation on "BULLRUN" (mail-archive.com)
295 points by danieldk on Sept 7, 2013 | hide | past | web | favorite | 88 comments



I was chair of the IPSEC working group at the IETF, and I think I know what John Gilmore was referring to with respect to the question of the Initialization Vector. The person who expressed a concern with a per-packet IV (and who, by the way, had indeed done a lot of consulting for the NSA; nice guy, though) didn't like the idea because it's too easy for people to screw up and use an incrementing counter for the IV. This is bad, because if the first couple of bytes in the packet are mostly fixed (because you are encapsulating a TCP or UDP packet, for example), the resulting plaintext in the first block of these packets will have a very low Hamming distance between them, and this can enable certain cryptanalytic attacks (such as differential cryptanalysis). He was not advocating using the same IV across the entire session, in the sense that we would use the same IV for each packet (which would have been a cryptographic disaster), but rather that we use the previous packet's last cipherblock as the IV for the next packet (i.e., just keep chaining the cipherblocks).
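To make the concern concrete, here is a minimal sketch (mine, not from the working group) of why a counter IV plus a near-constant packet header feeds CBC's block cipher closely related inputs; the header bytes are a made-up example.

```python
# CBC's first cipher input is E(IV xor P1). With IV = packet counter and
# a near-constant header in P1, consecutive packets' cipher inputs differ
# only in the counter's low bits (low Hamming distance).

def hamming(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

header = bytes.fromhex("45000054000040004001")  # hypothetical fixed IP header prefix
block = header + bytes(16 - len(header))        # pad out to one 16-byte block

inputs = []
for counter in range(4):                        # the bad scheme: IV is a counter
    iv = counter.to_bytes(16, "big")
    inputs.append(bytes(p ^ i for p, i in zip(block, iv)))

dists = [hamming(a, b) for a, b in zip(inputs, inputs[1:])]
print(dists)  # consecutive cipher inputs differ in only a bit or two
```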

As is often the case in standards bodies when there is disagreement, neither side won, and we left things unspecified. The RFC doesn't constrain how the IV should be chosen, so you can initialize the IV either way.

There was language indicating that the IV should never be the same, as well as an additional method of generating the IV using a Linear Feedback Shift Register (LFSR), which also eliminates the potential problem of related plaintext being fed to the crypto algorithm. So although the RFC doesn't say this, I would recommend either the use of an LFSR, or the last cipherblock of the previous packet encrypted under that key, and to avoid using a simple counter for the Initialization Vector (IV) in each packet of your IPSEC implementation.

So to be fair, in this particular case, the person with "known associations with the NSA" was actually trying to make things better, although he did feel constrained in not explaining all of his reasons in the open wg meeting. Which no doubt tripped Gilmore's paranoia, and I can't blame him for that either.


An LFSR? Really?

If this person was really NSA-qualified to recommend this stuff, he would have known that known plaintext was the least of your problems.

I'm sorry, tytso. It sounds like you all got expertly socially engineered.


What's the attack you're thinking of against CBC with IVs drawn from an LFSR?


LFSRs are not cryptographically secure random number generators. So you can, given a few IVs, recover the internal state and predict all subsequent IVs.

Then it's just a matter of mounting an (adaptive) chosen plaintext attack against the system knowing what the next IV will be. This is something you can't do against AES CBC mode if the IV is actually random.

It fails in the same way if you use the last packet's ciphertext as the IV and there are pauses between packets.
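A toy illustration of the state-recovery point (a hypothetical 16-bit Fibonacci LFSR, not any actual IPSEC proposal): since the taps would be public in a standard, a single observed output word is the entire internal state, and every later IV follows from it.

```python
TAPS = 0b1011010000000000  # hypothetical tap mask; in a standard this is public

def lfsr_step(state: int) -> int:
    bit = bin(state & TAPS).count("1") & 1     # XOR of the tapped bits
    return ((state << 1) | bit) & 0xFFFF

def run(state: int, n: int) -> list:
    out = []
    for _ in range(n):
        state = lfsr_step(state)
        out.append(state)
    return out

state = 0xACE1                       # secret seed
observed = []
for _ in range(3):                   # system emits three IVs; attacker watches
    state = lfsr_step(state)
    observed.append(state)

predicted = run(observed[-1], 5)     # attacker: the last IV *is* the whole state
actual = run(state, 5)               # system continues from its real state
assert predicted == actual           # perfect prediction of all future IVs
```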


The point of using an LFSR for IV generation is that it allows you not to transmit the IV on every message. Can you go into more detail about how you, a passive observer, are collecting enough information to reverse the LFSR?

(This is aside from the fact that we're talking about an extremely elaborate way to construct a CPA attack against a protocol that does not automatically give attackers chosen plaintext).


There are a variety of techniques[0] for breaking LFSRs if you have the outputs. CBC mode's security requires that IVs actually be random, or indistinguishable from random. LFSR output streams are not indistinguishable from random, so the system is on its face not secure.

A slight digression: asking how you'd actually figure that out, in the context of arguing for the security of the system, is the wrong question and a flawed approach to designing cryptographic standards. The same question could be asked about the last-packet IV issue Rogaway raised. No one knew at the time how to exploit it, but it was still a bad idea, as time has proved. "I can't see the attack" is not the right design philosophy for a cryptographic system. To anyone reading this, please don't use it. It usually leads to broken shit and, if some of the people proposing the standards are malicious, certainly will if they are cleverer than you (and the NSA probably is).

Now, if you are just curious how to break it: all you have to do is get a few IVs. Suppose the system does encrypt-then-MAC. Let's assume you can get the system to decrypt its own messages and give you the result (this is a weaker version of a chosen-ciphertext attack, and authenticated AES CBC is still safe under it provided IVs are random). Simply replay a given message a few times. The IV changes each time, and hence you get the result of m xor IV_1, m xor IV_2, m xor IV_3, etc. Minimally, this gives you the XORs of the various IVs with each other, which may be enough to calculate the LFSR state. If you happen to know m as well, then you have the IVs. Game over.
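The algebra of that replay step can be sketched in a few lines (a toy XOR model, with the unknown decryption output stubbed by random bytes): two decryptions of the same first block under different IVs leak the XOR of the IVs, because the unknown part cancels.

```python
import os

# In CBC, the decryptor outputs D(c1) xor IV. Replay the same c1 under
# two different IVs and XOR the outputs: D(c1) cancels, leaving IV_1 xor IV_2.
d_c1 = os.urandom(16)                           # unknown block-cipher output
iv1, iv2 = os.urandom(16), os.urandom(16)

out1 = bytes(a ^ b for a, b in zip(d_c1, iv1))  # decryption of replay 1
out2 = bytes(a ^ b for a, b in zip(d_c1, iv2))  # decryption of replay 2

leak = bytes(a ^ b for a, b in zip(out1, out2))
assert leak == bytes(a ^ b for a, b in zip(iv1, iv2))  # m cancels out
```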

Lastly, assuming you won't face a chosen-plaintext attack in a protocol with uses as broad as IPSEC's seems like a bad idea. Minimally, you clearly get one if it's used to secure HTTP with JavaScript in it, via the same mechanisms as BEAST. You also get one in the case of ICMP echo. File-sharing protocols would also seem to readily provide them.

[0]http://crypto.stackexchange.com/questions/5293/cryptanalysis...


I don't think I'm sticking up for LFSR-based IVs, but here you need your messages replayed in a pretty specific set of constraints: almost back to back (to get the samples you need to infer the state of the LFSR), and all in the same session. By comparison, Lucky 13 doesn't get to assume those constraints, right?

I don't think LFSR CBC IVs are a good idea (though again note that they work in CTR), but they seem like an awfully dumb way to backdoor a standard, and it's straightforward to see how a well-intentioned suggestion could propose their introduction.


Two technical notes:

1) AES CTR needs a nonce (number used once), not an IV. So a maximal-length LFSR works up until it repeats. People call it an IV, but strictly speaking IVs need to also be random.

2) I have no idea if you need the replays in a very specific order or not. If you know the m being decrypted, then you always get the IVs. I suspect that, given random parts of the output sequence of an LFSR and the distances between them, it's still easy to recover the seed.
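To illustrate note (1), here is a toy CTR construction (SHA-256 stands in for the block cipher; this is not real AES-CTR code) showing that the nonce requirement is uniqueness, not randomness: reuse the nonce and the two ciphertexts XOR to the XOR of the plaintexts.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Hash of key || nonce || counter stands in for E_k(nonce || counter).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def ctr_encrypt(key: bytes, nonce: bytes, pt: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(pt, keystream(key, nonce, len(pt))))

key, nonce = b"k" * 16, b"n" * 8
p1, p2 = b"attack at dawn!!", b"retreat at noon!"
c1, c2 = ctr_encrypt(key, nonce, p1), ctr_encrypt(key, nonce, p2)

# Same nonce => same keystream => ciphertext XOR reveals plaintext XOR.
xor = bytes(a ^ b for a, b in zip(c1, c2))
assert xor == bytes(a ^ b for a, b in zip(p1, p2))
```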


Strong agree on the distinction between IVs and nonces (I wrote about them in a blog post recently and the confusion annoys me too).


Oh, to be clear, I don't think the NSA backdoored the standard that way. I think the IETF was just doing what they do: ignoring principled cryptographic design because they thought they knew better. Hence why I made the point about the incorrect design philosophy. It happens to be the case that this kind of philosophy is also easy to subvert if someone is malicious. I don't think that's what happened in this case.


An LFSR system may not be an explicit backdoor, but it's piling pointless complexity upon general confusion by well-intentioned people who were apparently being manipulated and misdirected.


We have no idea if they were being manipulated. The NSA of 1998 is not the same organization as the NSA of 2013.


Did you miss the part where John Gilmore recalls a long list of evidence that the NSA was manipulating the IETF IPsec process?


Elaborate?


Discussion continued in other thread https://news.ycombinator.com/item?id=6347924


> which by the way had indeed done a lot of consulting for the NSA; nice guy, though

Brings up an interesting point. Going forward, should currently working for the NSA be disqualifying for serving on one of these committees? To have formerly worked for them? To have done a lot of consulting? To have done any consulting? To have ever had a security clearance?

Witchhunts have a deservedly negative connotation, but what if you are actually surrounded by witches?


The hard part is that the NSA has two missions --- cracking foreign crypto, and protecting US crypto. When an NSA employee speaks, we will have no idea whether he or she is speaking with the "protect US crypto" hat on, or the "SIGINT enablement" hat on.

I'd say the more important thing is that we not trust any one single person (or corporation). Everything needs to be open, and everything needs to be studied and verified by independent authorities.

This also applies to how you implement your security. You want two person controls for everything, and separation of roles --- those people who can bypass file access controls had better not also be able to wipe log records. The NSA is relearning that lesson the hard way as they try to figure out what records Snowden might have leaked --- apparently Snowden had the ability to impersonate other users, as well as wipe the log records which were _hopefully_ being made when person A uses an "su"-like command to impersonate person B.

After all, when someone stands up to give a suggestion at a standards meeting, or when someone goes through the hiring process to be one of your sysadmins, they won't be wearing a sign saying, "I do a huge amount of consultant work with the NSA and I have Top Secret/SCI/POLY clearance". Neither will job applicants have signs saying, "I'm getting paid by the Chinese MSS", or "I'm here to steal your SSL private keys".


Unfortunately the NSA has irrevocably damaged their ability to fulfill the "protect US crypto" mission.

Thanks for all the details on the process tytso, not to mention your work for all these years.


Well, "protecting US crypto" pretty much seems restricted to "protecting official government US crypto", not just the unwashed masses.


Recent revelations clearly indicate that the latter is merely secondary to the former, so I don't think this is a hard question at all.


No, it shouldn't. The NSA has more than one goal: to secure USG communication and to break everyone else's.

While it's a valid argument that in recent years the pendulum has swung way out toward subverting, this has not always been the case. They just cannot make the USG secure without giving some of that security to everyone. And they cannot weaken everyone's security without endangering themselves.

Of course, for now the US doesn't have a global adversary (that they know of). But with the world moving from a sole superpower toward multiple power centers, this may not be the case for long. And then it will swing in the other direction again.

Just wait for a real cyber attack through one of their backdoors to happen, and then you will have them working full steam ahead toward security.

Btw, one of the other things about the extremely potent NSA that is not mentioned at all: if you are a smart adversary, then you should just infiltrate the NSA, let them do the heavy lifting, and get the needed information on the cheap.


So the rest of the above comment, where it was explained that the whole issue is actually kind of nuanced, has important security implications in its implementation details, and is not at all "intentionally compromising", you just skipped over that, hey?


He brought up another point. Given the known NSA revelations so far, it seems ignorant at best to welcome people who have ties with the NSA without putting their opinions under some scrutiny.


I say sure, extra scrutiny there. But we should also hold hackers (in the layman's sense of the word) who steal information, finances, or actual things to just as tight scrutiny. I don't trust either, to be honest.


There's a temptation to throw the baby out with the bathwater.

The NSA have shown that they, unsurprisingly, have some ability in this area. The trick is surely to discern constructive and destructive contributions.


This reminds me of the position the Bible takes on demons: obviously they have power, and some might even want to help, except they are specialists in their stuff and a human is not. So how do you know which one to trust? You don't; thus dealing with them is forbidden outright.


What you said makes no sense. The assumption is that a block cipher is secure against differential cryptanalysis. CBC mode requires an unpredictable IV, and as the BEAST attack shows, chaining is a bad idea. RFC 3602 specifies using a random IV. Could you please cite minutes and an RFC, and maybe some more information, so we can judge for ourselves?


> The assumption is that a block cipher is secure to differential cryptanalysis.

That assumption was not taken for granted back in the '90s. It was only after years of cryptanalysis of AES (which had resistance to known-plaintext attacks as an explicit design goal) that designers felt comfortable with things like CTR mode.

In practice, we've learned that the block ciphers are by far the strongest part of the system, and it's the block cipher modes that provide the weakness for attackers.

IMHO this is the type of poisoned knowledge that the US spies should be blamed for.


...isn't a good place to be paranoid the core of AES itself, namely the substitution-permutation network hocus-pocus (Joan Daemen and Vincent Rijmen's)? A backdoor transformation there would suffice to make all other paranoia superfluous.


I'm not saying it's impossible, but decades of public cryptanalysis have failed to produce a significant dent in the security of 3DES or AES. On the other hand, we've seen a total discrediting of LFSR-based schemes and a steady flow of issues resulting from the block modes (padding and timing oracles).

We should be paranoid, but mostly that we might be not worrying about the right things.


You are right that padding/timing attacks on block modes and weaknesses in PRGs form the bulk of contemporary cryptanalysis. Nonetheless, if there is a single backdoor transform with a unique key in the AES substitution-permutation network that would make further attacks possible, it would be nearly impossible to devise an adversary/challenge to reveal it using current cryptanalysis techniques. The standardization history of AES, and the organizations involved, does not give much hope either.


There are a couple of reasons I have some hope for the security of AES:

1. There's this proof that a trapdoored symmetric cipher is equivalent to a public key system (there's a link in the metzdowd thread somewhere). No one seems to have figured out how to build a public key system with anything close to the efficiency of AES.

2. The specification of AES itself isn't that large. It's not minimal, but neither does it seem to have large arbitrary-looking constants. https://en.wikipedia.org/wiki/Kolmogorov_complexity https://en.wikipedia.org/wiki/Nothing_up_my_sleeve_number

The NIST Dual-EC DRBG fails on both counts: 1. it is based on an obvious public key system, and 2. it recommends the use of large arbitrary constants that could easily be a trapdoor.

Now is it possible that NSA occasionally intentionally fails to backdoor a NIST standard in order to help us underestimate how often they succeed? Can we ever develop a coherent threat model with the possibility of this kind of adversary?


I think if you're going to be a paranoid, Marsh, you might as well be an equal-opportunity paranoid and not exempt the algorithms you happen to like (or need, professionally). I think you're obligated to provide a real rationale for why we shouldn't all switch to Camellia.


How about RC4 -> GOST -> AES -> Camellia, all with unrelated keys?


Now I am genuinely curious: at what point did the standard game-based definitions of security become widespread? Because given those, it is obvious that CTR is secure given a PRP, which any block cipher should be in order to be semantically secure when encrypting a single block.


My impression is that it was NSA/NIST's AES specification process that made it "officially OK" to stop worrying about known plaintext.

But really we need to ask some of the folks older than I am. It would be a good question for the randombit or metzdowd cryptography lists. Also, I'll tweet at Bellovin.

Edit: https://twitter.com/marshray/status/376600321637638144


Um, so you realize that the BEAST attack is pretty TLS-specific (and heck, pretty implementation-specific --- it only works against certain WebSockets implementations). That's because it is a chosen-plaintext attack, and so it depends on TLS being used to encrypt JavaScript, where the JavaScript libraries are hosted on a server under the attacker's control.

Also, remember that in the early '90s, when IPSEC was still going through its interminable standardization process, this was pre-AES, and most of us were still using DES (or, if we could afford the CPU --- CPUs were much slower back then --- 3DES). Heck, some of this work was going on before Javascript was even invented! :-)

BTW, I was wrong; the comments about not using a counter did make it into an RFC. See RFC 2405, which does state that using a counter or some other low-Hamming-distance source for the IV is not recommended.

Note that there are dangers with using a random IV --- if you do this, the RNG used to generate your IVs had better be independent of the RNG used to generate your session keys, or else you might leak information about your RNG state that could be yet another wedge into attacking the crypto. (In particular, I would NOT recommend using RDRAND in bare fashion --- see the comments on my G+ post #ThinkLikeTheNSA about what fun and games you might have if you could compromise the RDRAND instruction.)

If you really want to be paranoid, generate a separate key (different from the keys used for encryption and for generation of the MAC) and use that key as the basis of a Cryptographic RNG, and then use the output of that CRNG for your IV.
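One possible reading of that recipe (a sketch with made-up labels, not the commenter's exact construction): derive a dedicated IV key from the master secret, then run an HMAC counter generator off it, so that even a predicted IV reveals nothing about the session-key RNG.

```python
import hashlib
import hmac

def derive_iv_key(master_secret: bytes) -> bytes:
    # Domain-separated from the encryption and MAC keys; the label is arbitrary.
    return hmac.new(master_secret, b"iv-generation", hashlib.sha256).digest()

def iv_stream(iv_key: bytes):
    # HMAC over a counter as a simple CRNG; 16-byte outputs for a 128-bit block.
    counter = 0
    while True:
        counter += 1
        yield hmac.new(iv_key, counter.to_bytes(8, "big"),
                       hashlib.sha256).digest()[:16]

gen = iv_stream(derive_iv_key(b"example master secret"))
iv1, iv2 = next(gen), next(gen)
assert iv1 != iv2 and len(iv1) == 16
```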


BEAST does not require encrypting of Javascript. It uses Javascript, which can just as well be delivered unencrypted, to induce the browser to make separate requests to the HTTPS site, which include the chosen plaintext.

In the context of network stacks rather than browsers, inducing the target to encrypt chosen plaintext is often possible - ICMP ECHO, for example. Of course it seems rather difficult to arrange for your chosen plaintext to be right at the start of an IPSEC packet, since there will usually be a TCP or UDP header - but perhaps it is possible (and even if it isn't, that looks like rather flimsy protection).


He is right about how specific the circumstances are that give rise to Bard's CBC IV attack. Not only that, but BEAST itself was the product of a former NSA cryptologist, now a Wisconsin university math professor; it is vanishingly unlikely that it was common knowledge inside of NSA, since the guy PUBLISHED it just a year or two after leaving the agency.

Either way, IPSEC was formulated in the dark ages of crypto. Almost nobody (except Phil Rogaway) knew this stuff about CBC back when the standard was being drafted.


> He is right about how specific the circumstances are that give rise to Bard's CBC IV attack. Not only that, but BEAST itself was the product of a former NSA cryptologist, now a Wisconsin university math professor; it is vanishingly unlikely that it was common knowledge inside of NSA, since the guy PUBLISHED it just a year or two after leaving the agency.

I don't think BEAST was a product of Bard. If we had only read Bard's papers, we probably would never have come up with BEAST [1]. Bard wanted to apply Rogaway-Dai's attack to SSL, but he didn't understand how browsers worked. His papers didn't target cookies --- in fact, he didn't even mention that word --- but he wanted to decrypt short PINs sent in POST requests, which is impossible as far as I can tell. The reason we cited Bard was that his English is much better than ours, so we thought we could re-use his explanation of Rogaway-Dai's attack in the context of SSL. In other words, we both attempted to exploit the same vulnerability, but we used different approaches and targeted different secrets.

Another way to look at this: imagine someone reads Bard's papers. Do you think he or she could use any of Bard's ideas to invent new BEASTies attacks, e.g., CRIME, Lucky 13, the RC4 attack, <your attack here>?


I think it's fair to say that BEAST introduced and established the modern CPA browser attack methodology, and that (for instance) Lucky 13 depended on it.

But here, we're talking about IPSEC, not HTTPS/TLS, and the fundamental cryptographic design weakness we're talking about is, I think, Bard's. I could be wrong, though.


> But here, we're talking about IPSEC, not HTTPS/TLS, and the fundamental cryptographic design weakness we're talking about is, I think, Bard's. I could be wrong, though.

No, it wasn't Bard's. It's Rogaway and Dai's.


Rogaway published though: was lack of attention the result of the NSA influencing industry to ignore academic work?


Rogaway didn't publish academically. He commented on the mailing list, and a bunch of standards group denizens pooped all over him. The same thing happened to Dan Bernstein on namedroppers. Is Perry Metzger an NSA agent? I have reason to doubt it.


http://www.cs.ucdavis.edu/~rogaway/papers/draft-rogaway-ipse... If you could provide some more information on responses, perhaps emails dismissing it, I think we can figure out what went wrong.


What went wrong with IPSEC was that a bunch of standards group people set out to build a crypto standard that would impact every networking vendor on the planet, and did so with frank and open hostility towards academic cryptographers. The idea that NSA would have tried to subvert IETF implies that IETF was organized enough to subvert. But it wasn't and isn't. IETF is like the Internet's student council.

Lots of smart people have involved themselves with IETF, but those among them that I have talked to about it seem universally and profoundly frustrated by the whole process.


Could this hostility be the result of the NSA subversion? It seems as though the same few hardware vendors keep popping up as the people slowing down security fixes.


No, I don't think so.


Is the argument whether using either of the proposed IV generation modes is wrong, or whether it was due to NSA tampering? It certainly is wrong to do so (and should have been known to be wrong by the time of Rogaway's comment).


I don't believe it was broadly known to be wrong during IPSEC standardization.


Is that true for the LFSR proposal? I thought LFSRs were basically never thought to be cryptographically secure.


I don't know the specific proposal, but I know how CBC designs with LFSR-generated IVs work, and why you'd want a deterministic generator for your IVs: it allows both sides to agree on a non-repeating progression of IVs without actually sending the extra block on every message.

LFSRs are also sometimes used in counter mode.

Again: obviously, a simple LFSR isn't cryptographically secure, and anyone who would deploy one in a cryptosystem probably already knows that. The question is: how are you mounting the attack you're talking about?


Do you know of protocols that use LFSRs for IV generation for cryptographic purposes? Can you link to them? I'd like to see if they are breakable.


I've got nothing I can cite to you; it's also true that a lot of bespoke crypto enjoys a lack of usable attacker chosen plaintext, which is admittedly not a good property to rely on for a standard.


The author, John Gilmore, shared these sorts of stories with me when I was complaining about the NSA intervention on the crypto work I was doing on Java. Sun said I had to have them sign off that we were 'compliant' (which means they would tell the State Department to sign off on our export license) and that was a total PITA. At the time I was working on an end to end loadable strong encryption package [1].

[1] http://patents.justia.com/patent/6546487 - if that looks complex and twisted, it is. The original version wasn't "compliant."


The Sun patent appears to be listed on Oct 19, 1999. Out of curiosity, when did the compliance discussions with the NSA occur?

(I'm quite interested in '90s crypto/political interaction, and had thought that by '99, the NSA and State/Export restrictions had been mostly lifted, but I'd love to learn more from the inside).


I was building a strong crypto package for Java in '93-'94. It was part of a capabilities system for Java which would have had the JVM unable to decrypt/instantiate methods in classes for which capabilities had not been granted. I modelled it on the Joule system which Mark Miller had worked on earlier.

In late 1991/early 1992, Sun had engaged RSA Data Security for a license to use their asymmetric crypto system, which we know as the RSA algorithm. I had started that because I was building a high-trust authentication system for our name service (NIS+), and I really wanted to be able to pass around public keys for servers that a client trusted to be legitimate. When I started working on Java security in 1993, I sought to use some of those same techniques in creating a way for the JVM to "load" a cryptographic class which it could trust, with the class able to validate that the loading JVM was trustable as well (avoiding MITM attacks).

Since I had already been engaged with the issues of shipping crypto code in ONC-RPC, I knew it would be an issue to release source code that implemented arbitrarily strong RSA, so when it came up, the NSA came out to talk to us. Our "rep" was a woman (and I'm not making this up) named Cindie Spies. She understood what I wanted to do but wasn't particularly keen on us doing it. We went back and forth, with me sending code to her and her sending back suggestions. It was a tedious process and ultimately moot, since the capabilities version of Java never actually shipped.


Fascinating stuff. The "Cindie Spies" negotiations (and not just the spot-on name) provide good context for what it would be like developing crypto in a tumultuous, formative period for our current security technology/regulations.


It wasn't, it was just raised to 56-bit from 40-bit.


"In 1999, all restrictions on key length were lifted, except for exports to embargoed countries." [1]

http://en.wikipedia.org/wiki/56-bit_encryption

If you have a source that contradicts this, I'm all ears. I'd just always believed (based on what I've read) that the period from '96-'99 opened up arbitrarily strong crypto (excluding embargos/restrictions to states like North Korea).


I know there are some hoops you have to jump through if you're selling an iOS app that uses encryption, which arguably could include HTTPS. There is a registration/approval process, but an exemption for certain cases. More details here:

http://stackoverflow.com/questions/9609901/when-to-check-the...


I think it technically applies to any sale of programs that include encryption stronger than 64-bit.



"Further relaxation of encryption export controls took place in September 1999, when the Clinton Administration announced that encryption items of any key length may now be exported under a license exception, after a technical review, to individuals, firms, and other non-government end-users in any country except for seven state supporters of terrorism. After a technical review, retail encryption commodities and software of any key length will also be exportable under a license exception to any recipient in any country except for the same seven destinations."

http://www.au.af.mil/au/awc/awcgate/crs/rl30273.pdf

I think we're pretty much in agreement, considering that the time period between Sep. '99 and Jan '00 is about the length of time for new policy to fully propagate. Regardless, the latter portion of '99 and early '00 was when NSA and State/Export restrictions had been lifted.


Patents look complicated and twisted no matter what they're about.


-1 on the title change. "Linux ipsec" vs "bullrun"? Really? Someone just flexing?


I may be digressing, but the symbolism they chose for these names is rather spooky. The US program is named after the first battle of the American Civil War, and its UK counterpart after the first of the English Civil War.


How are project or operation names chosen? I'd always assumed they had to be randomly selected so knowing the name wouldn't give you any clue (even accidentally) to the content.


You are correct, they are randomly chosen. See my previous comment: https://news.ycombinator.com/item?id=5841740


Like "PRISM?"


It does seem pretty fair to say that the IPSEC process, and IPSEC in general, has been an utter clusterfuck. Racoon, etc., are horrible (really anything with IKE). FreeS/WAN had its own problems, but at least Hugh was fairly reasonable. That doesn't necessarily point to the NSA, though.

(most of this criticism applies to DNSSEC too)


I can't _prove_ that there were people who were purposely delaying consensus to give their product more time to make it to market. But there were those who had their suspicions...


Racoon is a nightmare to set up. I would say that I'm reasonably technical, but I gave up after trying to set up a road-warrior config for home use passing through DD-WRT and terminating at a server.


Ah, crypto back in the late 1990s, the period when the NSA was trying to restrict export of encryption stronger than 40-bit. The funny thing is that 56-bit crypto is still in use in the form of PPTP with MS-CHAPv2. What is even worse is that people thought it was non-exportable 128-bit encryption at the time; the actual 40-bit and 56-bit versions are even worse.


So PPTP is insecure, and IPSec is fundamentally broken… How about OpenVPN? Is that compromised too?


OpenVPN uses SSL for encryption. It's an "SSL VPN". Any more questions?

There are more reasons than just current events to avoid using SSL. But if this NSA hoopla is what it takes to make people aware of how poor SSL is as a solution, then let the conspiracy theories flourish.


IPSec is not fundamentally broken, it's just needlessly complex.


If the unnecessary complexity has resulted in most implementations being broken then I'd be okay with calling it fundamentally broken, but I have no idea if that's actually the case.


Is there a simple (and auditable) subset of IPSec that could still be compatible with IPSec clients using a full/complex stack?


As I understand it most of the terribleness is in the key agreement stuff; the actual packet processing is a little ugly but is practical. So you could in theory set up (and regularly update) SAs manually with setkey(8), instead of having racoon do it. For a small network you could presumably lash together some cronjobs to keep this working. Would that be as secure as IKEv2 as actually implemented? I dunno :)
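For concreteness, a sketch of what those manually keyed SAs might look like with ipsec-tools' setkey(8). The addresses, SPIs, and keys here are all made up, and a real deployment would rotate the keys out of band (the cron jobs mentioned above); treat this as an illustration of the shape of the config, not a vetted setup.

```
# Flush old SAs and policies (hypothetical hosts 192.0.2.1 <-> 192.0.2.2).
flush;
spdflush;

# One ESP SA per direction, manually keyed (rijndael-cbc is setkey's
# name for AES-CBC; 128-bit keys shown, generated out of band).
add 192.0.2.1 192.0.2.2 esp 0x1001
    -E rijndael-cbc 0x00112233445566778899aabbccddeeff;
add 192.0.2.2 192.0.2.1 esp 0x1002
    -E rijndael-cbc 0xffeeddccbbaa99887766554433221100;

# Require ESP transport mode for all traffic between the two hosts.
spdadd 192.0.2.1 192.0.2.2 any -P out ipsec esp/transport//require;
spdadd 192.0.2.2 192.0.2.1 any -P in  ipsec esp/transport//require;
```

A cron job could regenerate the keys and re-run `setkey -f` on this file on both ends to approximate rekeying without IKE.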



> because of bullheadedness in the maintainer who managed that part of the kernel. Instead he built a half-baked implementation that never worked. I have no idea whether that bullheadedness was natural, or was enhanced or inspired by NSA or its stooges.

Who knows, maybe he was paranoid of others because he was incompetent. He could look at your code and its complexity would astound him, therefore you're trying to confuse him and mislead him into accepting your code without knowing what it was doing. He may have already known that there were NSA employees in kernel development.


It will be interesting to see if the NSA can still push their way into standards bodies. I wonder if similar techniques were used on WebRTC security.


I vaguely remember some incident back in early '00 when they found some "interesting" patch in FreeS/WAN code added by someone who was either consulting or tied otherwise with NSA. There was a flurry of discussion on a mailing list followed, I think, by the patch reversal.

Does anyone know what I am referring to?


This sounds like a good time to start doing crypto research outside of the United States.


Oh well. Is OpenVPN still considered solid?


I think someone needs to set up a fresh committee, and fix IPSEC, and deprecate the existing version.


IPSec is kind of broken; what happened to CurveCP, btw?





