N.S.A. Foils Much Internet Encryption (nytimes.com)
923 points by ebildsten on Sept 5, 2013 | 388 comments

You can't have read Applied Cryptography from the mid-90s and not understand this to have been NSA's M.O. from the jump. Bruce Schneier, who was quoted in the Guardian piece about the same story, is America's foremost popularizer of the notion of NSA as crypto's global passive adversary. People who build real cryptosystems have never, ever been allowed to rely on the goodwill of the NSA not to cryptanalyze their systems.

Entire crypto schemes, from the RIPEMD hash to the specific parameter generation mechanism in DSA, are premised on the idea that USG-sponsored crypto concepts aren't inherently trustworthy. Similarly, all of Applied Cryptography was premised on the idea that NSA was decades ahead of commercial and academic crypto.

Of the revelations about NSA, this has to be the least revelatory (it's right up there with the "revelation" that NSA employs teams of people whose job it is to break into Windows computers); it essentially restates something we were already supposed to have taken for granted.

That's not to say this isn't a fascinating story. It is; just keep it in context. Things to remember:

* You really want to know whether NSA is directly attacking cryptographic primitives or whether they're subverting endpoints. I think if you talk to cryptographers, you'll get a slight bias towards the belief that it's the latter: that there are implementation weaknesses at play here more than fundamental breaks in crypto.

* You want to keep in mind that breaks in cryptosystems represent new knowledge, and that the enterprise of breaking cryptosystems is an issue distinct from the public policy concern of where NSA is allowed to deploy those breaks.

* Bear in mind that in the legacy TLS security model, before things like pinning and TACK, NSA would only require a viable attack on a small subset of CAs to gain (along with pervasive network taps) massive capabilities. The payoff for these kinds of capabilities is radically degraded by the anti-surveillance mechanisms of modern browsers like Chrome, which is something you probably want to be thanking people like Adam Langley, Trevor Perrin, and Moxie Marlinspike for pushing so hard to implement.

What amazes and saddens me about this, though, is that I was one of the people who thought that we could draw a line -- the NSA was obviously going to keep its cryptanalysis techniques secret, they probably listened to everything, but the idea that they were actively sabotaging cryptosystems just seemed too far-fetched a conspiracy theory. Half their mission is to protect US communications from foreigners, and backdoors are the most obvious way to not achieve that goal.

Yet here we have proof that the NSA is truly in the business of sabotaging cryptosystems that are in general use. Those systems protect US interests as much as foreign interests, and now they are not trustworthy. Now I am left wondering -- PGP, for example, deviates from theoretical constructions of non-malleable encryption; might that have been the NSA's doing? What about the problems in various versions of TLS? Now it is hard to say what is an honest mistake and what is a deliberate effort to undermine computer security.

We are now past the point of not blaming on malice what we can attribute to stupidity, because we have evidence that there is actual malice on a grand scale. It is a truly sad day for this world...

No, that's not NSA's doing. PGP predates the theoretical constructions you're referring to. Bellare/Namprempre was something like 5 years after the first "modern" PGP (IIRC the original PGP used a terribly broken cipher of Zimmermann's own design). Also, malleability is not a particularly lucrative capability for NSA to have, even if you want to assume that the integrity mechanisms in PGP are broken.

I am pretty sure that the OpenPGP standard has been updated since that work, and that it is still not quite following the constructions.

Also, I do not think the NSA would have no interest at all in malleability. Suppose the NSA is trying to track messages sent through anonymous remailers (Type I, maybe because the target is using a nym server) and there is a "Max-Count: 1" header. An easy attack that exploits malleability would be to maul the message somewhere after the headers and see where a mauled message exits the remailer network. This is probably possible with the NSA's resources and expertise, and the NSA is probably concerned about anonymity systems in general (and perhaps looking for ways to attack them).
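The maul-and-trace idea can be illustrated with a toy XOR stream cipher (a stand-in for illustration, not PGP's actual construction): an attacker who can modify ciphertext in transit flips chosen plaintext bits without ever knowing the key.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy CTR-style keystream: hash(key || counter). NOT a real cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR stream: the same function encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"remailer-demo-key"   # known only to sender and recipient
msg = b"Max-Count: 1 -- meet at noon"
ct = xor_crypt(key, msg)

# The attacker never learns the key, but flips one ciphertext bit...
mauled = bytearray(ct)
mauled[-1] ^= 0x01
# ...and the corresponding plaintext bit flips on decryption ("mauling").
tampered = xor_crypt(key, bytes(mauled))

assert tampered[:-1] == msg[:-1]       # rest of the message intact
assert tampered[-1] == msg[-1] ^ 0x01  # exactly the targeted bit changed
```

Without an integrity check (a MAC or authenticated encryption mode), the recipient has no way to notice the change, which is what makes the remailer-tagging attack work.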

My real point, though, is that we need to stop for a moment and re-evaluate pretty much all the cryptography standards we depend on. We really cannot say that these systems have not been deliberately sabotaged by the NSA, not with this latest revelation.

That security systems are designed in the most paranoid fashion possible doesn't tell you anything about the real nature of the threat. Schneier's book doesn't tell you that the NSA has been strong-arming corporations into giving up their private keys and into installing backdoors on chips.

In fact Schneier himself is outraged to the point that he seems to be calling for a redesign of basic Internet protocols and governance in his article today, http://www.theguardian.com/commentisfree/2013/sep/05/governm...

Yeah, I'm a little baffled by Schneier's reaction to this. The revelation is advanced cryptanalytic capabilities at NSA, which is literally an article of faith with Schneier. Why is he freaking out about this when he didn't instead freak out about wholesale call record database dumps or AT&T fiber taps?

This is not just about cryptanalysis. The NSA has been deliberately introducing weaknesses into cryptosystems used by the general public. That is beyond keeping cryptanalysis techniques secret, which we all assumed they would do and which few really drew any issue with. We are talking about an honest-to-goodness conspiracy, one that yesterday many would have written off as a conspiracy theory that was not even worth considering.

Basically, what we thought were the rules of the game are not the rules of the game. We thought we knew where we stood with the NSA -- they would try to attack, we would try to defend. Now we need to be thinking of a much different set of rules, one in which the NSA is not just attacking ciphers but also deliberately sabotaging our defense, and doing so covertly. We cannot even assume that mistakes really are mistakes anymore -- they could be the NSA's doing.

Basically, what we thought were the rules of the game

Er...speak for yourself buddy. If you thought that you could get proper crypto security from a boxed software product then I'd like to offer you a fantastic deal on a bridge.

Do you think you can write one yourself? How many people are actually qualified to do that?

Where do you think one would get proper crypto?

Yes, and I think anyone who is reasonably good at math is qualified to do so. I'm not saying I could build undefeatable crypto, mind; I'd have to spend a year brushing up my number theory before I'd make an attempt and I don't fool myself that I'm smarter than the average NSA analyst, so I might well fail.

But if you want proper crypto and are willing to invest some time in it, I'd say take a strong open source algorithm and then rewrite it. Sure, maybe there's backdoors in the compilers, in the chips, maybe they have quantum computers and there's backdoors in the fabric of reality.

My point is not that I know unbeatably secure crypto, but that I have always assumed the NSA was using any and all means available to defeat crypto, and if you ever thought otherwise you were telling yourself fairy stories.

There is a difference between incompetence / mistakes, which we know to expect from cryptosystems, and deliberate sabotage.

I said nothing about incompetence. Rather, I assume that any commercial product of that kind is compromised, because spies have such an obvious interest in compromising it. I mean, if I were a spy I wouldn't just ask companies to put backdoors in (although I would do that too), I would actively spy on the software companies. I have always assumed powerful intelligence agencies adopted a zero-sum approach to things, because ultimately they are judged on results, not a purity score.

I think there is a fundamental difference between advanced cryptanalysis (which we always assumed they had due to hiring practices and history) and being able to break crypto by subverting infrastructure.

If the NSA said, "Our super smart brain trust figured out how to own your stuff with math five years ago ... ha ha!", I think we would be Totally Fine with that. Hats off to them for winning that game, but at least they played mostly fairly. (In theory.)

However, this is different. Winning the cryptanalysis game because they backdoored protocols, gained access to trusted entities' private keys, etc, just means that they are really good at the SPYING game, not at the cryptanalysis game, and somehow that just feels worse.

But speaking as a non-American here, what do you expect? The NSA is in the spying business, and ultimately its performance is measured by results, not methodologies. All this hand-wringing is a bit like people expressing horror over the discovery that the CIA sometimes stoops to burglary or deception.

I mean, in an ideal world the only way to compromise my password would be for a beautiful lady spy to seduce me and trick me into revealing it in a moment of passion, but in the meantime it's a safer bet that they'll just try to fish it out of my modem/router/ISP/etc.

Note to NSA: I'm actually happily married, so please don't send over any beautiful lady spies, which would be totally awkward.

And I would imagine that your password is complex enough that it would be hard to recite in a moment of passion. That being a rather single-threaded activity.

Don't underestimate my multi-tasking ability.

>> what do you expect? The NSA is in the spying business, and ultimately its performance is measured by results, not methodologies.

The NSA, as a government agency, is in the business of serving the US citizens who pay its salaries and acting in their interests. Ultimately its performance is measured by us.

My boss would fire me if I put a backdoor in his email.

> My boss would fire me if I put a backdoor in his email.

Then why is it OK for Snowden to do the same to his employer?

The way I see it, Snowden reported the misconduct of his employer (the NSA) to their employer (the public).

I think there's some cognitive dissonance at work here in the hacker community. It's easier to look up to the NSA et al. if they're just better at math than you. It's so clean, so pure, if you ignore the black-bag jobs and kinetic side of their work.

Obligatory xkcd: http://xkcd.com/538/

I would be keeping my hat on. They would have done it with the taxpayers' money but without their consent or even knowledge. They would then be withholding a major scientific breakthrough from the public that financed it. A scientific breakthrough that might have all sorts of applications that could make our lives better.

They would be exposing all the people that rely in strong cryptography to major risks. Including people that have done nothing illegal and helped fund their research.

And more importantly, they shouldn't be reading our emails to begin with, whether or not they are encrypted. That was never the deal; no democratic process ever gave them the right.

Paying some 22 year old deskjockey a couple mil to code a backdoor into an encryption app isn't a scientific breakthrough, it's just traditional spycraft. Using the weight of the US government to force Microsoft to code a backdoor into Bitlocker isn't a scientific breakthrough, it's the sort of things governments do.

A big part of the difference is that cryptanalysis weakens us against the NSA; sabotage weakens us against everyone.

> If the NSA said, "Our super smart brain trust figured out how to own your stuff with math five years ago ... ha ha!", I think we would be Totally Fine with that. Hats off to them for winning that game, but at least they played mostly fairly. (In theory.)

I disagree. Certainly they'll be doing that too, but breaking crypto is hazardous to the populace independently of how it's broken, right?

Either way, from NSA's perspective they are fighting a war: with terrorism, with other nations' crypto efforts, etc. In that context there are very few "unfair" ways to fight. And indeed, the U.S. has done something like this to a certain Soviet pipeline, as I recall.

Besides, this at least leaves open the possibility of like-minded people maintaining countermeasures. If crypto is broken in general then we're all naked. If weak implementations are the problem then we would need to be fixing those anyway.

I just wish I knew which one it is we're looking at.

>> breaking crypto is hazardous to the populace independently of how it's broken, right?

Breaking crypto from the outside proves that it's breakable; if the NSA can do it, it's just a matter of time until others do.

Undermining crypto from the inside means deliberately exposing all communications to increased risk of hacking by anyone, anywhere.

Quite different.

> Undermining crypto from the inside means deliberately exposing all communications to increased risk of hacking by anyone, anywhere

Well, that does depend on how they weaken it. If it gets weakened such that it goes from "impossible" to "nation-states can crack" then there's still only 3-4 agencies in the whole world that could decrypt.

But that would also tend to preclude passive wideranging cryptanalysis, which is what I'm sure NSA would prefer to be able to do.

> If it gets weakened such that it goes from "impossible" to "nation-states can crack" then there's still only 3-4 agencies in the whole world that could decrypt.

You'd have to be talking about a gigantic change for it to benefit them. I want my crypto to take 10 billion years to crack; intelligence agencies want to crack it in a week.

And what they can crack in a week today, hobbyists will be able to crack in a day a few years from now.

Weakening crypto means opening it to every criminal in the world. Computers get faster and secret backdoors get leaked.

If it isn't safe from everyone, in the long run, it isn't safe from anyone.
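The "what they crack in a week today, hobbyists crack in a day tomorrow" intuition can be put in numbers. A back-of-the-envelope sketch (all rates below are illustrative assumptions, not measurements of anyone's real capability):

```python
# How a backdoor that shrinks the effective key length changes
# brute-force time, and how attacker speed compounds over the years.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_search(effective_bits: int, keys_per_second: float) -> float:
    """Expected years to find a key (half the keyspace) at a fixed rate."""
    return (2 ** effective_bits / 2) / keys_per_second / SECONDS_PER_YEAR

nation_state = 1e15  # assumed keys/sec for a large agency
hobbyist = 1e9       # assumed keys/sec for one enthusiast

full = years_to_search(128, nation_state)      # intact 128-bit key: hopeless
weakened = years_to_search(64, nation_state)   # backdoor leaks half the key

def years_until_feasible(effective_bits: int, start_rate: float,
                         budget_seconds: float,
                         doubling_years: float = 2.0) -> float:
    """Years until a rate that doubles every doubling_years finishes
    the search within budget_seconds of compute."""
    rate, elapsed = start_rate, 0.0
    while (2 ** effective_bits / 2) / rate > budget_seconds:
        rate *= 2
        elapsed += doubling_years
    return elapsed

# With 64 effective bits, a hobbyist rig needs only a few decades of
# hardware progress before a one-week search becomes realistic.
hobby_wait = years_until_feasible(64, hobbyist, budget_seconds=7 * 24 * 3600)
```

Under these assumptions, the intact key stays safe for longer than the age of the universe while the weakened one falls in hours, and the hobbyist catches up within a generation: once the effective strength drops into "agency" range, time does the rest.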

Aren't they doing both?

There's a difference between assuming something because it's prudent to do so and actually knowing it's true. And even if Schneier was extremely confident about it, he was still, in the minds of most people, just a paranoid guy on the corner screaming conspiracy theories about what the NSA may have and what they may be doing with it. Now he has some ammo, and he'd be foolish not to use it.

Because he assisted the Guardian in working on the story and must have seen some documents that made him hit the ceiling. Even if he isn't explaining the nitty-gritty, I trust his reaction.

I keep hoping that Schneier's position is going to be some kind of guiding light forward because of his longstanding position that these "revelations" should be taken for granted. Since he does seem to be freaking out, do you have another voice that's worth listening to about how to think about all of this going forward?

What do you mean by, "freak out?"

I don't mean that dismissively.

OK, but what do you mean by it?

It may be widely believed in cryptography circles, but this release wipes away the plausible deniability that governments and American corporations have always depended on. Just last week, the German government was pooh-poohing claims that Windows and TPM chips had backdoors inserted by the NSA.[1] These documents all but confirm it.

[1] http://www.zdnet.com/german-government-refutes-windows-backd...

What? Which documents confirm backdoors in TPM chips?

Even without naming the companies involved, it's very hard to imagine they are inserting backdoors in less-valued products while somehow missing the crown jewels of Windows and TPM.

I keep finding myself in the awkward position of trying to refute conspiracy theories, but not being at liberty to share everything I know about these scenarios (I really need to work somewhere besides DC), so I'll tread lightly.

Taking for granted that the NSA actually backdoored TPMs (which I can assert professionally is very unlikely, but I don't expect anyone to take my word for it), they are far from "crown jewels".

The only "meaningful" large-scale use of TPMs is actually within the Department of Defense. It's been a pretty uphill battle getting them deployed and used in other environments.

You realize that these are exactly the same arguments that were brought up to argue against the details revealed in these documents, so perhaps appeals to authority and use of the words 'conspiracy theories' may be taken with a few more grains of salt. NSA backdoors have been alleged for decades now, and the response is always that they're a 'conspiracy theory'.

My argument isn't that the NSA hasn't backdoored TPMs (which I freely admit I can't convince you of); it's that TPMs are not "the crown jewels".

TPM 2.0 is a crown jewel for the NSA. Windows 8 full-disk encryption is based on TPM, and Windows 8.1 certification requires a TPM 2.0 module. It already is or soon will be universal in PC hardware. The NSA was involved in its creation, and resisted changes to the standard. At the same time that the German government was publicly claiming there were no backdoors in Windows or TPM, it had privately concluded the system was compromised.

Source: http://news.techworld.com/security/3465259/is-windows-8-a-tr...

Yeah, I have to agree. The wide distribution of Windows makes it an important thing to have access to. In fact, I would go so far as to say that every commercial WDE is suspect.

"I keep finding myself in the awkward position of trying to refute conspiracy theories, but not being at liberty to share everything I know about these scenarios"

There are things I want to say about that sort of thinking, but I am afraid to say them. What a wonderful world...

Disagree. Over the medium term, TPMs (which message board geeks have been unhelpfully demonizing for years) are part of a system of technologies that could make laptop encryption much harder to break. Laptop encryption is a real operational challenge for both HUMINT and law enforcement.

That's true, but I've spent a good portion of the last year and a half dealing with them and disagree on the likelihood of them ever achieving widespread adoption. My company would love for me to be wrong about this.

Ah, so if we can imagine it, it must be true.

No, but now we cannot just assume that cryptosystems are being developed in good faith or that mistakes are not actually covert sabotage. We need to check these systems before we put our trust in them.

But why would you ever have assumed this? I mean, I don't really care whether something was a mistake in good faith or covert sabotage; the useful question is whether something is secure or not as far as I can tell. Assessing the motivations is a complete waste of my time as an individual.

It does matter if the NSA is actively sabotaging our cryptosystems. If people are making mistakes we can solve the problem as a community by improving the techniques we use to develop, document, and test cryptosystems. If we are dealing with people who are deliberately weakening our cryptosystems, it will be harder to push better techniques because our adversary will push back against them, or sabotage the techniques themselves.

In my view this was true anyway, since any mistake could be the result of foolishness or malice - if not on the part of the NSA, on that of the Russian, Chinese, British, Israeli, (etc.) security services. Crypto is an arms race between people with conflicting interests, and always has been; I don't mean to be rude, but I think your former view of the way things operated was a bit naive.

Well, that's what happens when they lose the "good faith".

Crypto noob here: Is it feasible to guard against "unknown unknown" vulnerabilities by applying multiple layers of different forms of encryption? While a chain is only as strong as its weakest link, it intuitively seems like layered encryption is as strong as its strongest link.
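The layering idea can be sketched in a few lines. This toy uses a hash-counter keystream in place of real, independently designed ciphers (which is what you would actually want); the point is only the structure: each layer has its own key, and stripping the outer layer alone yields ciphertext, not plaintext.

```python
import hashlib

def stream(key: bytes, n: int) -> bytes:
    # Toy hash-counter keystream standing in for a real cipher (AES, ChaCha, ...).
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        i += 1
    return out[:n]

def layer(key: bytes, data: bytes) -> bytes:
    # XOR layer: applying it twice with the same key is a no-op.
    return bytes(a ^ b for a, b in zip(data, stream(key, len(data))))

k_inner, k_outer = b"key-for-cipher-A", b"key-for-cipher-B"
msg = b"attack at dawn"

# Encrypt with cipher A, then wrap the result in cipher B.
ct = layer(k_outer, layer(k_inner, msg))

assert layer(k_inner, layer(k_outer, ct)) == msg  # both keys recover the message
assert layer(k_outer, ct) != msg                  # outer key alone: still ciphertext
```

The caveat is that the intuition holds for ciphertext-only attacks with independent keys; layering does nothing against a compromised endpoint, a weak RNG feeding both keys, or an implementation that leaks.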

While taking for granted the NSA's M.O. (also having read Applied Cryptography some years ago) this leak somehow hits harder than the rest.

Yes, we had to assume it was happening and we'd have been foolish not to. But to have it laid out in no uncertain terms is somehow quite devastating.

Can you expand a bit on Chrome's anti-surveillance capabilities?

They pin certificates, so that a CA compromise that would enable a MITM attack by the global passive adversary would be detectable (and in fact that mechanism has already been used to detect CA compromises).
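The mechanics are simple enough to sketch. This is a toy, not Chrome's implementation, and it pins the hash of the whole DER certificate for brevity (real browser pinning hashes the SubjectPublicKeyInfo); the hostname and pin below are made up for illustration, with the demo pin being the SHA-256 of the empty byte string:

```python
import base64
import hashlib

# Hypothetical pin store: host -> set of base64(SHA-256(DER cert)).
PINNED = {
    "mail.example.com": {"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="},
}

def fingerprint(der_cert: bytes) -> str:
    return base64.b64encode(hashlib.sha256(der_cert).digest()).decode()

def pin_ok(host: str, der_cert: bytes) -> bool:
    # No pin registered -> fall back to ordinary CA validation.
    pins = PINNED.get(host)
    return pins is None or fingerprint(der_cert) in pins

# A MITM cert signed by *any* trusted CA still fails the pin check,
# which is what makes a CA compromise detectable.
assert pin_ok("mail.example.com", b"") is True        # matches the demo pin
assert pin_ok("mail.example.com", b"fake") is False   # MITM cert rejected
assert pin_ok("unpinned.example.com", b"fake") is True
```

The design point is that pinning removes the "any one of hundreds of CAs can mint a valid cert for any site" property that made CA compromise so valuable.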

Why do you say "passive adversary"? I wouldn't call an MITM with a fake cert "passive".

I wouldn't call a MITM with a fake cert an effective global attack in 2013.

As we've already seen, NSA and other such agencies already have direct connections into the under-sea cables that connect countries across the globe. MITM is exactly what they do ALL THE TIME. To not see it as effective is to miss the point of Total Information Awareness.

This is parody, right?

Would you say that using a browser like Chrome and using TLS 1.2 with 2048-bit RSA keys and AES, is likely to be safe for many years to come?

That question cannot be answered, sorry. Cryptographically it is sound today. But that point is entirely moot if the NSA has the CA private key, or has access to your computer.

Do you think it's easier to discover attacks on AES or court order CA's?

You need to trust your OS, Chrome's cryptography implementation, AES and RSA, and the endpoint, its OS, and its possible role as a mute puppet. Oh, and don't forget everyone's hardware!

The point is not mooted if NSA has compromised a CA, because Chrome does more than simply trusting the CAs.

It was these types of capabilities that revealed that Iran (IIRC) had managed to break DigiNotar to issue a Gmail certificate a year or two ago to surveil Iranians via MITM.

What do you think the likelihood is of NSA doing (active) SSL MITM attacks using NSL'd CA keys?

Users running extensions that use things like EFF's SSL Observatory (HTTPS Everywhere has an option to report to that) will cause those NSA-generated certs to show up, and someone will get suspicious eventually. The only reports I've seen recently on that front were things like middle eastern users sending in samples of MITM certs. I'm not saying the NSA can't do it, but what evidence is there that the NSA has done MITM on SSL traffic? For all we know, couldn't the MITM ssl certs in the mideast be an NSA false flag op?

I agree that they are probably subverting endpoints. Nobody has mentioned it yet, but I suspect that differential power analysis plays a big role. The published attacks using DPA have been both devastating and trivial.

The allegations of widespread hardware backdoors are ludicrous. The backdoors would eventually become public, requiring the replacement of billions of dollars' worth of equipment, and several times that cost in audits. Only a spymaster with a death wish would chain his career to that. More likely is that people are misinterpreting backdoors in a few chosen endpoints, which we can take as standard operating procedure.

I am so glad I resisted pressure from engineers working at Intel to let /dev/random in Linux rely blindly on the output of the RDRAND instruction. Relying solely on an implementation sealed inside a chip, and which is impossible to audit, is a BAD idea. Quoting from the article...

"By this year, the Sigint Enabling Project had found ways inside some of the encryption chips that scramble information for businesses and governments, either by working with chipmakers to insert back doors..."

Thank you for that, and I hope that the people who keep arguing "if you can't trust your hardware, you can't trust anything, so we may as well just blindly trust the hardware" keep this in mind.

A bug deliberately introduced in an AES instruction, or in general purpose instructions that detects crypto operations and leaks information somehow, is much, much harder to implement and hide than a pseudo-random number generator that passes all tests that you apply to it, but produces predictable output for someone who knows some secret key.
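That last point, a generator that passes every statistical test yet is fully predictable to a key holder, is easy to demonstrate. A minimal sketch (the trapdoor key and seed size are made up for illustration):

```python
import hashlib

SECRET = b"only-the-designer-knows-this"  # hypothetical trapdoor key

def backdoored_rng(seed: int, n_bytes: int) -> bytes:
    """Output passes simple statistical tests, yet anyone holding SECRET
    can regenerate the entire stream from a brute-forceable 32-bit seed."""
    out, counter = b"", 0
    while len(out) < n_bytes:
        out += hashlib.sha256(
            SECRET + seed.to_bytes(4, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:n_bytes]

stream = backdoored_rng(seed=42, n_bytes=100_000)

# Crude frequency test: about half the bits should be 1. It passes.
ones = sum(bin(b).count("1") for b in stream)
assert abs(ones / (len(stream) * 8) - 0.5) < 0.01

# Yet the "randomness" is fully reproducible by the key holder.
assert backdoored_rng(42, 64) == stream[:64]
```

No black-box test can distinguish this from an honest generator; only the holder of SECRET (or someone who reverse-engineers the silicon) knows the difference.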

Was that really a seriously considered plan? I don't see how that would ever be a suitable /dev/random replacement. Obviously it works for /dev/urandom, but it should be added to the entropy pool for /dev/random at most.

Matt Mackall, the former maintainer of /dev/random, actually stepped down over this issue, because Linus overrode Matt and applied Intel's patch that used their hardware random number generator directly:


> It's worth noting that the maintainer of record (me) for the Linux RNG quit the project about two years ago precisely because Linus decided to include a patch from Intel to allow their unauditable RdRand to bypass the entropy pool over my strenuous objections.

> From a quick skim of current sources, much of that has recently been rolled back (/dev/random, notably) but kernel-internal entropy users like sequence numbers and address-space randomization appear to still be exposed to raw RdRand output.

Ted Ts'o later reverted this, separating out Intel's hardware random number generation into a separate function that could be used to seed the entropy pool but wouldn't be trusted directly as the main kernel source of random numbers:
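The difference between the two designs can be sketched in a toy model (this is an illustration of the mixing principle, not the actual kernel driver): hardware RNG output is folded into a hash-based pool alongside other entropy sources rather than emitted raw.

```python
import hashlib
import os

class EntropyPool:
    """Toy model of the mixing approach: hardware RNG output is folded
    into a pool alongside other sources, never trusted raw."""

    def __init__(self):
        self._state = b"\x00" * 32

    def mix(self, data: bytes) -> None:
        # Fold new input into the pool state with a hash.
        self._state = hashlib.sha256(self._state + data).digest()

    def read(self, n: int) -> bytes:
        # Toy extraction step; real designs separate output from state.
        out = b""
        while len(out) < n:
            self._state = hashlib.sha256(self._state + b"out").digest()
            out += self._state
        return out[:n]

# Two pools receive the SAME (possibly attacker-known) hardware bytes,
# but different environmental entropy (interrupt timings, etc.).
hw_bytes = b"\x00" * 32  # stand-in for RDRAND output the attacker predicts
pool_a, pool_b = EntropyPool(), EntropyPool()
for pool in (pool_a, pool_b):
    pool.mix(hw_bytes)
pool_a.mix(os.urandom(32))  # stand-in for other entropy sources
pool_b.mix(os.urandom(32))

# Knowing the hardware contribution alone predicts nothing:
assert pool_a.read(32) != pool_b.read(32)
```

With this structure a backdoored hardware source can at worst contribute nothing; it cannot control the output unless every other entropy source is also known to the attacker.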


If Matt protested, he did so quietly/privately. I wasn't aware of the fact that he had stepped down until the authors of the paper described in http://factorable.net showed up and pointed out we had a really bad problem for embedded devices on the internet. I had always assumed he had gotten too busy and distracted on other interests, since I do follow LKML, and I didn't see any kind of public debate/controversy about the change to the random driver described above.

If I had to guess what happened, some Intel people pushed this as a feature, probably via one of the x86 git trees, and Linus either (a) didn't notice, or (b) didn't understand the implications, and then Matt quit in a huff -- by simply stopping work, and not even updating the entry in the MAINTAINERS file. (That didn't happen until I took over the random driver again.)

Ah, here's the thread I was looking for:


It doesn't really look like he had NAKed it on paranoia grounds, but more on design grounds; others brought up the paranoia arguments. You were even involved in that thread, so you should have seen his stepping down, although he didn't submit a patch to MAINTAINERS.

You're right, if he did so, it must have been in private; I searched for a while to find a message on a public mailing list about it, and could not, so resorted to linking to that later message.

Regardless, I'm glad that paranoia did eventually prevail, despite Linus's original strong objections.

Sounds like Linus has some explaining to do...

Indeed. Where's Mr Sweary now, eh?

Not only did it happen before, just TODAY I had to fight back an attempt by a Red Hat engineer who wanted to add a configuration option which would once again allow RDRAND to be used directly, bypassing the entropy pool: https://lkml.org/lkml/2013/9/5/212

"It's unlikely that Intel (for example) was paid off by the US Government to do this, but it's impossible for them to prove otherwise --- especially since Bull Mountain is documented to use AES as a whitener. Hence, the output of an evil, trojan-horse version of RDRAND is statistically indistinguishable from an RDRAND implemented to the specifications claimed by Intel. Short of using a tunnelling electronic microscope to reverse engineer an Ivy Bridge chip and disassembling and analyzing the CPU microcode, there's no way for us to tell for sure."


"The NSA's codeword for its decryption program, Bullrun, is taken from a major battle of the American civil war. Its British counterpart, Edgehill"

"N.S.A. spends more than $250 million a year on its Sigint Enabling Project, which “actively engages the U.S. and foreign IT industries to covertly influence and/or overtly leverage their commercial products’ designs” to make them “exploitable.”"

-- http://www.theguardian.com/world/2013/sep/05/nsa-gchq-encryp...


Bull Mountain is Intel's code name for both the RdRand instruction and the underlying random number generator (RNG) hardware implementation.


bull [mountain|hill] [intel|processor]

http://www.googlewhack.com/ https://xkcd.com/936/ http://subrabbit.wordpress.com/2011/08/26/how-much-entropy-i...


Did Linus ever comment on the roll-back?

Because strong encryption can be so effective, classified N.S.A. documents make clear, the agency’s success depends on working with Internet companies — by getting their voluntary collaboration, forcing their cooperation with court orders or surreptitiously stealing their encryption keys or altering their software or hardware.

That's the money quote there -- the NSA hasn't cracked encryption. They've just put back doors in.

And we can't even be that angry at the (e.g.) Microsoft execs that authorise the back doors -- they potentially face jail time if they resist NSA requests. All the while presumably not being able to talk about the requests publicly.

EDIT: and the really fun part - did you know the former head of the NSA serves on the board of directors for Motorola Solutions? http://en.wikipedia.org/wiki/Michael_Hayden_(general)

I'm guessing this is what tripped up Lavabit. Mr. Levison probably didn't have the back doors and balked at being complicit once he came onto the NSA's radar.

From the article: "Intelligence officials asked The Times and ProPublica not to publish this article, saying that it might prompt foreign targets to switch to new forms of encryption or communications that would be harder to collect or read."

Also: “Properly implemented strong crypto systems are one of the few things that you can rely on,” - Snowden

I would assume that because Snowden used Lavabit and they shut down, the NSA took issue with how secure Lavabit actually was.

Plus (from the Guardian article) there are covert agents in all the companies, presumably lifting all the certs, which may well be unauthorised, but you can't prosecute.

Do you know who your covert agents are?

Why couldn't you prosecute, if you found out? I assume theft is still theft, even if done by a government employee.

Prosecuting would require contacting an authority that would be willing to take the case, and would also require going public with the fact that one of your "trusted" employees had invalidated your security systems, potentially opening you up to untold amounts of liability from customers who may believe their security has been compromised.

Go ahead, call the cops and media. I hope you have a ton of money on hand and some jewelry stashed away in various locations before you do so.

Why couldn't who prosecute? That's a function of the government; a prosecutor is not obliged to engage in a case (formally, get an indictment out of a grand jury) whenever they believe a crime to have been committed.

Undercover agents at all levels of law enforcement commit apparently criminal acts every day, with no fear of prosecution. There's no reason for this to be any different.

Good luck with that. You would need the government to prosecute and the government makes it illegal to talk about what you want to prosecute.

Perhaps they could arrest you with the old "interfering with a government agent in the performance of his duties" bit.

> That's the money quote there- the NSA hasn't cracked encryption. They've just put back doors in.

It's not like they've just gotten secret keys. They've specifically gotten chip manufacturers to add backdoors to hardware, as well as significantly influenced actual cryptography standards themselves:

> "The N.S.A. wrote the standard and aggressively pushed it on the international group, privately calling the effort “a challenge in finesse.”"

> “Eventually, N.S.A. became the sole editor,” the memo says.

Presumably that's DRBGs[1]? Does anyone actually use them in that form?

[1] http://en.wikipedia.org/wiki/Dual_EC_DRBG

I don't understand that part of the article, what algorithm standard was it? AES?

That's the quote that jumped out at me too. The solution for those who want to stay out of NSA's reach is to use your own hardware, and use open source software (where it's hard to put a backdoor without being discovered) and strong encryption.

That's a false sense of security. You can inspect every line of code in SSL but unless you are a world-class cryptographer yourself, how will you spot a backdoor in the algorithm?

It's a bit harder to sneak in junk in open source projects. You can see the checkins. However, you are right, if the flaw is in the algorithm itself, it's hopeless.

Is it harder to sneak in junk in open source projects? I'm reminded of Ken Thompson's Turing Award lecture, "Reflections on Trusting Trust". http://cm.bell-labs.com/who/ken/trust.html

Could someone add a backdoor to git that hides backdoors from showing up in git? Could gcc be backdoored to add backdoors to arbitrary software? How likely is it that NSA has a few zero-days lying around they could use to hack into the servers that host git or gcc or any other tool you rely on? What if they had agents among the committers and maintainers of these projects?

Security against a well-armed, well-funded, well-organized, secretive adversary is hard.

Could you spot a "trusting-trust"-style backdoor in an FPGA you were offloading crypto to? How would you even start?

There are countermeasures concerning the Trusting Trust attack: http://www.schneier.com/blog/archives/2006/01/countering_tru..., though I'm not sure if anyone has ever seriously attempted to deploy them.

I have given some thought to this kind of thing, and one thing I realized is that the limits of the trusting trust attack can be exploited as well. Let's suppose you only have one compiler. Now, it is going to try to insert the worm into any compiler it compiles, right? The problem is that it must be able to detect that it is actually compiling a compiler.

This, however, is not a decidable problem. It is possible to construct a program that will fool the worm and thus you can create a compiler that you know you can trust for this test. It will probably be a hard compiler to use, but you will need it at most twice -- once to check for an attack, and if there is an attack once more to bootstrap a clean compiler.

But in order to create a disguised compiler, you need to know what method a compromised system uses to decide whether something is a compiler.

i.e., you actually have to have an example of a compromised compiler, which pretty much solves the problem in the first place.

If you decidedly don't trust the only compiler on your system, and don't trust outside sources, the only solution is to hand-assemble a new compiler on the system, and hope that at least the hardware is trustworthy. Which it isn't, necessarily.

http://underhanded.xcott.com/ <-- a contest all about sneaking bad behavior into code without being noticed

You can't rely on a backdoor looking like this:

    if(!strcmp(username, "secretagentman")) { … }

The Debian OpenSSL snafu showed that even rather blatant changes get missed.
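For a sense of scale: the Debian patch left the OpenSSL PRNG seeded by little more than the process ID, at most 32,767 values, so an attacker could simply enumerate every possible seed. A toy Python analogue (invented for illustration, not OpenSSL's actual code):

```python
# Toy illustration of the 2008 Debian OpenSSL bug: a "harmless" patch left
# the PRNG effectively seeded by the process ID alone, so only ~32,767 seed
# values were possible. An attacker enumerates them all. (Toy PRNG only.)
import random

def broken_keygen(pid: int) -> int:
    rng = random.Random(pid)       # sole entropy source: the PID
    return rng.getrandbits(128)    # a "128-bit" key with ~15 bits of entropy

victim_key = broken_keygen(pid=4242)  # hypothetical victim process

# Attacker recovers the key by trying every possible PID.
cracked = next(k for pid in range(1, 32768)
               if (k := broken_keygen(pid)) == victim_key)
print(cracked == victim_key)  # True, after at most 32,767 guesses
```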

Anything in the leaked docs on that particular incident?

The OpenSSL snafu showed us all that _code commenting_ is a really good idea.

Can we combine multiple algorithms such that having any one of them be secure is safe? For example, instead of encrypting with just RSA, do one pass with RSA and then another pass with ECC. Instead of just using AES, do one pass with AES, another pass with Twofish, and a third pass with RC4. Does that actually help?
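Mechanically, a cascade ("superencryption") is simple: encrypt with cipher A under one key, then encrypt the result with cipher B under an independent key. A stdlib-only toy sketch, where both "ciphers" are hash-based keystreams invented purely for illustration, not real primitives:

```python
# Toy cascade sketch: layer two independent keystream "ciphers".
# The SHA-256-in-counter-mode keystream here is NOT a real cipher;
# it only illustrates the layering and un-layering.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    for offset in range(0, len(data), 32):
        ks = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

key_a, key_b = b"independent key A", b"independent key B"
plaintext = b"attack at dawn"

# Cascade: inner layer under key A, outer layer under key B.
ciphertext = keystream_xor(key_b, keystream_xor(key_a, plaintext))

# Decryption peels the layers off again.
recovered = keystream_xor(key_a, keystream_xor(key_b, ciphertext))
print(recovered == plaintext)  # True
```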

You can do this but it will probably make it less secure, not more.

Nobody seems to know if the NSA actually has practical attacks against primitives like AES or SHA-2. We do know for sure that they go after higher level implementation flaws. The more complex your encryption scheme is, the more likely it is that you'll introduce a grave flaw. It only takes one.

I'd suggest that our best bet already exists: NaCl[1]. It's by Daniel Fucking Bernstein, so the implementation is as flawless as it gets. Better yet, it doesn't use a single US-approved primitive (not even the NIST curves Schneier was warning against in his Guardian piece).

Funnily, before the leaks Bernstein's use of all his own primitives was seen as a bit wacky and concerning, but now it seems almost sensible.

[1]: http://nacl.cr.yp.to

DJB has always been laughed off as an eccentric paranoiac, and yet as the years go by he almost always ends up being proven right. It's been kind of a funny pattern over the past few decades.

Yes, though there are some subtleties, it's fairly straightforward overall.

The reason we don't do that is, of course, CPU cost.

The strength of open source lies in the number of eyes with access to the code.

Perhaps I lack the wherewithal to identify security vulnerabilities in deployed code, but there's a good chance that there are others who are able to spot said vulns.

Actually, we don't know if there's a good chance or not. All the best cryptographers in the world, who aren't already working for intelligence agencies, are reliant on government funding (i.e. they're in academia). The fact that these have gone undiscovered so long suggests that finding them will not be trivial.

Ability to spot algorithmic vulnerabilities on that level? And doing it for free? The chances are nil.

Not completely for free if you use that software, and if you are an expert crypto guy, it is probable that you use that software.

I'm talking about vulnerabilities in crypto software, not in email clients, browsers, or office software (which they probably use too).

Your statement reveals a real lack of understanding of open source. Hint: it's open to all, and "all" includes world-class cryptographers. Each contributes their ability for the greater good of all. World-class cryptographers may not spend their time coding or packaging or whatever; others will.

No, I'm afraid it is you who are mistaken... About a great many things.

Now, witness the power of my fully operational Utah data center.

Remember when Microsoft would trash Linux because it was open source and "not secure"? Well, this settles it. Using your own hardware and open source software helps, but someone determined will still get in...

Even "your own hardware" is going to be pretty damn hard:

> working with chipmakers to insert back doors

So you're going to need to make your own chips, too.

RMS doesn't seem such an extremist any more.

I don't agree with RMS on much, but this just goes to show that calling people "extremist" is a logical fallacy. There is only correct and incorrect, and the margin by which something deviates from a commonly accepted norm is irrelevant to judging that.

Refusing to rely on the RNG in Intel processors doesn't seem particularly unreasonable in the light of this revelation does it?

The following isn't directly applicable to your suggestion, but it's a reminder that an FPGA, just like a CPU, may not be doing exactly and only what you told it to do:


> Abstract. This paper is a short summary of the first real world detection of a backdoor in a military grade FPGA. [....] The backdoor was found to exist on the silicon itself, it was not present in any firmware loaded onto the chip. [....]

Isn't this just the NSA having access to internet companies' private keys for their SSL certificates, thereby giving them the tools to decrypt the initial TLS handshake, recover the symmetric key, and decrypt the rest? Or is there more to it? Reading the article, I didn't see any hard proof of this.

>the Bullrun program, the successor to one called Manassas — both names of American Civil War battles. A parallel GCHQ counterencryption program is called Edgehill, named for the first battle of the English Civil War of the 17th century.

Spying on your own citizens codenamed as civil war. How nice.

>Only a small cadre of trusted contractors were allowed to join Bullrun. It does not appear that Mr. Snowden was among them, but he nonetheless managed to obtain dozens of classified documents referring to the program’s capabilities, methods and sources.

Once again, the people spying on everyone suck at keeping their own secrets. How many others have taken the information with them and sold it off instead of leaking it?

>In one case, after the government learned that a foreign intelligence target had ordered new computer hardware, the American manufacturer agreed to insert a back door into the product before it was shipped,

If you're a non-US company how can you keep trusting US IT vendors? I wouldn't want to be one of these companies' reps at Airbus for example.

> Spying on your own citizens codenamed as civil war. How nice.

Nowhere in the article does it state that these methods can be used against US persons separate from other protections against surveillance on US persons, nor does it give the impression that this is special to US persons:

The agency’s success in defeating many of the privacy protections offered by encryption does not change the rules that prohibit the deliberate targeting of Americans’ e-mails or phone calls without a warrant.

Let's keep in mind the fact that an intelligence agency is built to gather intelligence on other governments/organizations and that often involves breaking other jurisdiction's rules.

"The agency’s success in defeating many of the privacy protections offered by encryption does not change the rules that prohibit the deliberate targeting of Americans’ e-mails or phone calls without a warrant."

Rules which are enforced internally, with an inspector general chosen by the same executive branch that commands the NSA's leadership. Yes, we can really rely on these rules when push comes to shove.

>the rules that prohibit the deliberate targeting of Americans’ e-mails or phone calls without a warrant

The previous leaks show these rules to not be particularly effective. For me and most of the world that distinction is irrelevant anyway. The position of the US government is that it can order its tech companies, with whom I have contractual/financial relationships to give them all my data with no warrant.

You must have been living under a rock for the past few months. Welcome to September, where we now know that to not be the case.

I know we're not supposed to make these kinds of comments in HN, but yours made me laugh.

Welcome to September, where the NSA can keep any data it accidentally collected about US persons for five years if it's plaintext. And if that data is encrypted, it can keep data on US persons forever.

Humorously, the United States Army lost both Battles of Bull Run to the Confederates.

What's in a name? Perhaps your intended targets. Can't wait to see Project Auschwitz.

This is really damaging.

Not only will this cause other countries to put up barriers against US (and UK) services and products, it's going to affect uptake of standards developed here.

On the lighter side, a treasure hunt was just announced. Can you find one of these vulnerabilities, or evidence of the NSA having attacked a particular system to steal keys?


[Edit 1] Some speculation:

By careful hardware design -- and lots of it -- the NSA may be able to brute-force keys of sizes that would leave us mildly surprised but not shocked. It's not well known that searching for many keys in parallel amortizes well -- it's much cheaper than finding each key individually. DJB has a great paper about this:
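The amortization is easy to see in a toy model: one sweep of the candidate keyspace is checked against all targets at once via a set lookup, so the cost per recovered key falls as the number of targets grows. The 20-bit "cipher" and victim keys below are invented for illustration:

```python
# Toy sketch of batch key search: the expensive part (enumerating the
# keyspace) is done once and shared across ALL targets, so per-key cost
# drops with the number of targets. Toy 20-bit "cipher" via SHA-256.
import hashlib

def encrypt(key: int, plaintext: bytes) -> bytes:
    return hashlib.sha256(key.to_bytes(4, "big") + plaintext).digest()

KNOWN_PLAINTEXT = b"hello"
secret_keys = [12345, 67890, 424242]   # hypothetical victims' keys
targets = {encrypt(k, KNOWN_PLAINTEXT) for k in secret_keys}

# One sweep of the 20-bit keyspace recovers every target simultaneously.
recovered = [k for k in range(2 ** 20)
             if encrypt(k, KNOWN_PLAINTEXT) in targets]
print(sorted(recovered) == sorted(secret_keys))  # True
```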


If I were looking for subverted hardware, I'd be really interested in reverse engineering Ethernet chips and BMCs. The CPU would be an obvious choice as well -- could there be some sequence of instructions that enables privilege escalation?

On protocols, the best sort of vulnerability for the NSA would be the kind that is still somewhat difficult and expensive to exploit. They want the security lowered just far enough that they can get the plaintext, but not so far that our adversaries can.

There is some history with not taking timing attacks seriously enough. Perhaps careful timing observation, which the NSA is well positioned to do, could give more of an edge than we suspect. Or perhaps you could push vendors to make their products susceptible to this kind of attack, secure in the belief that it may be difficult for others to detect.

[Edit 2]

I gave a talk that discussed what I think we as engineers should do here:


And Phil Zimmermann and I discussed a number of these issues in a Q&A session:


I would not be at all surprised to learn that the major advance these disclosures refer to is an on-demand RSA-1024 factoring capability. RSA-1024 is already known to be unsafe (Eran Tromer estimates a 7 figure cost for a dedicated hardware cracker, which is approximately the threshold DES was at in the late '90s, when nobody believed DES was secure). On-demand offline RSA-1024 attacks would have major implications, would be a huge advance in the state of the art, but also seems feasible given an effectively unlimited budget.

That makes sense. I think it unlikely they've discovered an actual breakthrough. They do have their own fab; how many chips do you need to build to make that worthwhile? It's the US government after all; a machine with 10 million specialized RSA chips doesn't seem impossibly difficult, just expensive.

Governments are big, dumb animals, so make whatever you're trying to protect very expensive ($20-50 Billion range) to brute-force within usability constraints.

Btw, apart from the scrypt paper, has anyone put together a practical guide to crypto parameter brute-force costs? (Say, volume pricing of gear and ASICs in huge quantities.)

I think we know very well which encryption has been foiled by the NSA. This is not speculation, but quasi-certainty: 1024-bit RSA.

- Cryptographers all acknowledge 1024-bit RSA is dead [1].

- Attack cost 10 years ago was estimated to be a few million USD to build a device able to crack a 1024-bit key every 12 months [2].

- "Much of" the "secure" HTTPS websites use such weak key sizes [3].

- NSA had a budget of 10.8 billion USD in 2013.

Drawing a conclusion is not very hard.

[1] http://arstechnica.com/uncategorized/2007/05/researchers-307... [2] http://www.cs.tau.ac.il/~tromer/twirl/ [3] https://www.eff.org/pages/howto-using-ssl-observatory-cloud

Some popular browsers still do not support newer versions. We tried turning this on with a newer, more secure key and ended up having downtime for some customers.

Which browsers in particular?

Android Browser on Google TV, and Java libraries hitting our APIs. Google TV and Android browsers are critical to our business.

I have written lots of Java code accessing HTTPS sites with 2048 or 3072-bit RSA. This is perfectly supported. You do not even need the Unlimited Strength Jurisdiction Policy Files to use such RSA key sizes (other algorithms are restricted).

I can't comment on Android Browser on Google TV, but I very highly doubt it fails to support 2048-bit RSA keys. If that was the case, half the HTTPS websites would be unbrowsable(!) [1]

[1] Per the EFF SSL observatory dataset, roughly 1 in 2 websites uses key lengths strictly higher than 1024 bits.

We had downtime for this, so I am 100% sure. We isolated it to the key, and reverting the cert/key back to 1024 fixed it. It was just an option on GoDaddy one of the engineers picked to generate a 2048 cert. They only offer 1024 and 2048. One key worked, the other didn't.

It must have been something else that broke it, not the key size. Android Browser definitely supports 2048-bit RSA certs. Maybe a root cert was absent from the browser (GoDaddy would be using a different root for 2048-bit certs?). Or maybe intermediate certs were missing in the certificate path. It sounds like your engineer did not spend much time trying to figure out what aspect of SSL/X.509 was actually causing the problem.

There were no problems with accessing the site with Chrome or other modern browsers. What you described would have been a problem with all browsers, and anyway GoDaddy supplies all the files you need in a single zip file, including the intermediate certs. We did simply revert the SSL key, once we isolated that to be the problem. There is no pressing business need for a 2048-bit key.

That is incorrect. Mobile browsers, the JVM, etc, notoriously lag behind desktop browsers when it comes to updating the list of root certs (and intermediate certs too, but that seems irrelevant in your case). The consequence is that a site can be accessed from the desktop, but not from a mobile.

It was a recurrent problem at a previous job with a Java app accessing HTTPS sites. We could not always update the JVM (which comes with the most recent list of roots in "cacerts"), so we had to develop a solution to push the latest cacerts truststore to our application. Problem fixed.

Do you know if Android Browser users reported at least an ability to click through an SSL warning to get to the site?

Yes, now I remember; I think you are correct. They were able to click through the SSL warning, but because we use socket.io they had additional problems. Some of our customers do not employ full-time engineers, and whatever scripts they were running against our API used libraries that couldn't handle the SSL cert change and couldn't easily be updated. We couldn't ask our customers, mostly sales and customer service oriented directors, to handle a complicated certificate change either.

Very unlikely. Virtually all browsers support 2048-bit RSA. Keys larger than 2048 bits, however, are not always supported (which is probably what you tried).

I am confused. When I see HN or facebook certs they show 128 bit encryption in the browser box. 128 bit seems pretty low.

In that case, you are seeing the size of the symmetric key being used. The bigger numbers mentioned above (1024, 2048, etc) are referring to the size of the public/asymmetric keys. The public keys are only used to set up the initial exchange of symmetric keys, which are then used to secure your browser's encrypted connection.

This is the size of the AES key. AES is a symmetric algorithm, and 128 bits are still considered solid there, although the trend is moving towards 256 bits. What we're talking about here is the key size of RSA, which is an asymmetric algorithm. If you don't know the difference, go find a basic crypto tutorial. As you can read above, 1024-bit RSA is probably broken. I wouldn't trust 2048-bit too much either. Also, progress in breaking RSA is happening a lot faster than with AES.

In the context of SSL, an asymmetric algorithm like RSA is used to exchange symmetric keys, which are used afterwards.

That said, 256-bit isn't really that much of an improvement for AES - it's favored since that's the US standard for Top Secret classification, but in practice any attack which brings down AES-128 will almost certainly get AES-256 as well. I've switched most of my SSH servers over to default to 128-bit AES ciphers, since the difference in difficulty seems small enough that it won't matter if someone actually tries targeting it and can succeed.

128 talks about symmetric key encryption, not RSA.

I'm not sure what crypto best practice is regarding 128-bit keys for symmetric crypto, but presumably it would depend on the cipher used.

Probably AES, not RSA. IIRC a 128-bit AES key is about equivalent in security to a 2048-bit RSA key.

2048 bit RSA is usually described as roughly equivalent in security margin to a 112 bit symmetric key, and 3072 bit RSA to be 128 bit symmetric equivalent.
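Those equivalences come from NIST SP 800-57 Part 1 and can be captured in a small lookup table, handy for sanity-checking a browser's "128-bit" indicator against the RSA key actually protecting the handshake:

```python
# Approximate symmetric-equivalent security strengths for common RSA
# modulus sizes, per NIST SP 800-57 Part 1 (sizes in bits).
NIST_EQUIVALENT_STRENGTH = {
    1024: 80,    # long considered within reach of well-funded attackers
    2048: 112,
    3072: 128,
    7680: 192,
    15360: 256,
}
print(NIST_EQUIVALENT_STRENGTH[2048])  # 112
```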

The article you linked to in [1] doesn't explicitly say that generalized 1024-bit RSA is dead. They found a way to exploit a special case key (Mersenne number keys). Searching around the internet, I found a bunch of articles about supposed cracks, but they all involved additional sources of information. I'm not doubting that the NSA has found ways to crack all sorts of crypto, but is there really a known way to break 1024-bit RSA without other special qualifiers?

When asked whether 1024-bit RSA keys are dead, Lenstra said: "The answer to that question is an unqualified yes."

Generalized 1024-bit RSA keys are dead. Lenstra is making a comment on generalized 1024-bit RSA keys in this sentence. Not on Mersenne number factorization (which is, yes, the main topic of this article).

My link [2] tells you concretely how to break 1024-bit RSA and estimates the cost to $10M, well within NSA's capabilities.

Considering http://www.wired.com/threatlevel/2012/03/ff_nsadatacenter/ I wouldn't bet on "deep encryption" either ...

There are two very different questions:

1. What are the chances that crypto will keep the NSA out of your communications generally? and

2. What are the chances that crypto will keep the NSA out of your communications if they really, really want to read them?

Those questions are very different. I wonder, for example, what would happen if all internet traffic was encrypted end to end with something as weak as DES. Could the NSA brute force it? Of course. Could they brute force all of it? Doubtful.

One of the clear in-between-the-lines things in the article is that crypto is still problematic to the point where the NSA prefers to attack endpoints and get access that way instead of attacking the crypto itself.

One of the vulnerabilities was already discovered by researchers in 2007: http://rump2007.cr.yp.to/15-shumow.pdf

At the time, it wasn't clear if this was a deliberate backdoor or an accident, but it was proven that there was a possibility that a secret key existed that would allow someone to predict future values of the pseudorandom number generator based on previous values. Now it looks pretty clear that it was a deliberate backdoor.
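The shape of that backdoor is easy to sketch in a toy analogue that swaps elliptic-curve point multiplication for modular exponentiation (all numbers below are illustrative; the real generator in NIST SP 800-90A works over the P-256 curve with points P and Q):

```python
# Toy analogue of the Dual_EC_DRBG trapdoor, with modular exponentiation
# standing in for elliptic-curve scalar multiplication.
p = 2**31 - 1          # a Mersenne prime (toy-sized modulus)
P = 7                  # public "base point"
e = 5                  # designer's secret relating Q to P; gcd(e, p-1) = 1
Q = pow(P, e, p)       # the second public parameter
d = pow(e, -1, p - 1)  # the trapdoor: d = e^(-1) mod (p-1)

def step(state):
    """One generator step: emit an output, advance the hidden state."""
    output = pow(Q, state, p)      # visible to everyone
    next_state = pow(P, state, p)  # supposedly secret
    return output, next_state

# An honest user draws two outputs.
s = 123456789
r1, s = step(s)
r2, s = step(s)

# Whoever knows d recovers the hidden state from a single output:
# r1^d = Q^(s0*d) = P^(e*d*s0) = P^s0 mod p, by Fermat's little theorem,
# and P^s0 is exactly the next internal state.
recovered_state = pow(r1, d, p)
predicted, _ = step(recovered_state)
print(predicted == r2)  # True: one output predicts all future ones
```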

This really reduces trust in US based cryptographic standards. And US based cryptographic hardware, as they mention in the article that they convinced hardware manufacturers to insert backdoors for hardware shipped overseas.

This is almost definitely not "one of the vulnerabilities" implicated in the story today, because nobody uses CSPRNGs based on Elliptic Curve.

So what was the vulnerability found by MS in 2007 that they are referring to? (Search for 2007 in the single page version at http://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet...)

Edit: reading in more detail around there, I am pretty sure that section of the article is referring to the CSPRNG vulnerability above. The article covers a lot of ground, and not all of it is about problems with SSL. That particular section seems to be arguing that the NSA is trying to put backdoors into standards wherever it can.

I don't know. I'm just saying, weakening a CSPRNG design that nobody uses or is ever likely to use (it's extremely expensive) is not a particularly meaningful action.

It sounds an awful lot like that's the one the New York Times was describing. Can you think of any other standard published by NIST in 2006 in which two Microsoft researchers discovered a flaw in 2007? That sounds exactly like Dual_EC_DRBG.

> Simultaneously, the N.S.A. has been deliberately weakening the international encryption standards adopted by developers. One goal in the agency’s 2013 budget request was to “influence policies, standards and specifications for commercial public key technologies,” the most common encryption method.

> Cryptographers have long suspected that the agency planted vulnerabilities in a standard adopted in 2006 by the National Institute of Standards and Technology, the United States’ encryption standards body, and later by the International Organization for Standardization, which has 163 countries as members.

> Classified N.S.A. memos appear to confirm that the fatal weakness, discovered by two Microsoft cryptographers in 2007, was engineered by the agency. The N.S.A. wrote the standard and aggressively pushed it on the international group, privately calling the effort “a challenge in finesse.”

> “Eventually, N.S.A. became the sole editor,” the memo says.

Now, that may not have been an effective technique, as you point out it's so slow that no one is ever going to use it, and this vulnerability was discovered not long after it was published.

So, that's obviously not a vulnerability that they are actively exploiting. If they are actively exploiting a vulnerability that they introduced, it must be something else. It wasn't clear from the article that that's actually the case; it may be that the vulnerabilities they are exploiting are ones they've found, not introduced deliberately.

But it does appear to be an example of a vulnerability that they were able to get standardized, in the hopes of being able to exploit it. Until now, it has been only speculation that it was a deliberate vulnerability, but it now seems clear that it was.

Sure. I think we agree. If "it" is a crypto weakness they are actually exploiting, "it" is not Dual-EC DRBG.

Ah, yes, I wasn't trying to say they were exploiting that particular vulnerability. Just that we now have better evidence that that really was a (rather poor) attempt to subvert standards to make them easier to decrypt.

The NSA seems to be really divided between SIGINT and COMSEC. COMSEC wants to provide good, strong encryption, that can help secure US government and corporate communication. SIGINT wants to be able to read everyone's traffic.

For example, they changed the DES s-boxes in a way that made it more secure against differential cryptanalysis. They've released SELinux. There is a part of the NSA that does actually try to make encryption standards stronger.

But then there's the part that advocates for the Clipper chip, advocates for controls on exporting strong crypto, or strongarms NIST into standardizing Dual-EC DRBG. And that part does real damage, as everyone suffers from the weak export crypto (either people overseas have to work on strong crypto, or products are released with weak or no crypto because regulatory compliance is too complicated), or people stop trusting US software and hardware.

The NSA seems to be doing some real damage to technology companies in the US. I had thought that they had gotten better about it, after they gave up on the clipper chip and lifted most of the export controls, but it looks like I was wrong, they've just decided to take more covert routes to do the same thing, with the hope that none of the tens or hundreds of thousands of people who could find out about it would leak that information.

Nitpick: they changed DES's S-boxes. DSA doesn't have S-boxes. Skepticism about NSA's involvement in any crypto standard (a decade ago!) led NIST to document precisely the mechanism used to generate DSA's parameters.

I think maybe it's the fact that I started in the industry during the era of Clipper that stuff like this doesn't faze me much.

Gah. I even noticed that typo while writing it, then forgot about it after having edited another part of my comment. Yes, I meant DES, not DSA.

As you may well know, the NSA has its own ciphers (Suite A) it uses for top secret classified traffic, which to me is positive proof you can't trust anything they recommend (AES) - when they don't even use it themselves.

Not positive proof -- merely suggestive. Which is true of a lot of things in the secret world of the intelligence services.

The more you use a secret cipher, the easier it is to break. It is simply good operational practice to use a different cipher for a small fraction of communications -- namely, the most secret ones. Just like certain antibiotics are reserved for drug-resistant organisms. You don't want it to lose its effectiveness through overuse.

First, security-by-obscurity does indeed buy you some additional time. Because the cipher is secret, your opponent has to figure out the algorithm as well as the attack.

Second, this reduces the amount of traffic that the opponent can analyze. For example, suppose that only 1% of messages use Suite A, and 99% of messages use Suite B. With fewer messages to analyze, the job of breaking the cipher becomes much harder.

Third, the reduced volume also makes known-plaintext attacks more difficult. Especially if you avoid committing the cardinal sin of repeating the same message using two different ciphers.

I'm not sure that Suite A is actually stronger than Suite B. In fact, it may be weaker, for practical reasons (efficiency of encrypting high-bandwidth streams in resource-constrained devices), and so they are relying on an additional layer of security-through-obscurity to help keep it safe for longer.

There is some information known about some of the algorithms. Wikipedia has pages on BATON https://en.wikipedia.org/wiki/BATON and SAVILLE https://en.wikipedia.org/wiki/SAVILLE. You may notice that these are frequently used for hardware implementation in radios, smart cards, encrypting video streams, etc; devices that are probably fairly resource constrained, and would be hard to replace with new hardware if attacked.

If you look at the description of BATON, it has a 96 bit Electronic Code Book mode. Yes, ECB, the one that is famous for leaking information, as you can tell which blocks are identical and get a good deal of information out of that.
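The leak is a property of the mode, not of any particular cipher; a toy sketch (the keyed-hash "block cipher" here is invented for illustration):

```python
# Toy demonstration of why ECB mode leaks structure: identical plaintext
# blocks encrypt to identical ciphertext blocks, under any block cipher.
import hashlib

def toy_ecb_encrypt(key: bytes, data: bytes, block: int = 16) -> list:
    # Each block is encrypted independently -- the defining flaw of ECB.
    return [hashlib.sha256(key + data[i:i + block]).digest()[:block]
            for i in range(0, len(data), block)]

key = b"secret key"
# Two identical 16-byte blocks followed by a different one.
msg = b"A" * 16 + b"A" * 16 + b"B" * 16
blocks = toy_ecb_encrypt(key, msg)

# An eavesdropper learns which plaintext blocks repeat, without the key.
print(blocks[0] == blocks[1], blocks[0] == blocks[2])  # True False
```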

But even with fairly efficient hardware implementations, adversaries have been able to use off-the-shelf software to intercept Predator drone video feeds because encryption was disabled for performance reasons: http://www.cnn.com/2009/US/12/17/drone.video.hacked/index.ht...

The NSA has approved both Suite A and Suite B for top-secret material. I really don't think that they have any worries about the security of Suite B (though as Schneier points out, you may want to be a bit paranoid about their elliptic curves, as it's possible that they have ways of breaking particular curves that other people don't, like they did with the Dual EC DRBG that they promoted). I suspect that Suite A is around for legacy reasons, as they have been implementing it for longer than Suite B has existed and many of the implementations are in hardware or otherwise difficult to update.

They use AES when dealing with "outsiders", but I have trouble believing that they use it internally (and instead use the Suite A ciphers -- it's impossible for others to break them if they don't know anything about them, right?)

> (and instead use the Suite A ciphers -- it's impossible for others to break them if they don't know anything about them, right?)

It is possible to break an unpublished cipher. Just more difficult, because you've got to figure out the algorithm as well as the key. As long as it is similar to existing algorithms, you can try and look for differences.

For example, American cryptanalysts broke the Japanese Purple cipher during World War II entirely from encrypted messages. It was only at the end of the war that they managed to recover parts of one machine from the Japanese embassy in Berlin. No complete machine was ever found.

(In contrast, Enigma machines were captured, so cryptanalysts could directly examine the mechanism and use this knowledge to look for weaknesses.)

Of course, if the algorithm is completely novel, and bears no resemblance even to any principle used in published cipher, then it's a lot more secure. It would be hard to even begin to analyze it.

That said, it's unlikely the NSA has truly novel algorithms. They recruit from the general public like everyone else. Their principal advantage is that they're big (working for the NSA is appealing) and can classify in-house breakthroughs.

> The NSA seems to be really divided between SIGINT and COMSEC.

Anybody care to guess which group was responsible for the FUBAR that gave Snowden the keys to the kingdom?

Heh. It would be funny if people in COMSEC actually allowed this material to be leaked because they were disgusted about SIGINT putting so many vulnerabilities into publicly available crypto, and wanted to let it be revealed to stop that practice.

Unlikely, though. More likely that Snowden was just acting on his own. And he didn't really have "the keys to the kingdom"; just more access to a fileserver that had lots of PowerPoints on it than he should have had. If you note, almost everything that has leaked so far is PowerPoints where various branches of the NSA describe to each other and other government agencies what capabilities they have, but not the actual details of those capabilities. He probably had access to some fileserver used by the higher level executives at the NSA, but they do compartmentalize information and as they mentioned were very secretive about exactly what those vulnerabilities consist of, so there's a good chance that he didn't have access to systems where that was described.

This may be a case of the Times assuming that since 1.5 rounds to 2, 1.5 + 1.5 = 4. "The NSA breaks crypto" + "The NSA backdoored Dual-EC DRBG" = "The NSA breaks crypto via backdoored Dual-EC DRBG".

Not a crypto expert at all, but did they know in advance that nobody would use it? Otherwise it could just be a failed attempt.

I don't know what they expected, but Dual-EC is self-evidently noncompetitive.

A RNG that is reducible to a different believed-hard problem has possible features, so it's not like there could never be a reason for someone to choose this generator. What we could be seeing is the discovery of one failed attempt of a shotgun approach to promulgate insecure primitives. It's hard to know what will happen to become commercially successful, so spray and pray.

Something this blatant does seem like a severe misstep, but perhaps what led to discovery of this case is the wide body of public knowledge on number theoretic crypto. The energy of the public sphere seems mostly devoted to studying problems with interesting mathematical structure. Symmetric crypto has been around a lot longer, and is sufficient for state security purposes, so one would expect the NSA to have a deep analytic understanding of it (hence the differential analysis olive branch). It's not hard to imagine that they'd have ways of creating trapdoor functions out of bit primitives, generating favorable numbers with plausibly-impartial explanations, etc.

> A RNG that is reducible to a different believed-hard problem has possible features

I think you got it backwards... shouldn't you reduce hard problems down to the problem whose difficulty you're trying to understand?

Yep, I misspoke. I simply meant 'is based on', and shouldn't have used big words so cavalierly.

Nobody uses them because they came out of the NSA with little precedent in the open literature, and independent analysis quickly uncovered this vulnerability.

Also nobody used it because it's a CSPRNG that requires bignum multiplication.
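To make that performance gap concrete, here is a rough timing sketch. This is not Dual EC itself: a SHA-256 chain stands in for a hash-based CSPRNG, and modular exponentiation stands in for the per-block elliptic-curve scalar multiplication Dual EC requires; the modulus and constants are arbitrary illustrative choices.

```python
import hashlib
import time

P = 2**521 - 1   # large prime modulus (illustrative choice)
G = 3

def hash_prg(seed: bytes, blocks: int) -> bytes:
    """Generate output by iterated hashing (cheap per block)."""
    out, state = [], seed
    for _ in range(blocks):
        state = hashlib.sha256(state).digest()
        out.append(state)
    return b"".join(out)

def bignum_prg(seed: int, blocks: int) -> bytes:
    """Generate output via modular exponentiation (expensive per block),
    a stand-in for Dual EC's per-output elliptic-curve point multiplication."""
    out, state = [], seed
    for _ in range(blocks):
        state = pow(G, state, P)   # bignum work for every output block
        out.append(state.to_bytes(66, "big"))
    return b"".join(out)

t0 = time.perf_counter()
hash_prg(b"seed", 1000)
t_hash = time.perf_counter() - t0

t0 = time.perf_counter()
bignum_prg(12345, 1000)
t_big = time.perf_counter() - t0

print(f"hash-based: {t_hash:.4f}s  bignum-based: {t_big:.4f}s")
```

On typical hardware the bignum version is orders of magnitude slower per byte of output, which on its own makes Dual EC an odd engineering choice.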

Imagine this were proved in France: it would add some weight to the investigation and case against the US, especially if they could prove personal encrypted information was stolen!


> but not so far that our adversaries can.

Please clarify what you mean by "our".

Please clarify what you mean by "adversaries".

Come now, we know enough about the NSA at this point to know that

our, adversaries = America, !America


The NSA are "our" adversaries.

I think that "our = {NSA,U.S. government}, adversaries = !our" is more accurate nowadays.

If you are not the U.S. government, you are their adversary (even if you are a U.S. business or citizen).

In this context, an adversary is any encrypted data they want to be able to decrypt. Anything.

Adversary in security means anyone you don't want reading your data.

In this PDF[1], they discuss security issues in Intel chips. They mention strange responses from Intel. It's also possible, though very hard, to exploit those issues, which is optimal in this case.


Forgive me if I don't open a PDF from a .ru domain. (and yes, I know how silly that response is)

If you were actually interested in the content, there are countless ways to open a PDF safely.

* Open the PDF in a non-Adobe reader such as Foxit or Sumatra with JavaScript disabled

* Both Firefox's and Chrome's internal PDF viewers ignore JS

* You can preview a PDF in Google drive

* Open the PDF in a sandboxed VM.

Judging from your history of spamming one-line pointless comments, you probably already know this.

You are right. I didn't know that PDF.js in Firefox was somewhat safer, though.

What's truly frightening is this line from the Guardian's article on the topic:

> The NSA describes strong decryption programs as the "price of admission for the US to maintain unrestricted access to and use of cyberspace".

What does that even mean? That statement is at the same time paranoid, arrogant, and subtly threatening. It's as if to say that without the ability to decrypt interesting traffic, the NSA would be forced to take stronger measures to curtail internet traffic.

Look at what's happening in the UK, in Australia, in France, in Italy, in Spain... the Chinese model is winning hearts and minds of politicians everywhere, and how could it not? If you're into politics, you likely want to reach a Platonic ideal of harmonic society, where nobody is offended, nobody is threatened, and all laws are perfectly respected and enacted. You can't have that on a fully-open network. How can you keep your people from enduring child porn and Islamic propaganda, without censorship?

So most states are slowly moving towards implementing their own little firewalls. The only notable absence? The US. Despite occasional campaigns from religious nutters of various sizes and shapes and continuous pressures from commercial telcos, subsequent US administrations repeatedly affirmed that fundamental Net freedoms would not be curtailed.

This document states that such a position is not coming from idealism or even commercial convenience: it's a way to persuade the rest of the world to do business over networks and protocols that the NSA can tap at will. Should this capability be forcefully contained, there wouldn't be a political incentive to keep the Net flowing freely through US routers.

It's a perfectly reasonable and plausible position, and that's why it's so terrifying.

Completely off-base. The US has, by longstanding tradition, had a more expansive attitude towards free speech than Europe. Consider blasphemy laws in the UK, which were only abolished in 2008 but would never have been constitutional in the US. Consider laws against Holocaust denial or displaying Nazi symbols in continental Europe that would be unconstitutional in the US. In Germany you can be arrested for displaying a swastika. In the United States, the courts (in National Socialist Party of America v. Village of Skokie) allowed a Nazi group to march through a neighborhood populated largely by Jewish Holocaust survivors. None of these have to do with controlling networks. They have to do with the first amendment and with both jurisprudence and attitudes towards freedom of speech that are different in the US than in many other countries.

> The US has, by longstanding tradition, had a more expansive attitude towards free speech than Europe...Consider laws against Holocaust denial or displaying Nazi symbols in continental Europe that would be unconstitutional in the US. In Germany you can be arrested for displaying a swastika.

These laws were included in the German constitution following the "denazification" of Germany by the USA, where Nazi symbols were banned and literature burned.

The laws against Holocaust denial and Nazi symbols were pretty much forced by the USA. It's extremely ironic how often they're mentioned as an illustration of the USA's devotion to free speech.

> The laws against Holocaust denial and Nazi symbols were pretty much forced by the USA.

So why doesn't Germany remove the laws now that they've served their wartime reconstruction purpose?

And that is why they were put in, the same reason that even in the U.S. free speech was curtailed in many areas during the American Civil War.

Are you asking why a German politician doesn't start a campaign seeking to alter the German constitution in favour of allowing Nazism?

I think you know the answer to that one ;-)

Maybe. In practice, however, even in the US there are and there have been censorship instruments, from the FCC all the way to Sen. McCarthy and Hoover. Today, US military personnel and civil servants are "protected" from wikileaks material by blocks at the network level. Federal pornography filters have been proposed several times, and on occasions it looked like they would become a reality. The US Constitution might be more benevolent than average on freedom of expression, but it doesn't mention TCP/IP anywhere.

I think the key to understanding this is to remember that it was written by the NSA for the understanding of the NSA, or other highly authorized eyeballs in the government.

In many government documents, use of the name "U.S." is shorthand for the U.S. national government, not the entirety of the nation. Sometimes it is even shorthand for the particular agency that authored the document (since, in theory, they represent and act on behalf of the entire nation).

So what this internal NSA document most likely means by "unrestricted access and use" is the NSA's unrestricted access to, and use of, whatever data they want.

Think of it like a budget justification (since that is the purpose of at least half of all internal government reports). "You need to keep spending a lot of money on this program if you want us to keep getting all that data you like so much."

That was what caught my eye also. It seems to imply that if they can't read our Internet traffic then they'll have to take the US off the Internet. That's a pretty drastic threat.

It means there are two choices for America's participation in the global internet: decryption capabilities or America's Great Firewall.

The statement implies that in the absence of "strong decryption programs" then there would be only restricted access to and use of cyberspace. I'm sure the intelligence leadership in the US Government look at China's Great Firewall with both trepidation and admiration.

It makes sense if "the US" = "the NSA". Stuff's been encrypted, for the NSA to continue to have access, they've gotta break it.

Up until very recently, the received wisdom was: the crypto wars are over, we fought the law and the law gave up, the NSA has quit trying to crack encryption, they have decided the USA is best strengthened by having a reliable internet which business rival nations can't just read like the morning's news. The NSA knows the problems in crypto and their suggestions make it stronger against attacks we don't know. Trust the NSA.

Would that it were true! It would make sense. This makes no damn sense. Just recently I would have ruled out huge conspiracies as implausible because they inevitably leak (roll save against ethics how many times?). The joke's on me, folks. The NSA has no sense. And the conspiracy leaked.

So now every single decision that was taken with help from the NSA (SELinux, TLS, elliptic curves, etc) needs unpicking and running by a cryptographer who isn't a shill. What a damn drag. And meanwhile, the aftershocks will run for years trashing trust in the networked economy.

Fuckin' brilliant, NSA. You screwed the pooch. You accidentally the whole internet.

    So now every single decision that was taken with help
    from the NSA (SELinux, TLS, elliptic curves, etc) needs
    unpicking and running by a cryptographer who isn't a
    shill.

Cryptographers have already been looking very carefully at everything that comes out of the NSA. Lots of security researchers, in and out of the US, would love to find NSA-introduced flaws.

It would even be ironic if people's aversion to things like SELinux caused them to use software which is even less secure, and correspondingly easier for NSA to break. They know the long game too...

Note that SELinux isn't crypto - it's code to implement mandatory access control, which is just regular bitbashing that any software engineer is completely qualified to audit.

>they inevitably leak (roll save against ethics how many times?)

Haha, nicely put. Note too that sooo many "roll save against temptation" must happen to avoid abuses of the NSA capabilities.

Reminds me of this: http://marc.info/?l=openbsd-tech&m=129236621626462&w=2

As someone who has been following the NSA and government monitoring of online activity for close to 15 years the Snowden leaks just keep taking the wind out of me. It's like everything that we thought might be going on was actually going on. When Theo de Raadt wrote the above mail I, like many at the time, assumed it was tinfoil hat territory. I was clearly wrong.

In that particular instance you weren't wrong[1], but that's the problem when stories like this come out, is that it makes it much harder to know what's a crazy conspiracy theory and what's real.

[1] Those claims made by Greg are completely untrue. I ran the professional services group for that company and will happily attest to whomever asks that at no time did we insert a backdoor (or anything that could even be construed as such) into IPSEC.

>Those claims made by Greg are completely untrue. I ran the professional services group for that company and will happily attest to whomever asks that at no time did we insert a backdoor (or anything that could even be construed as such) into IPSEC.

Somehow I doubt if you did that you could tell us. You might even have to lie to be able to comment on that letter at all.

I'm still unclear on the government's ability to compel falsehoods (even the discussions around National Security Letters seem to indicate that they prevent disclosure, but can't require lying), but I don't think I can convince you of that.

When all the hullabaloo around the alleged IPSEC backdoor occurred, it was frustrating to not be able to be as open about it as I wanted (not because of any government/security issues, but because at the time I still worked for the company and we were advised against talking about it).

You are free to assume that even right now as I type this, a shadowy figure in an ill-fitting Brooks Brothers suit is standing over me dictating my responses, and then chastising me for spending my time on HackerNews.

The effect of all of this is mistrust and suspicion of nearly everything. Which, compared to the alternative of implicitly trusting nearly everything, may not be a bad thing.

I'm hoping open development models(open source, peer production, peer review) end up providing the correct institutional incentives for us to innovate away the mistrust.

To misquote Linus: “given enough eyeballs, all backdoors are shallow.”

Normal people don't need 256-bit symmetric encryption. That's assault encryption and should only be used on the battlefield. 40-bits is enough and anything over that should be banned.

I'm only joking, but the same argument is used against other technologies that governments seek to control/dominate.

Edit: Skipjack was 80-bits I think. It was used in Clipper Phones: http://en.wikipedia.org/wiki/Skipjack_(cipher)

People don't take a 256-bit cryptoalgorithm into a middle school and kill kids with it, so I don't think the analogy works exactly. Maybe if you print it out on paper, or use a floppy disk or CD, you could cut a few people.

People who intend to enter a middle school and kill kids can hide their plans and communications using 256-bit encryption.

Edit: Devil's advocate.

Ever heard about presumption of innocence? Just because you have physical capability to do something, does not give right to spy on you. Now if there is clear evidence that you are predisposed to do something, then you go to the judge and get a warrant for surveillance. And no, expectation of privacy by using strong encryption is not any kind of evidence, it is matter of personal choice/preference.

Fortunately, using encryption now means you have something to hide and thus gives the government reason to spy.

Or they could be loners or they could meet and communicate face to face.

You're right. That's why I'm introducing a bill to make it illegal to have a conversation without a certified government agent (or authorized private contractor) present. To improve citizen's security, a rider on the bill will also make it illegal to talk about, write about, or represent in interpretive dance the existence of those agents.


I suppose most people who shoot up schools have no partners helping them, but at some point they may need to find information to help them carry out the attacks, and encryption would help them conceal the fact that they have this information, and how much information they have.

I just think that a politician moved by the desire to do something could construe non-backdoored encryption as something that "helps the enemy."

Clearly we must curtail this dangerous "face-to-face" communication that cannot be monitored at will by our benevolent government.

Another difference: You don't need a gun to perform the most basic of functions securely.

They occupy exactly opposite quadrants on the useful/dangerous axis.

The ability to defend one's self is a basic function. Being dangerous can be useful. Encryption is a tool for guarding privacy, and weapons are tools for guarding against physical threats.

Not having widespread access to firearms in a society doesn't imply that its citizens are defenceless. E.g. some societies skew towards longer-term strategies like reducing desperation or increasing self-control.

There is a cost to having a society saturated with firearms. The vivid, individualistic, but rarely used benefit of personal defence has to be weighed against the boring, common case of excessive violence and escalation due to access to and glamorization of firearms.

It is used to aid in the creation and distribution of child pornography, so the analogy is exact - unless of course you don't view the molestation of those middle schoolers to be an attack on them (as the downvotes seem to indicate).

The funny thing is 56-bit encryption is still in use in the form of PPTP with MS-CHAPv2. I bet most of the decrypted VPN traffic mentioned in the article uses that.
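Back-of-envelope arithmetic shows why the 56-bit keyspace matters. The keys-per-second rate below is an assumption, roughly in line with what dedicated FPGA cracking hardware is reported to achieve:

```python
# Why a 56-bit key (DES, as used in MS-CHAPv2) is within brute-force
# reach while a 128-bit key is not. KEYS_PER_SECOND is an assumed rate
# for dedicated cracking hardware, not a measured figure.

KEYS_PER_SECOND = 1e12

def brute_force_years(bits: int, rate: float = KEYS_PER_SECOND) -> float:
    """Worst-case time in years to try every key of the given size."""
    seconds = 2**bits / rate
    return seconds / (365 * 24 * 3600)

print(f"56-bit:  {brute_force_years(56) * 365 * 24:.1f} hours (worst case)")
print(f"128-bit: {brute_force_years(128):.3e} years")
```

At that assumed rate, exhausting a 56-bit keyspace takes under a day; a 128-bit keyspace takes on the order of 10^19 years.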

Yep, I'm ashamed I didn't make the connection last week when I signed up for a PPTP VPN. They've been broken for a while now.

"In one case, after the government learned that a foreign intelligence target had ordered new computer hardware, the American manufacturer agreed to insert a back door into the product before it was shipped, someone familiar with the request told The Times."

Wow.... this really puts all the furor over Huawei contracts in the US in context.

All that furore over Huawei contracts in the US was just projection, wasn't it? You might be more secure buying your network kit from Huawei than from a US manufacturer.

Sounds like it was justified, really: the NSA know, since they've done similar things themselves, that it's possible, so it's not a stretch to assume China is doing the same thing.

So at this rate are there any encryption methods that we're pretty sure that the NSA cannot crack?

  By introducing such back doors, the N.S.A. has
  surreptitiously accomplished what it had failed 
  to do in the open. Two decades ago, officials 
  grew concerned about the spread of strong 
  encryption software like Pretty Good Privacy, 
  or P.G.P., designed by a programmer named Phil 
  Zimmermann. The Clinton administration fought 
  back by proposing the Clipper Chip, which 
  would have effectively neutered digital 
  encryption by ensuring that the N.S.A. always 
  had the key.
Link to Paragraph w/ highlighting: http://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet...

Should I bother to read up on PGP?

The N.S.A. hacked into target computers to snare messages before they were encrypted. And the agency used its influence as the world’s most experienced code maker to covertly introduce weaknesses into the encryption standards followed by hardware and software developers around the world.

This is mostly a confirmation of what has been supposed: No magic, mostly bribed and coerced cooperation from the people who should be keeping our communications secure.

And while it doesn't do anything for the credibility of US-based companies, N.B.: "hardware and software developers around the world."

So, should we re-evaluate if Intel/AMD's chips (and possibly even the new ARM ones) contain hardware backdoors for the NSA?

I would assume the NSA has evaluated every plausible attack, and implemented them based on what they want to get out of it, and that they have global reach into chips, peripherals, and software.

If you are a foreign government, hostile or friendly, I don't see much of a case to be made for "Naw, they wouldn't..." They would, they probably can, and they probably already did.

If you are a consumer, the main problem is the creepiness factor. Who wants to use incrementally more technology if along with it you get incrementally more surveillance?

If the source code/hardware diagrams are kept private you should assume backdoors, always, with everything. How is there any other way to know for sure otherwise?

These government agencies are obviously dug much deeper into private industry than many expected, so I wouldn't put it past them.

If you see the diagram, and someone else makes the chip, how do you know the diagram matches exactly with what's on the chip? Unless you can make your own chip from the diagram, you still cannot be sure.

Very valid point, and you're right, you really would have no idea. You could check on some devices if you knew enough about hardware to compare the internals and the diagram, but that is a select few people.

This is why customer-company relationships and company integrity is becoming increasingly important, and frankly not many US companies are doing well in that regard.

Snowden claimed a while back that encryption itself was not broken by the NSA, but that the endpoint security usually was (no surprise there): http://www.theguardian.com/world/2013/jun/17/edward-snowden-...

Can someone who actually knows about encryption comment on whether it's actually physically feasible for the NSA to have actually broken, say, SSL 3.0 (which has 128 bits of entropy, IIRC) on a large scale (i.e., when you're sifting through petabytes of data on a daily basis)?

And if this were really an issue, couldn't you just use 4096-bit RSA (unless they have managed to surreptitiously insert a backdoor in it)?

Brute force is only required if there isn't a vulnerability (either in the algorithm or that the NSA has a key).

> Classified N.S.A. memos appear to confirm that the fatal weakness, discovered by two Microsoft cryptographers in 2007, was engineered by the agency. The N.S.A. wrote the standard and aggressively pushed it on the international group, privately calling the effort “a challenge in finesse.”

> N.S.A. documents show that the agency maintains an internal database of encryption keys for specific commercial products, called a Key Provisioning Service, which can automatically decode many messages. If the necessary key is not in the collection, a request goes to the separate Key Recovery Service, which tries to obtain it.

Neither of those apply to SSL 3.0, do they?

Well, it's not that simple, is it? Generally speaking, the public key algorithm used in SSL is used to protect the symmetric key that is used to protect the data:

1) Site sends its public key certificate to the browser.

2) Browser verifies the certificate against its in-browser trust store.

3) Browser extracts the public key from the certificate.

4) Browser generates a symmetric session key.

5) Browser encrypts that symmetric key with the site's public key.

6) Browser sends the encrypted symmetric key to the site.

7) Site decrypts the symmetric key with its private key (the one associated with its public key certificate).

8) Site and browser encrypt and decrypt data using the shared symmetric key.
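A toy sketch of steps 4 through 8 above: textbook RSA with tiny primes and an XOR keystream, deliberately insecure, purely to show the shape of the key-transport pattern rather than a real TLS implementation.

```python
import hashlib
import secrets

# "Site" key pair (tiny primes; real keys are 2048+ bits)
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream."""
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

# 4) Browser generates a symmetric key (small enough to fit under n)
sym_key = secrets.randbelow(n - 2) + 2

# 5-6) Browser encrypts it with the site's public key and sends it
encrypted_key = pow(sym_key, e, n)

# 7) Site recovers the symmetric key with its private key
recovered = pow(encrypted_key, d, n)
assert recovered == sym_key

# 8) Both sides now encrypt/decrypt with the shared symmetric key
msg = b"hello over the toy channel"
ct = xor_stream(sym_key.to_bytes(4, "big"), msg)
pt = xor_stream(recovered.to_bytes(4, "big"), ct)
assert pt == msg
```

The point of the pattern: only the slow public-key operation protects the small symmetric key; the bulk data is protected by the fast symmetric cipher.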

If I break the public key algorithm(s) used in SSL, I break all of that. But we think that's tough (believed computationally infeasible, though problems like factoring aren't actually known to be NP-hard).

If I break SSL itself (find flaws in negotiation, etc.) I might be able to break all of that. That's been done a few times (SSL v1.0 is garbage; v2.0 is borked; v3.0 is a little broken; only TLS 1.2 is borkless, so far). As far as we know.

If I break the symmetric algorithm(s), then I can get at the data without breaking SSL itself. But we think that's tough, too. As far as we know.

If any of the software used in any of the above is borked, then usable attack vectors may exist. Or not. We don't know until we find them.

It's a complex box of moving parts with many potential attack vectors, many potential vulnerabilities, etc.

Who really knows if the NSA might have found a multi-vector, multi-vulnerability attack that allows them to get at a lot of encrypted data without having broken all of any one of those things.

It's purest speculation until someone with a sufficient clearance and sufficient need-to-know decides to speak out, and even then it would remain unconfirmed.

SSL relies on a chain of trust, and it's prudent to assume that the NSA has the private keys necessary to produce valid certificates that will be accepted by the certificates that ship with Windows, OS X, Firefox, etc out of the box.

So man-in-the-middle attacks are certainly within their capability and fairly hard to detect. As to whether the NSA can passively intercept and decrypt SSL traffic, I don't know, but they may not need to.

It looks like there's no chance for even the slightest expectation of privacy. Even if the data is encrypted they can ask American companies to decrypt it, after all they store the encryption keys. Even if the encryption keys are stored on the client side, they can push fake updates through major browsers or straight out compel American companies to insert backdoors in their software (e.g. Google Chrome) and get access to those keys. Our reliance on these services is what most likely would need to be avoided in the pursuit of privacy, but could you live without Google Search, Google Maps, GMail, Outlook, and on and on?

You want to use something like TLSpool (http://www.tlspool.org) and DANE. For a browser something like Firefox's Certificate Patrol is a great solution.

How are they hard to detect? Wouldn't solutions like certificate pinning prevent this?

Yes, certificate pinning would alert the user to a MITM attack, but it's not commonly used. By "hard to detect", I meant that it's impossible to see simply by examining the certificate if it's genuine, you can only detect when the certificate changes. And since SSL certs expire and are re-issued all the time, it makes it a fairly large headache to continually try and guess whether the other party changed their own cert or if you are experiencing a MITM attack.
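A minimal sketch of the pin check, assuming the simplest form: pinning the SHA-256 fingerprint of the whole certificate. Real deployments (e.g. HPKP, Chrome's built-in pins) usually pin the SubjectPublicKeyInfo instead, so the pin survives routine reissuance as long as the key pair is reused.

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def pin_ok(der_cert: bytes, pinned: set) -> bool:
    """Accept the connection only if the cert matches a pinned fingerprint."""
    return fingerprint(der_cert) in pinned

# Hypothetical usage: the DER bytes would come from something like
# ssl.SSLSocket.getpeercert(binary_form=True) on a live connection.
genuine = b"\x30\x82 stand-in for a real DER blob"
pins = {fingerprint(genuine)}
assert pin_ok(genuine, pins)
assert not pin_ok(b"mitm-substituted cert", pins)
```

This is exactly the headache described above: the check itself is trivial, but deciding when a changed fingerprint is legitimate reissuance versus an attack is the hard part.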

Isn't it considered best practice, especially for situations where you can control the client - i.e. banking apps on your phone?

Your self-signed 4096-bit key is probably fine. Even better, be your own CA.

People assume CAs should be trusted but it's a huge game of chicken: If the NSA can't break SSL, you have to assume that either SSL is opaque to them, which these revelations seem to contradict, or they have corrupted the CAs.

Or that SSL is irrelevant because they've tapped whatever service you're connecting to.

This is all theoretical, but you could use decentralized services/protocols that would eliminate such an opportunity.

If I was in the NSA (which I am not) I would place a backdoor in the browser themselves, and since the browsers auto-update from the internet anyway, I would change the DNS provider for the machine being watched (remember the DNS settings generally default to that provided by your ISP) to point to the NSA-version of the browser, and then the user would be browsing securely, but after decryption and before display, the payload would be sent elsewhere to be collected.

I don't think this would be particularly hard either. For IE, the NSA can just get MSFT to do it. For Firefox, they can compile from source, and for Chrome, well, they can probably compile from source too, because they probably have access to the build source of Chrome, with or without GOOG mgmt knowledge.

Can anyone come up with a (technical) reason the NSA could not be doing this?

For a specific target? Sure, why not.

But if they did that to everyone? Surely it would be noticed. Probably very quickly. There are a LOT of smart security researchers scouring browsers for bugs and running them in carefully controlled environments every day. Someone would also eventually notice that the production binary doesn't match the version built from source, especially for open-source browsers.
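The "binary doesn't match the version built from source" check is just a hash comparison, assuming the build is reproducible. The file contents below are stand-ins for real build artifacts:

```python
import hashlib
import tempfile

def sha256_file(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def write_temp(data: bytes) -> str:
    """Write stand-in 'build' bytes to a temp file and return its path."""
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(data)
        return f.name

shipped = write_temp(b"\x7fELF official build")
rebuilt = write_temp(b"\x7fELF official build")
tampered = write_temp(b"\x7fELF official build + backdoor")

assert sha256_file(shipped) == sha256_file(rebuilt)
assert sha256_file(shipped) != sha256_file(tampered)
```

The hard part isn't the comparison, it's making the build reproducible in the first place (stable timestamps, paths, and compiler versions) so that independent rebuilds are bit-for-bit identical.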

Browsers auto-update over SSL and hopefully use certificate pinning.

Your machine would be showing an extra outbound connection.

Very few people have any idea how many outbound connections their machine opens or which software is opening them. There seems to be just enough people paying attention to this that it would be caught but most people would never know.

It might be but it is highly dubious. However, they MIGHT have put some effort into "plugging" each implementation and planting a subtle bug in them. You never can tell.

It is not so much the protocol that matters as the implementations.

Imagine they "rig" all those beautiful hardware RNGs. Could you tell the difference?

Are you sure renowned developer X van Y is not an NSA mole?

Remember the OpenBSD IPSec backdoor allegation?


The protocol itself would still seem to be safe (or rather, have safe combinations of key exchange and encryption).

But it is certainly feasible that if they manage to find cracks in popular-but-old communications protocols that they are able to automatically decrypt them, or use prior key recovery successes to bootstrap fast attacks on new communications from the same host.

What would be interesting is if NSA's own "Suite B" crypto recommendations are susceptible to these risks, as that would potentially represent a rather significant break in the U.S.'s own COMSEC, and COMSEC is one of the things NSA is very specifically tasked with ensuring are safe with no backdoors for anyone to jump through.

Backdoors in the NSA recommendations could be due to trapdoor functions that only NSA has the key for. Other parties would therefore be unable to utilize that backdoor (short of the secret being exfiltrated).

Inserting such a keyed backdoor is much more difficult to do undetected, and more limited in scope (has to be done separately for every crypto algorithm being backdoored), than introducing a flaw in a hardware or software RNG implementation.
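A toy analogue of such a keyed trapdoor, modeled loosely on the Dual EC structure but in an ordinary multiplicative group rather than on an elliptic curve (the real generator also truncates its output, so a real attack must brute-force the missing bits; all constants here are arbitrary):

```python
# The designer publishes two constants Q and P, secretly related by
# P = Q^d mod p, and keeps d to themselves. Anyone who knows d can
# recover the generator's internal state from a single output.

p = 2**127 - 1            # toy prime modulus (a Mersenne prime)
Q = 5                     # public constant
d = 123456789             # the designer's secret trapdoor
P = pow(Q, d, p)          # published constant; relation to Q is hidden

def step(state: int):
    """One generator step: emit output, advance internal state."""
    output = pow(Q, state, p)      # what the user sees
    next_state = pow(P, state, p)  # kept internal
    return output, next_state

# A user runs the generator...
s0 = 42424242
out1, s1 = step(s0)
out2, _ = step(s1)

# ...and the trapdoor holder, seeing only out1, recovers the next state:
recovered_s1 = pow(out1, d, p)   # (Q^s0)^d = (Q^d)^s0 = P^s0 = s1
assert recovered_s1 == s1

# With the state in hand, all future output is predictable:
predicted_out2, _ = step(recovered_s1)
assert predicted_out2 == out2
```

This also illustrates why such a backdoor is "NSA-only": without d, recovering the state from an output is a discrete-log problem, presumed hard for everyone else.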

This is likely a minority view, but I have no problem with the NSA being able to break encryption, that's in fact part of their job. Decoding encryption has long been part of their mission. I also suspect they're not alone in terms of signals intelligence groups in having this capability.

The issue to me has always been how and what data they access and store, and how it is used.

The larger issue here is that they "covertly introduce weaknesses into the encryption standards". It's not that they cleverly and fairly break encryption, it's that they sabotage the standards.

I have a problem with encryption being breakable, regardless of who's doing the breaking. I want encryption to be mathematically solid with the only option being brute-force older-than-age-of-earth time. When we get to quantum computing, then I don't know what we'll do...

Quantom computing cannot break all of crypto. Anything based on P!=NP is believed to be secure against quantom computing, and there are several encryption methods backed by P!=NP

> Quantom computing cannot break all of crypto.

Correct (except for the spelling of "Quantum").

> Anything based on P!=NP is believed to be secure against quantom computing, and there are several encryption methods backed by P!=NP

Incorrect, well mostly. The deal is that there are problems that can be done in "polynomial time" (how long it takes is not exponential in the size of the key) for a normal computer (or person); the set of these is called "P", the ones that CANNOT be done on polynomial time is "NP". And there are problems that can be done in "polynomial time" (with reasonable limits on errors) by a quantum computer; the set of these is called "BQP". If P = BQP it would mean that quantum computers can (in reasonable time) solve all the same problems that classical computers can. But in fact, P is a subset of BQP: there are problems that are "hard" for classical computers but "easy" for quantum computers.

An example of this is factoring numbers. Shor's algorithm is a way to factor numbers using a quantum computer and it runs in polynomial time. Now it isn't practical today: the biggest quantum computers in existence are hard put to factor the number "10", much less some 40-digit monstrosity. But computers only get better.
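The structure of Shor's algorithm is easy to sketch: everything except the period-finding step is ordinary classical number theory. Here is a toy version (function names are mine) that brute-forces the period classically, which is exactly the part a quantum computer speeds up:

```python
from math import gcd

def find_period(a, n):
    """Brute-force the multiplicative order of a mod n -- the step
    Shor's algorithm does in polynomial time on a quantum computer."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a=2):
    """Classical skeleton of Shor's algorithm, practical only for tiny n."""
    g = gcd(a, n)
    if g > 1:            # lucky guess: a already shares a factor with n
        return g, n // g
    r = find_period(a, n)
    if r % 2:            # odd period: in practice, retry with another a
        return None
    y = pow(a, r // 2, n)
    for f in (gcd(y - 1, n), gcd(y + 1, n)):
        if 1 < f < n:
            return f, n // f
    return None

print(shor_classical(15))   # (3, 5)
print(shor_classical(21))   # (7, 3)
```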

Fortunately, there are problems which are NOT in BQP -- problems that are hard even for quantum computers. And these are the ones you want (not those in NP) if you want to stymie a quantum computer.

For more details, see http://www.scottaaronson.com/papers/bqpph.pdf or frankly ANYTHING written by Scott Aaronson (http://www.scottaaronson.com/blog/).

> the ones that CANNOT be done on polynomial time is "NP".

I see you've solved one of the great open problems!

NP is defined as the problems that a nondeterministic Turing machine can solve in polynomial time. Imagine, if you will, a Turing machine that when it "branches" always chooses the right path (or chooses "both" without overhead).

Yes, and that part of his comment would also imply P != NP:

> Fortunately, there are problems which are NOT in BQP

We don't know yet if NP \ BQP is non-empty (and neither do we know if BQP \ NP is non-empty).

But we do know that BQP \ P is non-empty, which is what I was trying to say. In fact, factoring large integers lies in BQP \ P, and is also the basis of some commonly used encryption algorithms.

No, we don't know that either. For all we know, we could have BQP = P: today, we don't know any algorithm in P to factor integers, but that's not a proof that it doesn't exist. If we had such a proof, as mcpherrinm points out this would directly lead to a proof that P != NP (since we DO know that factoring is in NP).

Also, I don't mean to pile on, but this wasn't what you were trying to say in the paragraph I quoted: in that paragraph, you said that we just had to pick problems outside BQP to make cryptography work despite quantum computers. I don't know if that's what you had in mind, but for such an algorithm to be tractable, it should at least be in NP (nobody with a deterministic computer wants to spend an exponential amount of time establishing an SSL connection): so the mathematical statement is whether NP \ BQP is non-empty. Did I miss something?

The latter of which sounds suspiciously like what a quantum computer does.

How sure are we that BQP != NP?

We aren't: https://en.wikipedia.org/wiki/BQP

However a quantum machine that would be capable of post-selection is described by the more powerful class PostBQP = PP, and we know that PP includes NP, so this justifies your analogy.

I don't know much about quantum physics or quantum computing, so I may be mistaken, but it seems to me that post-selection is more of a philosophical construct than something that is physically possible, though.

I've seen post-selection demonstrated in a laboratory. It definitely isn't just a philosophical construct, except inasmuch as what it says about time making people want it to be less than real, and logic doesn't work that way.

As for whether you could make a PostBQP-capable computer, though.. I don't think so, at least in the most general case. I don't understand this nearly well enough to be sure, but from what I've heard, tricking causality like that has the problem that you're increasing the chance of your circuitry failing right along with the chance of getting the right result, and quantum computers are already hard enough.

The latter of which is exactly what a quantum computer does. The problem is that when we make a measurement, we randomly select one of the execution paths and see its result; and once we make a measurement, we have to repeat the whole experiment in order to make another one.

Oops... my bad. Sorry.

Everyone focuses on the decrypting power of quantum computing, which is warranted as it will break _modern_ cryptography (in theory). However, think about the encrypting power a quantum computer will have; being able to generate completely random keys among other facets of crypto that seem implausible with modern computers. I think it will just open the door to a whole new science of crypto we haven't reached yet.

The most concerning point is that transitional point where all of the old, quantum breakable, standards are in place when quantum systems roll around.

Either everything will fall apart or we'll have wised up beforehand.

We've seen this phenomenon, just on a smaller scale. When an encryption standard is compromised, people either switch to something stronger or face the consequences of essentially leaving their data in the open.

DES was an acceptable standard until the late 90s, but you'd be foolish to use it now. RSA is the accepted standard now, but it is said to fall within 5 years or so. At that point, people will either start paying for a license to use Elliptic curve encryption algos or something new will be found. It's a vicious cycle, but I certainly don't think encryption ends with quantum computing, but maybe disrupts it quite a bit for some time.

If we get quantum cryptography out the door in a reasonable space of time, then we can do secure one time pad exchanges on there. Doesn't matter what you throw at that, since with the right keys you can derive any message of the same length from it.
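That "derive any message of the same length" property is easy to demonstrate (a toy sketch; the messages and key bytes are made up): for any ciphertext and any candidate plaintext of equal length there exists a key connecting them, which is why a one-time pad is information-theoretically secure:

```python
def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings -- both OTP encryption and decryption."""
    return bytes(x ^ y for x, y in zip(a, b))

real_msg = b"attack at dawn"
key = bytes([7, 42, 13, 99, 200, 1, 55, 90, 18, 73, 111, 4, 250, 33])
ciphertext = xor(real_msg, key)

# An adversary holding only the ciphertext can rule nothing out:
# every same-length message corresponds to SOME valid key.
decoy_msg = b"retreat at ten"
decoy_key = xor(ciphertext, decoy_msg)
assert xor(ciphertext, decoy_key) == decoy_msg
```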

I guess I'm with you on the ability to crack. Any researcher should be able to try as hard as they want, and succeed.

I draw the line at collecting everything without specific warrants, regardless of what they do with it, against their charter and the Constitution.

I draw the line at hardware backdoors for equipment that I buy, and insertion of vulnerabilities into encryption standards that I take advantage of. Or I guess I should say that take advantage of me.

I'd agree with that. I've often wondered if where we're headed is some kind of reform compromise. Not that I think it's ideal or right, but for example I could see the NSA having a Chinese wall around data on Americans, such that the FBI and other investigators could not use data collected by the NSA, but could open their own collections with a warrant.

I'm quite opposed to what the NSA has done - but I don't see anything happening that will change it. If the recent revelations haven't done anything to stir Congress to action I don't know what will. Additionally, until and/or unless the NSA ever uses data collected this way against an American citizen in a judicial/criminal way, the courts are quite likely to find that petitioners lack any standing.

As a non-American, I find your lack of support for people who weren't born in your little patch of land quite uninspiring.

Well, I also assume that many other signals intelligence agencies are collecting data. Having worked for a time at a large global PR firm, I am sure many if not most of our communications were intercepted, especially on work once done for a China-based phone hardware maker. I think it would be a massive mistake to assume this is a United States-only issue; so far the United States is just the only country to have had someone leak the information.

So, yes, my most immediate legal concern is how the US government could use the data collection against me contrary to the protections I'm given. That doesn't mean there aren't many more and larger concerns to be thought through; I was merely speaking to one of them.

> I have no problem with the NSA being able to break encryption, that's in fact part of their job.

Their "breaking" of encryption is a combination of purposefully introducing vulnerabilities into standards, surreptitiously altering software and hardware to give the NSA a backdoor, hacking into private systems and stealing keys, etc etc.

I'm cool with an NSA super computer trying to brute force my VPN traffic to YouTube, I'm not cool with the NSA planting an engineer at a chip fab and changing designs to add a backdoor (a backdoor that could also be exploited by other actors).

I will bet good money that the NSA has never bothered to try and plant backdoors in encryption standards.

If the NSA recommends AES to the US government, but knows there's a vulnerability, then they have to assume that any adversary may be as good as whoever designed it. Which means an adversary would be perfectly capable of discovering and exploiting the weakness. Which in turn means the NSA has just made the entire US government vulnerable to foreign or non-state actors.

The same applies to anything you might imagine doing to chipmakers. Not only is there the risk of being found out (what do you do when a talented engineer spots the flaw in a schematic; how high up do you have to go to stop that leaking?) but there's the more serious risk that you've just added a backdoor to hardware you yourself need to be secure. Which, again, could be discovered by an adversary and used against you.

This whole line of argument has always been speculative fiction on the part of the internet: it's looking for unicorns because you heard hoofs.

> I will bet good money that the NSA has never bothered to try and plant backdoors in encryption standards.

Did you read the article?

> By this year, the Sigint Enabling Project had found ways inside some of the encryption chips that scramble information for businesses and governments, either by working with chipmakers to insert back doors or by surreptitiously exploiting existing security flaws, according to the documents.

Seems pretty cut and dry.

Which in turn means the NSA has just made the entire US government vulnerable to foreign or non-state actors.

Yes, and it says this in the article.

And it makes perfect sense too. If you were the NSA, wouldn't you want a heads up if someone was talking about budget cuts?

I was wondering about AES myself. When you look at the wikipedia entry for AES it says it was "approved" by the NSA, which doesn't exactly inspire me with confidence. I don't think they would approve something that they haven't already cracked or backdoored.

Did you even read the article?

The leaked documents confirm that this is exactly what happened.

"The leaked documents" is as much detail as the article goes into. Given the history of reporting on this matter and the walking back done on all the original leaks, color me skeptical its as dramatic as it sounds and not what I said: the NSA engages in a bunch of spycraft since encryption is based on trust, trust is easy to compromise, but again: I'll bet AES encryption has no easily exploitable weaknesses, and SSL is completely secure provided you don't need to use the root chain of trust associations which all ultimately wind up at "the government".

If the NSA can, others can. It makes the whole thing useless.

Encryption is still useful. The NSA can read your messages, but not everyone can. Encryption will still protect your bank transactions and Wikipedia reading. No encryption will protect you from a government turned Evil, and I hope you realise this. Whether the US government is good or evil is all a matter of perspective. War is peace, Mr. Orwell wrote, and surely there has been a lot of that going on in US foreign policy over the last few decades.

The problem isn't that they're working to break encryption. The problem is that they're maliciously inserting backdoors and subverting crypto-research and publication, which puts all of our security at risk. (Not to mention runs counter to their stated mission)

There are a couple of other issues, even if one agreed with your view:

1. They obviously can't keep their own secrets, so it's unlikely that they will do any better than keeping yours. Eventually, your data will leak out to non-NSA people.

2. By adding backdoors, they weaken the encryption. This implies that anyone with sufficient skill who goes looking for backdoors may be able to exploit the hole that the NSA opened up. This is a big deal, especially if you have secrets that you need to protect.

It's definitely great knowing that our government is willing and able to commit pretty serious industrial espionage, and if anyone tries to do anything about it, hey, we have nukes too. Don't worry everyone, we're the good guys! We promise to send you some foreign aid after you've come to terms with your subjugation. /s

We cannot rely on them to always be good, so we also need the former: encryption to protect us when the law or lawmakers are not working in our best interest.

"Cryptographers have long suspected that the agency planted vulnerabilities in a standard adopted in 2006 by the National Institute of Standards and Technology, the United States’ encryption standards body, and later by the International Organization for Standardization, which has 163 countries as members."

Wonder if it is referring to the Dual_EC_DRBG RNG.

Well, it goes on to say "Classified N.S.A. memos appear to confirm that the fatal weakness, discovered by two Microsoft cryptographers in 2007, was engineered by the agency." The Dual_EC_DRBG vulnerability was revealed by two Microsoft researchers in 2007: http://rump2007.cr.yp.to/15-shumow.pdf

So I'd say yes, it sounds like that's what they're talking about.
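For anyone curious how such a trapdoor can work, here's a toy analogue (all numbers and names here are made up, using a multiplicative group mod p instead of an elliptic curve, and skipping Dual_EC_DRBG's output truncation, which in the real attack forces a small brute-force): if the published constant P is secretly Q^d, whoever knows d can recover the generator's internal state from a single output:

```python
# Toy analogue of a Dual_EC_DRBG-style trapdoor. NOT the real algorithm:
# a multiplicative group replaces the elliptic curve, and outputs are
# untruncated. All parameter values below are invented.
p = 2**61 - 1            # a Mersenne prime, for a clean toy group
d = 123456789            # the designer's secret trapdoor
Q = 5
P = pow(Q, d, p)         # the published, supposedly "random" constant

def drbg_step(state):
    output = pow(Q, state, p)      # what the user sees
    next_state = pow(P, state, p)  # kept internal
    return output, next_state

# Honest user generates "random" numbers:
s = 424242
out1, s = drbg_step(s)
out2, s = drbg_step(s)

# The designer, seeing only out1 and knowing d, recovers the state:
recovered_state = pow(out1, d, p)  # out1^d = Q^(s*d) = P^s = next state
predicted_out2, _ = drbg_step(recovered_state)
assert predicted_out2 == out2      # every future output is now predictable
```

Only the holder of d gets this power, which matches the "keyed backdoor" point made elsewhere in this thread.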

Speaking of which, I'm really quite frustrated how many of these recent reports about the NSA elide the technical details. You have to read between the lines to figure out what's really going on, what weaknesses there really are.

As a matter of security, it would be better to know specifically what vulnerabilities there really are. Merely announcing that a vulnerability exists can allow a dedicated black-hat to find and exploit it; but someone who's trying to secure their system, and isn't following cryptography incredibly closely, won't know what they need to do or change to make their systems more secure against these types of attacks.

There's a reason that the security community advocates for full disclosure (or at least responsible disclosure, if it's possible to selectively disclose to a few vendors so they can do a coordinated release that fixes the vulnerability before it becomes public), in which you completely disclose a vulnerability so people aren't left guessing about it.

> Speaking of which, I'm really quite frustrated how many of these recent reports about the NSA elide the technical details.

Are you? Well please sign up to work for the NSA, learn the technical details, then go public with them. The reason that the NYTimes isn't publishing the technical details is because they DON'T KNOW THEM. (They might not publish them if they did.) They don't know them because Edward Snowden was a system administrator not a cryptography expert and he's releasing memos about the process.

From the article:

> "Intelligence officials asked The Times and ProPublica not to publish this article, saying that it might prompt foreign targets to switch to new forms of encryption or communications that would be harder to collect or read. The news organizations removed some specific facts but decided to publish the article because of the value of a public debate about government actions that weaken the most powerful tools for protecting the privacy of Americans and others."

NYT, the Guardian, etc do have access to these details, but chose not to publish them.

They say that they were asked not to publish at all, but did so anyway and chose to remove some specific facts. I don't understand how you get from that to concluding that they know (and are suppressing) the particular vulnerabilities that the NSA is exploiting.

I assumed it was that, and that case is puzzling but benign, as the algorithm is much too slow to be chosen over the alternatives[1]. As far as anyone can tell, this wasn't their best work:

If this story leaves you confused, join the club. I don't understand why the NSA was so insistent about including Dual_EC_DRBG in the standard. It makes no sense as a trap door: It's public, and rather obvious. It makes no sense from an engineering perspective: It's too slow for anyone to willingly use it. And it makes no sense from a backwards-compatibility perspective: Swapping one random-number generator for another is easy.

[1] http://www.schneier.com/essay-198.html

There is an old saying that states that a jealous husband or wife can't be trusted. They don't trust you because they are, have, or are thinking about fucking someone else.

When the combined '5 eyes' come out and ban Lenovo / Huawei from being used on any of their secure networks, because of fears of back doors [1], one has to imagine that the same is true of themselves.

The hardware is most likely backdoored, as well as the firmware, the OS, and installed software. I would not trust anything, even open source, because, to be perfectly honest, there are very few people who really are smart enough to understand the in-depth cryptographic requirements. If such people exist, they probably already work for the NSA or GCHQ.

If you want to plan a terrorist attack or become a politician or business leader who does not want to be blackmailed, don't do anything on the internet apart from share pictures of cute cats.

My advice to any terrorists is to go dark. Speak in private. Write it down, pass the note, and then burn it. Use old methods like book ciphers. Touch an electronic device and they have you.

Legal note: Of course I'm not advocating 'advising' terrorists, well only the good ones, you know those ones that we call 'freedom fighters'. The ones western governments like to back when it suits their purposes.

[1] http://www.infosecurity-magazine.com/view/33679/lenovo-compu...

I feel like these kinds of articles are meant to induce a sense of hopelessness regarding the ability to push back against the NSA.

If it turns out one-way functions actually don't exist, I'll give in and learn to love Big Brother. Barring that, I'll continue considering communications freedom (and all that it implies) as our manifest right and view these types of breaks as implementation errors.

You mean ability to push back _technologically_ against the NSA, right? This sort of article makes you think you can't beat the NSA tech, they will outsmart you.

What this sort of article does to me (unlike you, I make no claims to know what the article was 'meant' to do, other than report the news) is make it clear that we need to push back against the NSA _politically_ to win: make what they are doing illegal, change the gag order laws, etc. We aren't going to beat them technologically, but (for those of us in the U.S.) it's theoretically a democracy; we can tell them to stop.

I've seen that argument made before, several times, in essays linked to on HN. It's a political problem, not a tech problem, that the NSA can force corporations to install back doors and give the NSA the keys.

The problem is technological, as deficiencies in relied-upon communication technologies are what have allowed surveillance to scale from human intelligence on prioritized targets to dragnet scrutiny of everybody. No matter how much effort is required, "law enforcement" will always be snooping on some suspects; what we'd like to prevent is an institutionalized fishing expedition.

You're signing up for a losing game. The myth of Democracy (tm) is another layer of control over individuals.

1. Most people will never have a problem with what the NSA is doing. They support the NSA's goals (tautology, since as you've mentioned, it is responsible to the majority), and if its methods end up causing harm to enough people, they will simply be adjusted to reduce aggregate harm (not to rule out any possible harm). The feedback loop of democracy works on specific actualities, not hypothetical corner cases.

2. The most memetically fit ideas are the simplest ones that elicit the strongest feelings (see: bikeshedding). Outrage peddlers swamp the political reception bandwidth with lowest common denominator controversy - usually judgments on other's lifestyles.

3. Even if there is a widespread preference to reduce the scope of the NSA, the people simply do not have the transmit bandwidth to make this preference clearly known. And they are easily led into squandering their input on the aforementioned manufactured controversy.

4. Elected figures don't actually run the government, the entrenched bureaucracy does at an imperceptible glacial pace. The elected figures run interference by making the majority believe they voted for this shit.

This essentially bolsters the claims in this article that the NSA has "neutralized" SSL.


This "SSL Locksmith" software isn't really a top-secret NSA tool: http://www.accessdata.com/products/cyber-security/ssl-locksm...

It's a MITM solution that injects fake certificates, i.e. nothing groundbreaking and equivalent to compromised/corrupt CAs (which, as we know, exist and are able and willing to hand out fake intermediate certs etc. to rogue entities). The whole CA ecosystem is broken, basically snake oil, and pretty much everyone knows it.

Some organizations have IT security departments that attempt to foil encryption already. They use devices to terminate SSL before it leaves their network and forge certs back to clients and basically act as a MITM for the clients making the TLS/SSL request. They do this to inspect the traffic before it leaves the network.
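One sketch of how a client can notice such a terminating proxy (the certificate bytes below are stand-ins, not real DER): pin the expected certificate's fingerprint and compare it against whatever is actually served, since the proxy has to present its own forged certificate:

```python
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a certificate in DER form."""
    return hashlib.sha256(der_bytes).hexdigest()

def is_expected_cert(der_bytes: bytes, pinned: str) -> bool:
    """Pin check: an SSL-terminating proxy must present its own
    forged certificate, so its fingerprint won't match the pin."""
    return cert_fingerprint(der_bytes) == pinned

genuine = b"\x30\x82genuine-cert-stand-in"  # stand-in for the real DER bytes
forged  = b"\x30\x82forged-cert-stand-in"   # stand-in for the proxy's cert
pin = cert_fingerprint(genuine)
assert is_expected_cert(genuine, pin)
assert not is_expected_cert(forged, pin)
```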

I predict that in the next 5 to 10 years, many organizations across all industry sectors will drop/reject encrypted packets (SSL, SSH, SFTP, etc) that they cannot decrypt. And the reason they'll give is that it makes them more secure.

The concern I have (as a security technologist) is that most people who use encryption are not bad; however, everyone is punished and every packet must now be inspected because a few people use encryption to do bad things. So one day soon, I'm afraid, anyone who uses encryption will be suspect simply because they use it, and the stronger the encryption, the more suspect they'll be.

Will it become illegal to do encryption research or use OpenPGP unless you agree to escrow your private key or will everyone be forced to use very weak ciphers? In today's climate (encryption is evil), I see all of these things as very real possibilities.

Cisco firewalls, by default, perform a MITM protocol downgrade attack on the SMTP sessions they see. They modify the SMTP setup to prevent the endpoints from negotiating STARTTLS and cause them to fall back to cleartext communication. Has been true for years.

You can turn it off... but how many admins do? If you want an example of behavior which is completely plausibly-deniable, but which immensely reduces internet security, this is a good one.
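What the middlebox does can be approximated in a few lines (a simulation only; Cisco's SMTP fixup reportedly mangles the capability line rather than deleting it, but the downgrade effect is the same):

```python
def strip_starttls(ehlo_response: str) -> str:
    """Simulate a protocol-downgrading middlebox: remove the STARTTLS
    capability from the server's EHLO reply so both endpoints fall
    back to cleartext, with neither side reporting an error."""
    kept = [line for line in ehlo_response.splitlines()
            if "STARTTLS" not in line.upper()]
    return "\n".join(kept)

server_reply = "250-mail.example.com\n250-PIPELINING\n250-STARTTLS\n250 8BITMIME"
print(strip_starttls(server_reply))  # same reply, minus the STARTTLS line
```

The client simply never learns the server offered TLS, which is what makes the behavior so plausibly deniable.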

Speaking as an American, it's not a problem that the capability to break encryption exists and the NSA has it. It really does make national security stronger if your intelligence people can read enemy communications.

The problem is that the NSA apparently used those capabilities on basically everyone, millions of innocent Americans whose activities should be of no interest to intelligence agencies, not just the handful of genuine spooks and terrorists our intelligence agencies are supposed to protect us from. (To international people: Cosmically speaking, you're not less important than we are, but the NSA's first responsibility is to protect and serve the USA, so them spying on innocent Americans is at least as bad as them spying on innocent foreigners.)

And it has been shown that the NSA provided information to ordinary criminal investigations with no links to terrorism or foreign intelligence, having police say "it's a lucky traffic stop," where the government actually knew the drugs were in that car ahead of time due to a decrypted phone call. This makes a mockery of the Fourth Amendment because, when prosecutors/police lie to the courts about the origin of evidence, the courts cannot properly answer the question of whether their methods of gathering evidence violate the defendant's Constitutional protection against unreasonable search and seizure.

In short, this is coming out -- which, as the article said, will weaken those capabilities -- because the NSA went too far outside their mission scope. If they hadn't done those two things, I'd be willing to bet Snowden wouldn't have leaked this data.

A political counter-argument is that this program may represent terrible value for money in the long run. If we are in a security arms race, this money buys neither weapons nor defences that can't be overcome by opponents simply buying better weapons and defences.

The NSA could have made more of an effort to harden American business and infrastructure to attack. They could have spent the money on developing intelligence sources who actually work for opponents instead of US telcos. They could have fixed zero day exploits.

We are rapidly approaching a time when opponents will be able to attack completely anonymously. American infrastructure or business could be damaged and no one would ever know who did it or why. If that happens, cold war tactics will seem hopelessly naive.

Can someone elaborate on how secure the underlying algorithms still are? Most of the NSA's "foiling" seems to be done via coercing corporations and side-channel attacks. Are TLS, AES, etc. still thought of as secure?

Bruce Schneier has seen the documents, and here's his advice: http://www.theguardian.com/world/2013/sep/05/nsa-how-to-rema...

Saw that too. Thanks!

"the agency used its influence as the world’s most experienced code maker to covertly introduce weaknesses into the encryption standards."

This is the part that truly disgusts me.

I think people were speculating this on HN with this article: http://www.wired.com/politics/security/commentary/securityma...

Why are you surprised, or even disgusted? They've moved on from toppling governments (http://www.bbc.co.uk/news/world-middle-east-23762970) to using technology to do it. I use the term "moved on" loosely, because it is still happening, no doubt. The US has long used its might and influence around the world, especially in Latin America, to do very questionable things, and really no one has batted an eyelid apart from the little guys getting screwed over. Terrorism is just the new guise they are using to justify their actions.

> The documents are among more than 50,000 shared with The New York Times and ProPublica, the nonprofit news organization, by The Guardian, which has published its own article. They focus primarily on GCHQ but include thousands either from or about the N.S.A.

Is this the first time we've seen a 5-digit number to describe the number of documents Snowden has? Of course, these are just the ones used for this story...

I noticed that number as well and don't recall seeing it before, but I do seem to recall reading something about "multiple laptops" of Snowden's. Obviously, one can store a helluva lot of documents on three or four laptops.

So does this mean they have broken or found a bug in RSA, have fast enough computers to brute-force it, or have solved the P versus NP problem (in decreasing order of likelihood)? I am also an encryption noob, so I gather that if they have broken the crypto, then my 4096-bit files will be no more secure than 1024-bit ones. Right?

The best publicly known attacks on RSA reduce the attack time by a few orders of magnitude at best. A functional quantum CPU could reduce that by a few more orders. Your 4096-bit RSA key is still 2^3072 times harder to break, so even with reductions we're still talking about "heat death of the universe" amounts of time to brute force.

RSA has issues but as of yet hasn't yielded entirely to cryptanalysis.

As the article says, it's easier to attack the system and try to get the plaintext, or coerce you into giving up your key through legal means.

Edit: adding a link to Wikipedia's article on post-quantum crypto, it's a good place to start understanding how to answer these type of questions:


"Your 4096-bit RSA key is still 2^3072 times harder to break,"

No, because the difficulty of breaking RSA keys doesn't scale in the same way as symmetric encryption. Integer factorisation is much easier than a brute force search of the keyspace. A 1024-bit RSA key is believed to be roughly equivalent to an 80-bit symmetric key. A 3072-bit key is about as hard to break as a 128-bit symmetric key.

(Source: http://www.keylength.com/en/4/ )
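Those keylength.com figures (which track NIST SP 800-57's comparable-strength estimates) fit in a small lookup; the helper name below is mine:

```python
# NIST SP 800-57 comparable-strength estimates:
# RSA modulus bits -> approximate symmetric-security bits
RSA_TO_SYMMETRIC = {
    1024: 80,
    2048: 112,
    3072: 128,
    7680: 192,
    15360: 256,
}

def symmetric_equivalent(rsa_bits: int) -> int:
    """Largest tabulated strength whose RSA size does not exceed the
    given modulus size (0 if the modulus is below every table entry)."""
    eligible = [s for r, s in RSA_TO_SYMMETRIC.items() if r <= rsa_bits]
    return max(eligible) if eligible else 0

print(symmetric_equivalent(4096))   # 128: between the 3072 and 7680 rows
```

So a 4096-bit RSA key buys roughly 128-bit security, not 2^3072 extra work.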

Ah shoot, you're right. I'm an armchair crypto geek at best.

In any case, you can choose a modulus large enough to still make it a hard problem to crack in a reasonable amount of time. Barring some huge vulnerability in RSA that hasn't been discovered in 30 years of public scrutiny, of course.

Are you sure about that? As far as I understand it, generic quantum computation would cut that '3072' in half, and using quantum computers specifically for factoring reduces problems to a low polynomial time.

Correct. Shor's algorithm renders any use of RSA... pointless.

And while there are limits to the applicability of Grover's algorithm, you're correct that it effectively cuts the number of bits in any cryptosystem it applies to in half. Which, to my nonexpert eyes, looks to be most of them.

Hmm, yes, I think I conflated the asymmetric vs symmetric cases.

Shor's algorithm is very tasty, but when the real world demonstrations at top research facilities are saying, "yes, we factored 21 into 7x3, but WITH ENTANGLEMENT"[1] it makes me think that scaling to RSA-size prime factors is still a good way off.

Listen, the US government is powerful, but building a full-scale quantum crypto decoder ring in complete secrecy _decades_ ahead of everyone else? I just don't think so. Maybe I'm a sheep for not wanting to believe the government is that powerful and corrupt, but the whole thing sounds like a tinfoil fantasy.

I don't doubt they would if they could, though. And they've done as much as they can with present day tech: supercomputers, mass data collection, penetration of target systems, exploiting SSL's many weaknesses, tapping undersea lines, and legally strong-arming perceived threats into giving up their encryption keys. I just don't think we need to get science fiction involved.

[1] http://www.nature.com/nphoton/journal/vaop/ncurrent/full/nph...

See http://en.m.wikipedia.org/wiki/Shor's_algorithm

Hey, I wasn't claiming practical quantum computers existed. Just that, at whatever point running Shor's algorithm becomes practical, RSA will be pointless.

But what if they have some vulnerability in the crypto itself, thus skipping the need to brute force?

If they solved P vs NP... well. Yeah. That would be interesting.

Well, if they found a constructive proof, that is.

The most fascinating part of this article to me is this part, which proves that even a super-secure intelligence agency can still have very weak links that can be penetrated:

> Only a small cadre of trusted contractors were allowed to join Bullrun. It does not appear that Mr. Snowden was among them, but he nonetheless managed to obtain dozens of classified documents referring to the program’s capabilities, methods and sources.

Who knows what other documents other internal hackers could have stolen?

Can someone boil this down and tell me the same thing from the technical side? I.e. what technical barriers have they managed to break (RSA, DSA, AES, etc.) ?

I think you have to assume the answer is: all.

> A 2010 document calls for “a new approach for opportunistic decryption, rather than targeted.” By that year, a Bullrun briefing document claims that the agency had developed “groundbreaking capabilities” against encrypted Web chats and phone calls. Its successes against Secure Sockets Layer and virtual private networks were gaining momentum.

This paragraph interests me the most.

For one, it's clear that their goal is opportunistic decryption; that is, decrypting everything and being able to search through it, rather than targeting known endpoints. This is an important point that a lot of people miss when debating cryptography. While it's fairly likely that the government can find ways to access any communication they want in a targeted manner, as they have so many means to do so (hacking the endpoints, physically breaking in and performing an evil maid attack, etc), widespread encryption is generally good enough to prevent opportunistic data gathering.

The other point I note is that they only mention "web chats and phone calls" in their breakthrough. It doesn't sound like the breakthrough is something that works well for arbitrary SSL connections. The main link I can see between web chats and phone calls is that they are long lived connections, with bursty traffic (HTTP or email protocols, on the other hand, tend to stream a lot of data at once, and then the connection is closed). I'm wondering if there's some kind of traffic or timing analysis vulnerability that they've discovered.

Also interesting is this quote from the Guardian article:

> To help secure an insider advantage, GCHQ also established a Humint Operations Team (HOT). Humint, short for "human intelligence" refers to information gleaned directly from sources or undercover agents.
>
> This GCHQ team was, according to an internal document, "responsible for identifying, recruiting and running covert agents in the global telecommunications industry."

Various technology companies have been adamant in maintaining that they haven't been giving the NSA direct access to their data. However, with HUMINT programs like this, you always have to wonder if the NSA has hired anyone within such companies to put backdoors into their systems, without authorization by the company. Obviously, they'd have to be subtle about it (it's hard to install new gigabit fiber pipes to siphon off the data without anyone noticing), but just setting up a way for the NSA to covertly run queries, disguised as some other type of job that would normally run on the system, would probably not be too hard to do.

    (it's hard to install new gigabit fiber pipes to siphon
    off the data without anyone noticing)
If you have access to manufacturers that can put in back doors for you, I reckon you don't even need your mole to install stuff for you. Instead you just ask your mole to tell you what is going to be installed, and then make sure that the company gets backdoored systems whenever hardware is installed/upgraded/replaced.

I'm wondering if datacenter monitoring utilities like the stuff Boundary[0] is working on could be used to identify anomalies in how network hardware is behaving versus how it should be behaving. I know that in my conversations with Cliff, they are trying to get their monitoring solution to the point where they can visualize "the circulatory system" of a data center, with the goal of spotting things that don't look quite right.

[0] http://boundary.com/

As Snowden showed, most of the time you need only one rogue admin to get what you want.

With a byline from our very own "thejefflarson" (on HN). That's a lovely thing to see.

The Allies had broken most of the Nazi codes during WWII, but they still withheld information from commanders unless it concerned an absolutely strategic battle that hung in the balance. Better to suffer a few dead or some minor setbacks than to let the Germans grow suspicious and start doubting their cryptography. Moral of the story: unless you're the next Osama or Snowden or some major narco-trafficker, it doesn't apply to you.

Or the next MLK...

Half out of humor and half out of worry, I had previously posed a conspiracy theory that the NSA/etc. had undermined (coerced, compromised, whatever) the Internet's certificate authorities. I'm no longer comfortable dismissing it as silly humor. I worry that such a theory now has about as much merit as not.

I now want viable open source web-of-trust encryption for the web as soon as possible.

The worrying part is the "etc" part of your sentence, namely that all federal agencies in the US Government now have unrestricted access to encrypted communications. The DEA and the IRS are only the tip of the iceberg.

If one government agency has your data then the rest of them do too.

Oh, don't forget other governments. If the NSA can do it, China and Russia certainly can too.

From the other article on the same subject:

"Among the specific accomplishments for 2013, the NSA expects the program to obtain access to "data flowing through a hub for a major communications provider" and to a "major internet peer-to-peer voice and text communications system". "

That second one is hard to read other than 'skype'.

I really wish these guys would understand how they're impacting Internet-based commerce. What good is controlling the Internet if people stop using it because of privacy concerns? They seem completely unconcerned about how IT drives the US economy and how a lack of confidence in that sector leads to bad things.

This doesn't make sense. It can't be. Why would cryptography be subject to export regulations then? If we follow this logic, you would think export barriers would have been brought down decades ago and use of NSA cryptography highly encouraged worldwide.

Can I spotlight exactly why installing backdoor in software is especially worrisome?

Why is this not just the same as the other clever ways the smart NSA listens in things (not that I'd like but there's something more)?

Well, the thing about backdoors is that they get installed in everyone's software/chips/machines, and then... someone else, someone with less to lose than the NSA, starts to use them for more crudely nefarious reasons, either criminal activity or spying by other nations.

All of this bears resemblance to the former USSR. Once bureaucracy claimed unlimited political power, the next step was for the "mafiya" to take advantage of the universal silence and surveillance.

> The N.S.A.’s Commercial Solutions Center, for instance, invites the makers of encryption technologies to present their products to the agency with the goal of improving American cybersecurity. But a top-secret N.S.A. document suggests that the agency’s hacking division uses that same program to develop and “leverage sensitive, cooperative relationships with specific industry partners” to insert vulnerabilities into Internet security products.

That sounds a lot like "the division that provides security advice was providing advice that would make it easier for the NSA to break in".

Page 4 of the article was the most interesting.

This thread reminds me of an xkcd comic which is a good description of what might be happening: http://xkcd.com/538/

I wonder if the recently discovered problems with non-unique key parameters could be the result of the cooperation of particular network gear vendors with NSA.

TLS/SSL has a whole bunch of options that are negotiated between client and server to find one that they can both accept. I speculate that some of these may be badly broken by the NSA but the exact ones haven't been revealed so we don't know which ones need taking off the table.

Is there anything unusual about the cipher options offered by NSA/GCHQ servers? Or any recent changes at The NYTimes or Guardian's servers.
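As a sketch of what "taking options off the table" looks like on the client side (the suite string here is an illustrative policy of my own choosing, not something revealed or recommended by the article), Python's `ssl` module lets you pin a context to a narrow set of suites before any negotiation happens:

```python
import ssl

# Restrict a client-side TLS context so it will only offer
# forward-secret ECDHE key exchange with AES-GCM for TLS 1.2,
# refusing to negotiate anything outside that set.
ctx = ssl.create_default_context()
ctx.set_ciphers("ECDHE+AESGCM")

for c in ctx.get_ciphers():
    print(c["name"])
```

A server operator can apply the same `set_ciphers` call on the accepting side; the negotiated suite is always the intersection of what both ends still allow.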

Shame on the rest of the countries around the world for letting the USA control and abuse the internet and all relevant technologies. Every major chip, OS and piece of software is created in the USA. If people elsewhere lack the brains and innovation of the USA, they should accept the consequences. Of course, I'm part of the dumb-ass rest of the world.

As is usually the case with an article in the mainstream media, the most interesting part is what isn't in it. If much Internet traffic is vulnerable to the NSA (and to the UK's GCHQ), doesn't that imply that much Internet traffic is also vulnerable to other governments? Such as, oh, say, China and Russia?

It always seemed likely to me that governments can generate fake trusted certs for browser TLS traffic and then man in the middle the traffic, but what are the likely modes of attack otherwise? I don't really see what they are from this article - do they have a database of keys they have acquired nefariously?

If you need me, I'll be off changing every password that I've ever stored in LastPass (incidentally enough, I just realized that their Corporate HQ is just outside Washington, D.C.).

So, NSA has solved P vs NP and they're just not telling us?

What they have is a bunch of telepaths in tanks that can see dimly into the future and they recover the keys.

The NSA promises to make our country stronger, then they purposefully weaken it. Then they name the programs after battles in the Civil War.

Where are the original documents (primary sources)?

I am not really concerned about this encryption business...but have they managed to solve P=NP in the process of cracking crypto algos??

I wonder if RHEL and Ubuntu distros have NSA/FBI root kit backdoors in their kernel binaries and/or subscription services.

You think no-one's tried recreating various distros' binaries from their published source, to check they're the same? E.g. Jos van den Oever did that for Debian, Fedora, and OpenSUSE here[1].

Which isn't to say that backdoors inserted into the binary that aren't in the published source are impossible, only that they need something more subtle than the crude/easily-detectable 'merge backdoor, compile, ship'. Something like a Ken Thompson 'Trusting Trust'[2]-style attack. (Though there are ways of at least having a good chance of detecting even those - see [3]).

(More likely, IMHO, are just deliberately-introduced, plausibly-deniable bugs in the source - think [4]. Yeah, they might be found & reported by an outsider reviewing the source, in which case you thank them, fix it, and introduce another couple somewhere else next week).

[1] http://blogs.kde.org/2013/06/19/really-source-code-software

[2] http://cm.bell-labs.com/who/ken/trust.html

[3] http://www.dwheeler.com/trusting-trust/

[4] http://underhanded.xcott.com/
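The core of the check in [1] can be sketched in a few lines: rebuild a package from its published source, then compare digests against the shipped binary. A minimal streaming-hash helper (the package paths in the comment are hypothetical stand-ins, not real file names):

```python
import hashlib
import tempfile

def sha256_file(path):
    # Stream the file through SHA-256 in chunks, so arbitrarily large
    # binaries can be hashed without loading them into memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# The reproducibility check, in miniature: a shipped binary should hash
# identically to one rebuilt locally from the published source, e.g.
#   assert sha256_file("shipped/pkg.deb") == sha256_file("rebuilt/pkg.deb")
# Demo with two byte-identical "builds":
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x7fELF pretend-binary contents")
    demo_path = f.name
print(sha256_file(demo_path))
```

The catch, as the parent notes, is that bit-identical rebuilds require a deterministic toolchain (no embedded timestamps, stable link order, same compiler), which is exactly what makes this check hard in practice and a Trusting Trust attack hard to rule out.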

> In effect, facing the N.S.A.’s relentless advance, [Lavabit] surrendered

I disagree with this characterization. Surrendering to the NSA would be Google/Facebook/Microsoft's approach of unconditional cooperation. Lavabit's refusal to work with the NSA -- even though apparently the only alternative was shutting down their business or going to jail -- is more along the lines of a scorched earth retreat (destroying your own stuff when you can't hold the line).

It would be so damaging for Intel or AMD if a credible leak revealed they had backdoors built in, is it this really conceivable?

> The N.S.A. hacked into target computers to snare messages before they were encrypted.

I wonder which computer viruses belong to the NSA.

Windows, Mac OS, Android, iOS, Symbian, and any Linux distribution you haven't culled together and compiled yourself.

> ...any Linux distribution you haven't culled together and compiled yourself.

And maybe even ones you have compiled yourself "from scratch":


Knowing what kind of encryption the NSA uses internally would tell you a lot about what is compromised and what's still secure.

Every publicly known cipher is compromised then? The ciphers that the NSA primarily uses (internally) are classified.

Wouldn't it be logical for them to use ciphers that they do not yet know how to compromise?

I am saddened by how out of control this is.

So the Feds mandate data security, e.g. HIPAA, and then actively subvert our ability to achieve that security.

If we are going to vilify encryption, then we should just stop teaching math. That's all encryption is.

That is a really weird conclusion to draw from this article. I don't know who you think is "vilifying" encryption as an application of mathematics.

Wow .... so SSH is broken? Wow ....

Shouldn't this result in a drastic devaluation of crypto currencies like bitcoin?

All the more reason for choosing Chinese own brand.

Best to only use Open Source encryption software.

Search this thread for "Theo de Raadt" and read the link...

yeah, cuz there's no way the NSA could contribute code to that too.

It might be time for the community to do some thorough code audits on stuff like this... :(

"The NSA is just doing its job."

In a sense, yes. In fact, it is good that they put the effort into breaking these systems, and good that Snowden let us know about it. Now that we know the vulnerabilities exist, we can go about fixing them.

I want to take a step away from the personal privacy violations here and approach this from an angle that (unfortunately) would motivate those with money to lobby against it: your business secrets are out there being collected and reviewed by an organization composed of the smartest and most secretive people in our country.

There really should be no doubt at all that there is corporate espionage and insider trading going on. On one hand, the NSA could win brownie points by giving a US-based multinational a helpful heads-up when its overseas factory might be planning to strike, or by passing along its foreign competitors' private dealings, and so on.

But you know it won't stop with screwing around with overseas business. If they are not already, you can bet that internal insider information is going to be traded and sold. You can't trust a rogue, so as long as it is not dismantled they are indirectly if not directly a hostile threat to your ability to conduct business.

I've taken a look at half the article so far and still can't find a single specific example of a "backdoor".

The entire thing seems hand-wavy.
