How is NSA breaking so much crypto? (freedom-to-tinker.com)
1005 points by sohkamyung on Oct 15, 2015 | 246 comments

"Since weak use of Diffie-Hellman is widespread in standards and implementations, it will be many years before the problems go away, even given existing security recommendations and our new findings. In the meantime, other large governments potentially can implement similar attacks, if they haven’t already. "

Can someone explain to me why this can't be fixed overnight? I'm no crypto expert, but

" If a client and server are speaking Diffie-Hellman, they first need to agree on a large prime number with a particular form. "

why can't you just switch the large prime number and then continue on sending encrypted data?

It turns out that verifying whether a given prime is appropriate for Diffie-Hellman is rather slow. This means that, if you accept a prime supplied by someone else (e.g. whoever you're communicating with), and you don't carefully verify the prime, then you're at risk of ending up with a maliciously chosen weak group and/or generator.

For some protocols, this is fine. For other protocols, it's a problem. IIRC one variant of the triple-handshake attack on TLS involved tricking the client into using a weak group.

From memory (too lazy to look up all the details right now), the issue is that the order of the multiplicative group mod p is p-1. If p is a prime greater than 3 (which it is in standard Diffie-Hellman), then p-1 is certainly not prime, which means that the multiplicative group will have subgroups of varying sizes. Some subgroups might be small. If one party sends a public share in a small subgroup, then bad things can happen.
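The small-subgroup danger described above can be sketched in a few lines. The numbers here are toy values purely for illustration (a real p is 2048 bits), but the principle is the same: since p is prime, p - 1 is even, so -1 mod p always has order 2, and a malicious peer who sends it as their "public share" confines the shared secret to just two values:

```python
import secrets

# Small-subgroup confinement, with toy numbers (real p is 2048 bits).
# -1 mod p has order 2, so a malicious "public share" of p - 1 forces
# the shared secret into a 2-element subgroup, leaking b mod 2.
p = 1019                            # prime; group order is p - 1 = 1018
evil_share = p - 1                  # i.e. -1 mod p, an element of order 2
b = secrets.randbelow(p - 2) + 1    # victim's Diffie-Hellman secret
shared = pow(evil_share, b, p)      # victim computes share^b mod p
assert shared in (1, p - 1)         # only two possible shared secrets
```

This is why implementations are supposed to validate received shares (rejecting 0, 1, and p - 1 at minimum), and why unvalidated parameters from an untrusted peer are risky.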

Unfortunately, validating the DH parameters is too slow to do with each handshake. So either you carefully design the protocol to avoid this problem (which is certainly doable in many cases), you use a well-known group (which should be fine as long as the group is large enough), or you use something like ECDH. ECDH is nice because the index calculus attacks don't work and you can have a nice group structure.

Aside: While it's possible for an EC group to have a prime order, for various tricky reasons, the better modern EC curves deliberately have an order that's h*p for some small h (h is called the "cofactor" and h=4 or h=8 are common) and a large prime p. This gives some nifty benefits, but it requires a bit of care under some circumstances.

Well, the lazy way to do this is for everyone to use the same prime, but to pick a safe prime so large it is out of reach of the NSA.

Isn't the whole promise of cryptography predicated on the idea that you can impose exponential effort while merely incurring polynomial effort?

Validating EC parameters is even harder. Accepting arbitrary curves is a bad practice. The draft of TLS 1.3 uses named curves and named primes for both DH and ECDH. And it starts with 2048 bits, because the possibility of the NSA cracking 1024-bit primes was mentioned in the Logjam paper.

See https://tools.ietf.org/html/draft-ietf-tls-tls13-09#page-49

There is an easy way to solve this at the protocol level: maintain a hash of everything that was said and sign it during authentication. Any downgrade attack will show up and cause the wrong hash to be signed.

As I understand it, this fix has already been proposed for TLS by the developers of the triple-handshake attack, but it may take some time to be implemented and may never be implemented everywhere.
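The transcript-hash idea above can be sketched simply: both sides feed every handshake message into a running hash and sign the digest, so any message a man-in-the-middle altered or downgraded produces a mismatched signature. (This is a simplified illustration, not the actual TLS construction; the message strings are made up.)

```python
import hashlib

# Sketch of transcript hashing: hash every handshake message in order,
# then sign the digest. A downgraded transcript hashes differently.
def transcript_hash(messages):
    h = hashlib.sha256()
    for m in messages:
        h.update(m)
    return h.hexdigest()

client_view = [b"ClientHello: DHE-2048", b"ServerHello: DHE-2048"]
tampered    = [b"ClientHello: DHE-2048", b"ServerHello: DHE-512"]  # downgrade
assert transcript_hash(client_view) != transcript_hash(tampered)
```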


I'm not quite sure why this Internet-Draft expired without a replacement or if this work is still continuing somewhere in the TLS WG.

> I'm not quite sure why this Internet-Draft expired without a replacement

Oh, because it was issued as an RFC!


If the entity you are communicating with is out to subvert the encryption you are using with them, couldn't they just retransmit everything in your communication in plaintext over other channels?

What you are saying seems to only make sense for MITM attacks?

That seems right intuitively, but in existing TLS, there is a way that this can also be used to attack your communications with a third-party site. It is described in papers that were published at


although that site seems to be down right now.

While people can agree that this is ultimately a flaw in the design of the TLS protocol, in some contexts, it's still a reason that it matters whether your connection with an arbitrary site is using a safe Diffie-Hellman group or not.

Because there is no magic switch to deploy software on millions of servers?

Getting rid of old software is hard: people don't know, there isn't enough incentive to upgrade, and you need to support legacy clients, so you still keep old crypto support (which now means that you can attack clients that support stronger crypto through various downgrade/fallback attacks), etc.

I understand not every piece of software can be updated, but when something like Heartbleed was released (correct me if I'm wrong here) millions of machines were updated.

If I read this article correctly, if we could update millions of computers in a short period of time with a new prime number, that would be enough to solve this problem in the short term. Long term, all we need to do is change the prime number every X months, where X would require billions (as opposed to hundreds of millions) of dollars to solve.

That was one specific piece of software, the overwhelming majority of deployments of which were on machines that could be updated easily. Weak DH parameters for TLS are deployed in tens of thousands of middleboxes that are not easily updated.

That, and you could be attacked by anyone easily. Most ordinary people don't have the NSA in their threat model, compared to the Internet at large.

Thanks for that explanation; I had thought similar to the parent until I read this and it just "clicked".

Well, that sounds plausible but hard to prove. And I don't know what you mean by "middle boxes", especially ones that would be involved with DH. Most boxes other than the endpoint and client are only doing TCP. And if I'm wrong I desperately want to know!

You're right, except that every "middle box" has both endpoints and clients. HTTPS, FTP, SSH, IRC, all sorts of databases, caches, queue servers, task runners, announce servers, everything. DH is used any time you're leaving your box.

Middlebox = proxy.

ok that makes sense ... thank you

> but millions of machines were updated.

Millions more weren't. Shitty routers (did you know WEMO devices still run an old Linux 2.6 series kernel? How about Cisco firewall appliances? Yup). Home automation systems. Boxes people have half forgotten about. Systems running software from vendors who won't support you if you run anything more than half a decade old.

There are still millions of machines affected by Heartbleed. And Heartbleed was different: it was a single piece of software and only limited versions, so if you hadn't updated OpenSSL for about 2 years there was a good chance you weren't affected in the first place.

There are probably thousands of various DH implementations that are utter crap; it's not like you can release a single advisory and make big news about it.

Note that the findings referenced by your first link are total B.S.: http://blog.erratasec.com/2015/04/no-75-are-not-vulnerable-t...

I think that may be the link I wanted, actually, but I took what I found after a quick search. Anyway, Heartbleed wasn't quite fixed overnight, or even in a week or more.

You can, but there's a cubic fuckload of software deployed using the old parameters, and a whole lot of it doesn't know how to update itself.

Beyond that, generating prime numbers of 2048 bits or more takes a non-trivial amount of time, sometimes close to a minute. This isn't feasible for servers to do on launch, and might not even be feasible to generate during install, which brings us back to predistributed parameters.

I think "close to a minute" is far too optimistic for the time required :-( -- note that the prime for DH is supposed to be a safe prime.
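For anyone wondering what "safe prime" means here: it's a prime p where (p - 1) / 2 is also prime, which guarantees the group has a large prime-order subgroup. A minimal sketch of the check, using a probabilistic Miller-Rabin primality test (this is an illustration, not the exact procedure OpenSSL uses):

```python
import random

# A "safe prime" p is one where (p - 1) / 2 is also prime.
# Probabilistic Miller-Rabin test; fine for illustration.
def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def is_safe_prime(p):
    return is_probable_prime(p) and is_probable_prime((p - 1) // 2)

assert is_safe_prime(23)        # (23 - 1) / 2 = 11 is prime
assert not is_safe_prime(29)    # (29 - 1) / 2 = 14 is not prime
```

Generation is slow precisely because candidates must pass this double test: random primes p rarely have (p - 1) / 2 prime as well, so most candidates are rejected.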


You can contrast

  openssl dhparam -5 2048
which generates 2048-bit Diffie-Hellman parameters, to

  openssl genrsa 4096
which generates a 4096-bit RSA key (containing two 2048-bit primes) and note that the second is dramatically faster than the first. On-the-fly DH parameter generation is really slow.

However, generating 1024-bit parameters is probably fast enough to do on launch or install. Under the authors' estimates this might be fairly safe today because the adversary will have to spend many millions of dollars to attack your individual service (and you could change the parameters once a day or something if you wanted). But I think the authors agreed that large predistributed parameters make a better tradeoff for most cases.

There is an RFC about to be issued listing such parameters


Edit: I possibly shouldn't refer to "the authors" in the third person here, because you are the lead author.

Further edit: "Close to a minute" is actually not a bad estimate, but the variance is very high! So it can easily be considerably worse.

    hobbes@namagiri:~$ time openssl dhparam -5 2048
    Generating DH parameters, 2048 bit long safe prime, generator 5
    This is going to take a long time
    -----END DH PARAMETERS-----
    real	0m37.073s
    user	0m33.464s
    sys	0m2.924s
Edit: second run:

    real	0m35.362s
    user	0m31.992s
    sys	0m2.880s

Would somebody post the steps I should use to make my own OpenSSH sshd(s) use non-standard DH parameters? Do I need to do anything to my clients?

Bit late to the party here, but this is why I wrote https://2ton.com.au/dhtool/ ... I have 48 CPU cores still dedicated to constantly generating safe primes and generators (have amassed a huge number of 2k, 3k, 4k and 8k DH parameters if anyone's interested)

Any of the sizes' most recent 128 parameters can be downloaded in SSH moduli format, e.g. https://2ton.com.au/dhparam/4096/ssh

If you wanted, you could start a small business selling CDs with those DH parameters. Be of public service and make some money at the same time.

Very interesting. Any chance you'll be attending the Ruxcon security conference in Melbourne?

The EFF addressed this today. For SSH, they suggested this guide.


Indeed, the guide recommends generating new contents for the /etc/ssh/moduli file (IFF you keep `diffie-hellman-group-exchange-sha256` enabled).

Apparently `/etc/ssh/moduli` is where I should store newly generated DH parameters.

See the manpage moduli(5).

Hm, the format of the existing file does not match the format of what `openssl dhparam` generated. I'll research...

The dhparam command has mostly been used with Apache, as far as I know; ssh provides its own commands as part of ssh-keygen, which are described in the man page you just mentioned.

However, the paper authors say

"If you use SSH, you should upgrade both your server and client installations to the most recent version of OpenSSH, which prefers Elliptic-Curve Diffie-Hellman Key Exchange."

That might ultimately be a simpler and safer course unless you have to deal with old clients that you don't have the ability to upgrade.

I ran that command:

    Generating DH parameters, 2048 bit long safe prime, generator 5
    This is going to take a long time
Haha! It also made my CPU sweat nicely. I Ctrl+C'd after it ran for nearly 10 minutes with no sign of stopping though :( I see what you mean when you guys say it isn't feasible on every install of everything and why these are reused. I do think software maintainers should make an effort to do this regularly when new patches are released.

This actually opens up a new question in my mind: how does Open Source Software manage to keep these keys secret? Off to google...

The DH parameters created by "dhparam" don't contain any secrets; they're entirely public.

The secrets used in Diffie-Hellman are chosen by the participants in the key exchange at the moment that it's used.


Here, the dhparam command is creating g and p (well, you're supplying g as a command-line argument, and the command is choosing p); those are the public parameters which can be published anywhere, can be given to anyone, and can be re-used by multiple sites and services (although the last case makes life easier for attackers if the p value is too small, as the researchers indicate it is for some popular Internet standards and software configurations).

When a Diffie-Hellman key exchange happens, the two sites agree (not necessarily in a secret way, indeed normally not in a secret way) on what g and p to use, and then one site secretly chooses a, the other side secretly chooses b, and they do the math that results in both sides knowing the same combined secret without other parties being able to determine it. Only a, b, and the resulting combined secret g^ab mod p are confidential; g and p aren't.

Diffie-Hellman can also provide forward secrecy because you can deliberately forget what values of a and b you used on a particular occasion, and then you can't reconstruct them (unless you can compute discrete logarithms, which is supposed to be difficult). That's not true for key exchange using RSA for confidentiality, where, if you still have the private key, you can still read your messages that were sent to you in the past using that private key.

Ciphersuites in TLS that use Diffie-Hellman in an ephemeral way ("DHE") will still use RSA, but for identifying the server (and optionally the client) and confirming that no man-in-the-middle attack has happened. So, they only use RSA for signatures, not for confidentiality, and they still get forward secrecy if they forget the DH private values associated with individual sessions. (Some servers use the same b value for multiple incoming connections, which reduces the degree of forward secrecy they get: if someone hacked them or seized their server while it still contained a particular b value, that person could decrypt previously recorded TLS sessions with that server that used the same b value.)
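The exchange described above fits in a few lines. This sketch uses classic textbook-sized numbers (p = 23, g = 5 are illustration-only; real deployments use a 2048-bit p):

```python
import secrets

# Toy Diffie-Hellman exchange. p and g are public; a and b are secret.
p, g = 23, 5                        # public parameters, fine to share
a = secrets.randbelow(p - 2) + 1    # Alice's secret
b = secrets.randbelow(p - 2) + 1    # Bob's secret
A = pow(g, a, p)                    # Alice sends this in the clear
B = pow(g, b, p)                    # Bob sends this in the clear
alice_secret = pow(B, a, p)         # (g^b)^a mod p
bob_secret = pow(A, b, p)           # (g^a)^b mod p
assert alice_secret == bob_secret   # both sides derive g^(a*b) mod p
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem; once both sides delete a and b, even they can't reconstruct the session secret, which is where forward secrecy comes from.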

... thank you for trying to explain. I am still too thick to get it on first read. I have saved this in Evernote and will keep coming back to it after researching terms etc., until I understand in full.

I'm sure you'll want to keep reading about this, but let me just set out:

Diffie-Hellman involves four numbers, which are called g, p, a, and b (and which are used to calculate other numbers).

g and p are public, and are often used "for the long term" (including sharing g and p values between different Internet sites and services). The researchers found that the problem is that a lot of sites use the same p values, and the ones they use are too small, so that there is a single calculation that a government can do which allows them to break DH that uses those particular p values. Although this calculation is phenomenally expensive, it seems likely that NSA did it several years ago.

a and b are private and must remain secret. Commonly a different a and b are chosen each time Diffie-Hellman is used (like for each new connection or session with an Internet service).

Diffie-Hellman provides a property called forward secrecy because it allows two ends of a connection to derive a session key (which gets calculated using information derived from g, p, a, and b, and gets then used as a key for some other cryptosystem, normally some form of AES today, but in theory it could be anything) but then later "forget" what the session key was and how it was derived. Unlike other ways of doing key exchange, when the parties choose to "forget" their session key and associated information this way, they no longer have a way to recover or reconstruct it!

This is useful because, for example, if someone hacks the computer of someone who was using protocols with ephemeral DH, the computer likely won't contain information that could be used to reconstruct the session key and decrypt old communications. (That's assuming that the software scrubs key material from memory when it's no longer needed... which might not always be true.)

My totally unscientific estimates tend to be about 1 minute for 2048-bit primes and 20 minutes for 4096-bit primes.

So maybe we can do on-the-fly 2048-bit DH primes for securing access to a single-line dial-up BBS! :-)

Or worse, it knows how to update itself but those ways don't work any more.

Change is a bitch. As are proprietary and one-off updating schemes.

Do we need to upgrade that software to use an eliptic-curve fuckload?

Specific protocols might even specify that you have to use a particular prime.

Then the protocol would have to be rewritten and the change would not be backwards compatible.

FWIW, SSLLabs has recommended either not using 1024-bit DH, or using a non-standard prime for a while now.

Isn't the much simpler explanation that, for particular servers, they either have permission (e.g. companies have agreed to hand over the encryption keys or allow monitoring of the data after it is unencrypted on arrival) or some other means (man-in-the-middle attacks, server backdoors, hacking vulnerable software) to bypass the encryption entirely without you knowing? For instance, I find the idea that they would direct huge amounts of computing power to crack individual keys implausible given the previous example methods are so much easier.

Every time you hack a server or strongarm an engineer you risk detection. Breaking a widely used key would be mostly invisible and seems to better fit the description of capabilities described in their documents.

When you introduce a cascading vulnerability at the most fundamental level, you have a key to everything. Society, humanity even, would be best off scuttling any fundamental technology that has not been thoroughly and fully vetted by multiple diverse and counter-interested groups of individuals. Remember, the most fundamental protocols and technologies were built on and with government funding, which directly injects the interests of long-view organizations.

Excuse my ignorance in this -- is there any verifiable evidence that such a fundamental cascading vulnerability exists, and was the result of an NSA action, or are you speaking in hypotheticals?

Or you could simply lie and tell people it's been thoroughly and fully vetted by multiple diverse and counter-interested groups of individuals. Those claims won't undergo the same scrutiny. They can't - otherwise it'd be vetting all the way down.

It is surely easier to target and then hack specific engineers at VPN providers and certificate authorities, then steal their private keys, and bingo: a great little lookup key database.

GCHQ and Belgacom spring to mind:


Let's have a show of hands. Anyone got unencrypted private keys on their machine or on their server? How hard is it to steal those?



Perhaps they do both.


> they either have permission (e.g. companies have agreed to hand over the encryption keys or allow monitoring of the data after it is unencrypted on arrival)

No, because nobody would do that and the NSA has no legal authority with which to force that. Companies wouldn't agree to this because they have everything to lose and nothing to gain. It's not like the NSA could even offer them favors in exchange, as the entire thing is ultra-classified. Also there's exactly zero chance this would ever be able to be kept secret. Companies can't even keep their products secret for the few months it takes from prototypes to launch, there's no way in hell the NSA would trust them to keep this secret for years. The first sys admin forced to do this would instantly talk about it.

> some other means (man-in-the-middle attacks, server backdoors, hacking vulnerable software) to bypass the encryption entirely

The article talks about this. That's definitely possible, but it's much more targeted and doesn't fit the scale that the Snowden leaks suggested the NSA was achieving.

> No, because nobody would do that and the NSA has no legal authority with which to force that. Companies wouldn't agree to this because they have everything to lose and nothing to gain.

Not claiming I'm well read up on this but wasn't a big part of the leaks that companies were cooperating with the NSA in secret?


"Gellman wanted to be the first to expose a top-secret NSA program called Prism. Snowden’s files indicated that some of the biggest companies on the web had granted the NSA and FBI direct access to their servers, giving the agencies the ability to grab a person’s audio, video, photos, emails, and documents."

The telcos (AT&T, Verizon, etc...) were cooperating, but that also fell under existing wiretap laws-ish and was known-ish. There had been rumors about it for years, there have been photos of mysterious government vans of equipment showing up at sites, special locked rooms, etc... Those companies also haven't denied it.

However there is none of that for any of the other companies listed under Prism. Later leaks from Snowden suggest that the companies listed in Prism did not know they were part of Prism (places where inter-dc traffic was being spliced, that sort of thing).

Also just practically speaking with how fast companies rise & fall in this area doing this on a per-company basis wouldn't scale. Like when would you expect the NSA to approach, say, WhatsApp? Or Snapchat?

No, the prism leaks show that the NSA had access to data. It is often implied by various commentators that the access was facilitated by the companies, but there's no evidence to support that.

The evidence about companies cooperating with NSA on mass surveillance came out in the first decade of the 2000s, e.g., http://www.salon.com/2006/06/21/att_nsa/ and https://en.wikipedia.org/wiki/Room_641A

Not only that, these companies (AT&T) for example get paid to provide the access.

> No, because nobody would do that and the NSA has no legal authority with which to force that. Companies wouldn't agree to this because they have everything to lose and nothing to gain.

You might say the same thing about every social engineering victim ever. Nothing to gain, so why do they help? Someone asked for help / seemed like they had the authority / they were afraid for no reason.

Additionally, from my experience, monitoring after decryption isn't too hard to pull off. It takes a lot of time, money, and expertise to secure the inside and egress of the network and there is very little pressure on most companies to exert themselves to defend at those layers. Further, most networks aren't properly segmented and the attacks that tend to gain access to one machine (like a windows or mac laptop) on a network can be pivoted to access more restricted places. Given how open companies tend to be (practicing zero opsec), it's even very easy to know who to attack and have reasonable estimations about their machine's worth inside a network.

>> "No, because nobody would do that..."

This is literally the most breathtakingly naive comment I have ever read on HN.

It's hyperbole. He doesn't, of course, mean that "nobody" would do it but that it wouldn't work because many would refuse.

The ECI leaks mention that the FBI "compels" U.S. firms to "SIGINT-enable" their stuff for FISA purposes. So, they must have some authority over at least some U.S. companies. If it's classified, they can also use secrecy orders (and have).

Then, they pay them too. Many companies, esp govt contractors, were more than happy to help for tens of millions.

I agree. The NSA is probably mostly investing in the five-dollar wrench.


The five-dollar wrench doesn't allow for passive collection. You kinda know when you get hit with a wrench.

I was thinking more about NSLs and threats to companies to force them to hand over their encryption keys.

Why spend $5 on a wrench when you can make a phone call to the FBI and compromise any server on US soil?

These cryptanalytic capabilities are mostly relevant for servers hosted outside the US, I'd wager.

For servers outside the US, consider the case that happened in Greece after the Olympic Games: an example of something much, much bigger, but conceptually still more an immense "wrench" than a "cryptanalytic" approach:


I always considered the "crypto nerd's imagination" panel in the comic to be that the NSA have computational and/or cryptanalysis breakthroughs and super computers so far ahead of anyone else that they can crack almost anything. My guess is that in reality things are much plainer and they can rely on cooperation and server hacks.

They very well may. For all we know, some of the Snowden revelations are distractions from some breakthrough, designed to protect the real secrets.

The scale of the NSA's capabilities may have become an issue -- adversaries have figured out that the US knows too much, as people get blown up when they pick up the phone.

Some advice from the authors on how to properly deploy Diffie-Hellman:


There's also https://cipherli.st/ which, imho, is better. What's really awful is that the sets are similar yet almost disjoint, only agreeing on 4 cipher combos.

Personally I use the shorter "strong" config off cipherli.st

Our cipher recommendations on the Weak DH site come directly from Mozilla. See https://wiki.mozilla.org/Security/Server_Side_TLS#Recommende...

When I checked, I got 16 in common.

weakdh.org recommended 43, and cipherli.st 16. The cipherli.st list was a subset of the weakdh.org list.

"For the nerds in the audience, here’s what’s wrong: If a client and server are speaking Diffie-Hellman, they first need to agree on a large prime number with a particular form. [...] an adversary can perform a single enormous computation to “crack” a particular prime [...]"

Can someone explain to me what the authors mean by "cracking" a prime? Is the difficulty of this related to the difficulty factoring a composite number? The language used is annoyingly imprecise.

Edit: Question was already asked by smegel, and has some useful answers.

It's called index calculus[1]. Basically, it allows you to do a (huge) amount of work that can be factored out from breaking specific DH exchanges, as long as the same prime is used. Since finding good DH primes is far more expensive than coming up with good primes for RSA, these primes tend to be reused on a massive scale.

[1]: https://en.wikipedia.org/wiki/Index_calculus_algorithm

Traditionally, index calculus efforts for record discrete log calculations balance the "precomputation" step and the individual log calculation much more closely than in this work. This is more efficient if you only intend to perform one discrete log. One of the contributions here is that, because of the massive reuse of primes, it would be more useful in a practical attack to shift much of the work into the steps solely dependent on the primes.

This system sounds incredibly insecure to me. What are the benefits of this over just plain old RSA with everyone using their own keys?

Forward secrecy in the event of a compromised key.

I would imagine it means creating a rainbow table for a particular set of prime numbers. From the article it sounds like many applications use a small handful of prime numbers. A DH key exchange uses two prime numbers (shared publicly) and two secrets (kept, secret...).

In theory the NSA could enumerate ginormous rainbow tables for a large set of secret keys.

Perhaps someone else on HN can provide a more detailed description of the suspected attack?

Edit: the responses to smegel's question below seem much more thorough and accurate than mine!

The rainbow table and the attack described here both do a lot of precomputation that can then be re-used for individual attack instances, but the rainbow table is a specific structure that only applies to reversing hashes -- a different problem from computing discrete logarithms.

I think the kind of precomputation you envision here is still too hard to do because the number of possible DH secrets is still much too big to examine all of them by precomputation!

Makes sense. With my primitive understanding I assumed this was why it was so computationally expensive still. But the descriptions by others make much more sense.

Can someone explain what "breaking a prime" means? What is the output after your year of computation?

Assuming they are using an index-calculus attack, the output is the discrete logarithms of a set of small primes. (But who knows, they may have much better attacks.)

This is how that attack works. You generate a set of small primes called the base primes:

    2, 3, 5, 7, 11, ...
Then you generate powers of p which all factor in only the base primes. This creates a set of linear equations, which can be solved to find the discrete logarithm of all base primes.

This was all pre-computation. After this, to find the discrete logarithm of x, you generate powers of x until it factors in the set of base primes. Once that is found, calculating the logarithm is easy.

This requires a trade-off: the larger the set of base primes, the longer the first step takes, but the faster the second step is.
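The two stages described above can be sketched with toy numbers. Everything here is tiny and purely illustrative (real attacks use ~1024-bit primes and enormous factor bases), and the expensive relation-collection + linear-algebra precomputation is stood in for by a brute-force log table; but the second stage, where one randomizes the target until it factors over the base primes, is the real mechanism:

```python
import random

# Toy sketch of index calculus's two stages.
p = 1019                            # small prime; p - 1 = 2 * 509

# Find a generator g of the multiplicative group mod p (brute force).
for g in range(2, p):
    if len({pow(g, k, p) for k in range(p - 1)}) == p - 1:
        break

# Stage 1 stand-in: in a real attack, the discrete logs of the base
# primes come from relation collection and linear algebra (the huge
# precomputation); here we just brute-force a full log table.
dlog = {pow(g, k, p): k for k in range(p - 1)}
base = [2, 3, 5, 7, 11, 13]

def smooth_exponents(n):
    """Exponent vector of n over `base`, or None if n isn't smooth."""
    exps = []
    for q in base:
        e = 0
        while n % q == 0:
            n //= q
            e += 1
        exps.append(e)
    return exps if n == 1 else None

# Stage 2: to find log(x), randomize until x * g^k mod p is smooth.
x = pow(g, 777, p)                  # target with a known answer
while True:
    k = random.randrange(p - 1)
    exps = smooth_exponents(x * pow(g, k, p) % p)
    if exps is not None:
        break

# x * g^k = prod(q^e)  =>  log(x) = sum(e * log(q)) - k  (mod p - 1)
log_x = (sum(e * dlog[q] for e, q in zip(exps, base)) - k) % (p - 1)
assert pow(g, log_x, p) == x
```

Note how stage 2 is cheap once the base-prime logs exist, and how the precomputation depends only on p, not on any particular key exchange: that's exactly why reusing one prime across millions of servers makes the attack economical.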

To clarify, I made an error: you generate powers of g (or just any group element whose discrete log you know), not p.

The output is a giant table of A = g^x mod p where, given A, you can look up x.

There's more to it than that, though - a table that could let you directly look up the discrete log of an arbitrary A would require on the order of 2^1024 entries (for a 1024 bit group).

It's a table specifically of the discrete logs of a large (but tractable) number of small primes. Those discrete logs can be used as the input to a separate stage that can calculate the discrete log of any value in the field.

The paper mentioned in the article shows how a few complex steps reduce to very large polynomial time, most of which can be precomputed given "45 million computer-years", the rest computed on a per-session basis very quickly.

Sounds like my PMATH assignment for this week :-)

See also Martin Hellman's oral history on trap doors: https://conservancy.umn.edu/bitstream/handle/11299/107353/oh...

Apparently some Cisco products might even be using 768-bit DH as default for IPsec!

From http://www.cisco.com/en/US/docs/ios-xml/ios/sec_conn_ikevpn/...:

<< Diffie-Hellman--A public-key cryptography protocol that allows two parties to establish a shared secret over an unsecure communications channel. Diffie-Hellman is used within IKE to establish session keys. It supports ==> 768-bit (the default) <==, 1024-bit, 1536-bit, 2048-bit, 3072-bit, and 4096-bit DH groups. It also supports a 2048-bit DH group with a 256-bit subgroup, and 256-bit and 384-bit elliptic curve DH (ECDH). Cisco recommends using 2048-bit or larger DH key exchange, or ECDH key exchange. >>

Malice or incompetence? (or crappy hardware that needs help to not be slow?)

The recommendation is correct so...

When I set up TLS for web or SMTP, there's an option to generate custom DH params. So basically one must generate new DH params for every installation to be safe against the attack presented in the article, is that correct?

That would work, but there's still a problem when the client can't verify that the parameters are safe, at least when using client certificates. See the references in section 9.7 of


As a result, that draft proposes using specific much larger parameters, on the basis of scaling considerations for the attacker. I think the authors of this paper endorse this solution.

(It's OK in general if you share parameters with someone else, it just means that an attacker who can do this precomputation for those parameters gets to attack both of your DH sessions as a result.)

Only if the parameters are too small (1024 bits or thereabouts), and if the source of the parameters isn't necessarily trustworthy (maybe because of lack of verification). 2048-bit parameters should be good enough.

Interesting paper (https://weakdh.org/imperfect-forward-secrecy.pdf) and lots of good references!

Inspired me to write a little tool to download all referenced pdfs from any given pdf: https://github.com/metachris/pdf-link-extractor

According to the estimated cost given for that machine (a few hundred million dollars) and the problem's nature, what they propose is very similar to TWIRL, a hypothetical machine that could factor 1024-bit integers to break RSA. That was the reason a lot of people stopped considering 1024-bit RSA secure and changed their keys to 2048 bits. The same should happen with DH now.

You can check whether a web site uses common Diffie-Hellman primes on ssllabs.com; see the "Uses common DH primes" entry in the "Protocol Details" section.

Also, the latest OpenSSH package warns against Diffie-Hellman SSH keys; now we know why.

This might not contribute much to the discussion, but I just want to add:

I for one welcome the coming arrival of ECDH over curve25519 in TLS everywhere.

(And I really hope that comes to pass.)

Question I don't know the answer to:

Is there any chance that ECC is vulnerable to similar issues with static, universally agreed-upon constants? In ECC these are curves: is it feasible to "pre-crack" a curve? Or is the space so huge in this case that it would take millions of years of compute time to do so?

Someone who knows ECC math better than I do would have to answer this one.

A big part of the whole point of ECC is that it resists index calculus.

Also, remember: this isn't a fatal flaw in all of Diffie Hellman. It's a variant of a well-known attack that makes it economical to attack 1024 bit DH, not merely plausible. So far as anyone knows, it does nothing to help attack 2048 bit DH.

So Curve25519 is pretty much rock solid unless (a) someone discovers genuinely new math or (b) we get a practical quantum computer with enough coherent qubits to actually run the ECC variant of Shor's algorithm.

Note the qualifications around (b). Just any QC would not do... it would have to (as far as I know) have a rather "wide" coherent qubit capacity among other characteristics.

So what... Curve25519 until 2050? 2100? Hard to say and new math is always a total wildcard, but it seems damn secure.

... and as you say DH and RSA are also great with >2048 bits. 4096 bits is what I typically use.

From a mathematical perspective, all of the mainstream curves, from the NIST p- curves to the Brainpool curves, are solid.

There are four reasons people use Curve25519: (1) it's faster in software than most implementations of the other curves; (2) it was designed to be mostly misuse-resistant, so you don't have to be as careful checking parameters (a very common vector for ECC curve attacks); (3) it's designed to be simple to make leak-proof implementations (unlike the NIST curves, which are hard to do in constant time); and (4) it has the Dan Bernstein brand.

But if we're talking about index calculus attacks, the NIST curves do fine there too.

One of the cryptographers that helped bring us Curve25519, Dr. Lange, is leading efforts to research post-quantum cryptography.


>>So Curve25519 is pretty much rock solid unless (a) someone discovers genuinely new math or

After the NSA news happened, Schneier said the math is safe, saying/implying you can trust Diffie-Hellman. And now it's not secure. So how can we be so sure of Curve25519?

There's nothing new about the math here. 1024 bit DH has been precarious for over a decade, at least since Tromer costed out an RSA-1024 factoring machine.

In other words, people a decade ago were also telling you to avoid DH-1024. What we're looking at today is a more efficient way of exploiting a bug we've known about for a long time.

You could never really trust, e.g. 32-bit DH. Context matters.

The security of 2048-bit DH versus 1024-bit DH isn't the difference between "one year" and "two years", it's the difference between "one year" and centuries.

Right. 2048-bit RSA/DH is fine, but EdDSA/ECDH is looking more attractive every day (so long as Bernstein or Langley provided the implementation).

Crap. So what are the immediate countermeasures? Switch to elliptic curves cryptography?

Don't use out-of-the-box Diffie Hellman params, I suppose. See, e.g., [1,2].

[1] https://weakdh.org/

[2] http://security.stackexchange.com/questions/42415/openvpn-dh...

Don't use out-of-the-box Diffie Hellman params

Wrong answer. Standard primes are fine; just avoid the small ones. If you try to select your own parameters you're probably going to pick wrong, either in the sense of breaking the crypto or making people wonder if you selected your parameters to hide a backdoor. The standard large primes (my preferred option is the "group #14" 2048-bit prime) were very obviously not selected maliciously, and there's no way anyone has done the precomputations necessary to index that group.

> If you try to select your own parameters you're probably going to pick wrong, either in the sense of breaking the crypto

Is there any known situation in which generating parameters with 'openssl dhparam' will give you broken/weak parameters?

> either in the sense of breaking the crypto or making people wonder if you selected your parameters to hide a backdoor

This is a concern if you're going to distribute your parameters with software, but it could make sense where a sysadmin controls both endpoints, such as a site-to-site VPN.

This is a newbie question, but how are the standard primes created? How is it ensured that they are not selected maliciously, etc?

(a link will suffice)

The standard prime groups have their highest and lowest 64 bits set, and as many as possible of the rest taken from the binary expansion of pi. The group #14 prime for example is

p = 2^2048 - 2^1984 - 1 + 2^64 * { [2^1918 pi] + 124476 }

The value 124476 is the smallest non-negative value which results in p and (p-1)/2 both being prime and 2 being a quadratic residue mod p; this ensures that the subgroup {2^0, 2^1, 2^2, 2^3, ...} generated by 2 has prime order (p-1)/2.

We assume that the binary expansion of Pi is not selected maliciously. ;-)

And as an amusing tidbit, here is the actual value in hex:

      FFFFFFFF FFFFFFFF C90FDAA2 2168C234 C4C6628B 80DC1CD1
      29024E08 8A67CC74 020BBEA6 3B139B22 514A0879 8E3404DD
      EF9519B3 CD3A431B 302B0A6D F25F1437 4FE1356D 6D51C245
      E485B576 625E7EC6 F44C42E9 A637ED6B 0BFF5CB6 F406B7ED
      EE386BFB 5A899FA5 AE9F2411 7C4B1FE6 49286651 ECE45B3D
      C2007CB8 A163BF05 98DA4836 1C55D39A 69163FA8 FD24CF5F
      83655D23 DCA3AD96 1C62F356 208552BB 9ED52907 7096966D
      670C354E 4ABC9804 F1746C08 CA18217C 32905E46 2E36CE3B
      E39E772C 180E8603 9B2783A2 EC07A28F B5C55DF0 6F4C52C9
      DE2BCBF6 95581718 3995497C EA956AE5 15D22618 98FA0510
      15728E5A 8AACAA68 FFFFFFFF FFFFFFFF
Source: https://tools.ietf.org/html/rfc3526#section-3
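These properties are easy to sanity-check directly. Here's a stdlib-only Python sketch (Miller-Rabin, so "probable prime" rather than a proof; the constant is the RFC 3526 group #14 value):

```python
# Sanity-check the group #14 prime from RFC 3526: p and (p-1)/2 should
# both be (probable) primes, and 2 should be a quadratic residue mod p.
import random

P_HEX = (
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
    "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
    "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
    "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
    "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
    "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
    "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
    "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
    "15728E5A8AACAA68FFFFFFFFFFFFFFFF"
)
p = int(P_HEX, 16)

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test (n >= 2)."""
    for s in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % s == 0:
            return n == s
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

q = (p - 1) // 2
assert p.bit_length() == 2048
assert is_probable_prime(p) and is_probable_prime(q)
# 2 is a QR mod p iff 2^((p-1)/2) == 1 (mod p); since (p-1)/2 is prime,
# the subgroup generated by 2 therefore has prime order (p-1)/2.
assert pow(2, q, p) == 1
print("group #14 checks out")
```

Takes well under a second, since each Miller-Rabin round is just a modular exponentiation.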

> We assume that the binary expansion of Pi is not selected maliciously. ;-)

Well, I suppose that we must assume that, as assuming otherwise is self-defeating: anyone running the simulation we're living in already has direct access to the plaintexts anyway.

That somebody maliciously caused the circle's circumference to diameter ratio to be insecure in this application is indeed not an issue.

However, a possible danger is that somebody maliciously suggested using π instead of e...

True, but Pi is pretty much tradition now... enough so that when Decorrelated Fast Cipher used e instead, it raised some eyebrows.

pretty good presentation on that by djb: https://projectbullrun.org/surveillance/2015/video-2015.html...

Are there reasons other than speed/resource-usage for not using 15-18? Or put another way do you prefer group #14 above all others or is #14 the lowest level of security you are comfortable with?

Speed is the main reason. Going bigger doesn't gain you anything; in some cases it may lose you something since slower arithmetic makes it possible to exploit lower-bandwidth side channel attacks.

> > Don't use out-of-the-box Diffie Hellman params

> Wrong answer. Standard primes are fine; just avoid the small ones.

Sure, that's what I meant insofar as the out of the box params are apparently too small in many apps/libs. The links I gave cover it pretty well, I think, no?

ECC just shifts the problem domain. Just change your curve/prime modulus often and don't use NSA-recommended parameters.

That's a terrible idea, trading an extremely unlikely weakness (a problematic 2048 bit standard prime) for an extremely likely weakness (a mistake in the code you probably won't even think to write to check your constantly changing parameters).

Do what Colin says upthread.

First, to clarify: what I meant by "your" was an entity like a standards community, not an individual, and by "often" I meant within a timeframe that makes bruteforcing parameters infeasible (which itself changes over time wrt the size of the parameters). Key rotation for example is just good practice.

Nothing that I said was wrong - nor does it contradict Colin's advice - and while simple systems are better, that's a tradeoff that should be made if you care about entities with lots of resources like e.g. the NSA. As history shows, bad parameters (or parameters that are feasibly bruteforced over a particular timeframe) are not an 'unlikely' weakness. Besides, being aware of potential vulnerabilities - especially those that you know pose real threats - and not addressing them is simply bad practice.

Have you checked out http://www.cryptomove.com? The thesis is precisely that encryption is a losing game of cat and mouse, so instead continuous concealment is the last line of defense.

More ciphertext to analyze, yay! - Every TLA cryptanalyst ever

I'd prefer Tahoe-LAFS

Don't be a target the NSA is interested in.

Not possible; they interpret their mandate as universal.

By stating your intent to not be noticed, you have drawn their eyes.

Why else would you want that if you aren't trying to hide something?

It's a pretty safe bet that everyone here - a forum called "hacker news" which discusses, among other things, the several aspects of online security - is already on a watch list. The idea is to not promote yourself even further in their attention.

What is the first non-interesting integer? If it's N, then N becomes interesting, as the first non-interesting integer.

I was so uninteresting I became interesting.

There aren't any new findings here. It's merely a rehash of the Weak DH attack (by the same researchers) that was made public in May of this year: https://weakdh.org/

Still, it's a good reminder that you should not be using 1024-bit Diffie-Hellman.

> It's merely a rehash of the Weak DH attack that was made public in May of this year

Just to clarify (because I was confused when I read your comment), the weak DH attack was made public by the same people who wrote this post and the academic paper attached to it. It looks like the post and the paper are part of the same "release".

Conflict of interest disclaimer: I was a grad student of Professor Halderman's several years ago.

Indeed. I wasn't implying that the blog post authors were copycats. I've updated my comment to hopefully avoid confusion.

To rehash is "to bring forth again in another form without significant alteration".

If I'm not mistaken, they gave a presentation in May and have now published a full technical paper.

Perhaps rehash is too harsh a word to apply to a paper that contains more details than given in the slides.

They published a technical report in May that is almost identical to the paper submitted to the conference.

No criticism of the authors is intended (to the contrary, their findings are amazing and very important). But readers need to know this isn't a new attack before they go off panicking.

Understood. I haven't been keeping up with Security News so I wasn't aware, until now, about the presentation they gave in May.

How much software has been updated to use stronger DH, either ECC or a 2048-bit prime field?

Is there an easy way to check if a VPN provider has updated?

The ASICs NSA built for breaking some common 1024 bit fields are probably breaking specific RSA keys now...

Run your private key through the second command here: http://etherhack.co.uk/asymmetric/docs/rsa_key_breakdown.htm...


Care to clarify why not?

Sorry, I thought this was a troll comment to get someone to upload their private key.

That was my first impression too. Why is that impression incorrect?

The "command" is a local command you can run, not a web service to upload keys to. So it's not an attempt to trick people into doing something dangerous (although it might not be a great idea to display private key material on your screen!).

But I'm not convinced that this command will answer the question; if you take the TLS analogy, you can have a client certificate with a 4096-bit RSA key but you can then use that to authenticate to a server with a 768-bit DH parameter! So these parameter sizes are independent.

Gotcha! Thank you :)

"man openssl"

I wonder what the effort to break a 2048-bit prime would be. I suspect it's heading into "dyson sphere powered ideal computer" territory, but I'd be curious to know what it would actually be.

The thing that breaks 2048 bit conventional multiplicative group Diffie Hellman is likely to break conventional multiplicative group Diffie Hellman altogether.

Any reason not to just settle on ECDH for new applications?

It's Complicated. All else equal, yes, you should use ECDH over a good curve, implemented carefully in a curve library you yourself did not write, which library has been carefully reviewed for flaws.

But if you don't have access to that code on every platform you might need to deploy on, it might be better to do DH-2048 than to use crappy curve code.

Basically, your best choice right now is Curve25519. If you can't get a trustworthy Curve25519, though, it might be tricky to pick between DH-2048 and {other curve software}.

I understand the reasoning for this recommendation, but as this paper shows there is also a danger in going along with what's popular, even if it's the best currently recommended practice. If you, as a reasonably well-versed engineer, can come up with a custom implementation, it's more likely that you'll be protected from mass surveillance. Even if your implementation has some weaknesses, you won't be caught up in a dragnet, assuming the eye of Sauron does not look right at you, i.e. that a government-level adversary isn't targeting you directly.

The opposite is true: if you implement ECDH yourself, you're likely to end up with software that thousands of different people will be able to break with their laptops. It's tricky to get right.

(1) You're saying use standardized fully-vetted methods.

(2) Parent is asking for something just slightly different enough that it escapes dragnets and automated attacks.

There is no question that (1) is much better than (2).

But in special circumstances, one could do both (1) and (2) in cascade operation (i.e., do one first, then the other). If the code for (1) and (2) run as separate processes with independently chosen keys, seeds, parameters, etc., then it doesn't matter how awful (2) is. It might help, it might not help, but it won't make the security worse than using (1) alone.

The obvious downsides are slowness, code complexity, non-standard API/interface, code maintenance headaches, and false sense of extra security (but not worse security).

You have to be careful which algorithm you run first though. If you use your crappy implementation first, it might compromise the security of the second round of encryption.


This was a great reverse engineering trick. A lot of "hackers" would make their tools super-secure by packing the binary and then protecting it with VMProtect. Because, that's way better than just using VMProtect by itself, right?

Except that you could run the VMProtected binary until just after the unpacking routine ended giving you the original binary in memory. Didn't have to understand VMProtect at all.

Thanks hackers!

The same thing isn't necessarily true with crypto, but the lesson is that thinking you're adding security by layering without knowing what you're doing might backfire.

Any handspun crypto will likely have trivial flaws. But it might escape the NSA's notice, in the same way that a hand-rolled CAPTCHA can sometimes prevent more spam than a better-designed but widely-deployed one?

Pretty much: Odds are that, if you roll your own, you’ll make mistakes which make your code trivial to break for a motivated attacker that is willing to put the effort into targeting you personally.

It’s also quite possible that the NSA/GCHQ/5-Eyes will notice that your communications are 'interesting, because different' and mark them to be stored in perpetuity in case they ever do decide that you’re of interest so that they can go back and break it all then.

So, how lucky are you feeling?

I don't think many people (let alone businesses) would rather introduce a risk of being exploited by a hacker than by a huge government agency that couldn't care less about their existence. And if you are in a position where the NSA does care to look at you closely, bad crypto is going to make their job easy.

Why is there not more hedging of bets by implementing both? That way if one is broken or improperly implemented, there's a bit of safety. Apart from being ugly is there a reason this doesn't work?

You're increasing your attack surface area. Do you use both, do you let the server choose, do you let the client choose? What if the client requests the lower-security crypto? What if what the client asked for is different from what you think the client asked for (MITM)?

I think the question meant literally both, always.

Yeah, set up one channel and then another inside that. Seems like hash functions could be used this way, too. For instance, are there any practical attacks in sight that would work simultaneously on both MD5 and SHA1? 3DES does this with DES, so it probably works in general?
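As a toy illustration of the combined-hash idea (not a recommendation; for Merkle-Damgård hashes like MD5 and SHA-1, Joux's multicollision result means concatenation buys much less collision resistance than the output length suggests):

```python
# Toy combiner: H(x) = MD5(x) || SHA-1(x). An attacker needs a simultaneous
# collision in both functions, but for Merkle-Damgard constructions that is
# far cheaper than the 36-byte output would naively suggest (Joux, 2004).
import hashlib

def combined_digest(data: bytes) -> bytes:
    return hashlib.md5(data).digest() + hashlib.sha1(data).digest()

digest = combined_digest(b"hello world")
assert len(digest) == 16 + 20  # 128-bit MD5 + 160-bit SHA-1
```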

This gets complicated fast, especially with hashes[1]. Like most of crypto, you shouldn't do it unless you have a team of experts who publicly vetted the algorithm and implementation.

[1]: http://security.stackexchange.com/questions/83881/is-using-t...

According to the paper in OP: "Precomputation for a 2048-bit non-trapdoored group is around 10^9 times harder than for a 1024-bit group, so 2048-bit Diffie-Hellman will remain secure barring a major algorithmic improvement".

about 12 years ago I came up with a pretty clever way to factor numbers that I never pursued the computational complexity of.

The basic algorithm is that you take some candidate X (which will be our 2048 bit number here) and classify your question (primality, whether it is the product of 2 primes, etc) --- once you have your question, Q, then you can pick a number Y0 to get X % Y0 = Z0 ... sometimes ~sqrt(X) works well, other times it's the closest prime factorial, etc.

now using those results, [Q, Y0, Z0], you can optimally pick Y1 and do the operation again, X % Y1 = Z1 ...

Like the Chinese remainder theorem each Z gives you information on the next optimum Y given your question Q ...

I called it tunnel factoring and saw some great early results ... but for some reason I haven't ever pursued it

Good engineers ship.

Do you actually want the source? It's similar to GNFS, but my implementation of my algorithm seems to be faster in C (probably under gcc 2.95) when compared to a reference GNFS implementation I had found online ... enough times to be significant (but a result of "faster on my computer" doesn't really lead to any kind of "breaking news"). It could just be due to an inefficient implementation.

I'll have to look at some old CVS repos ... Hopefully I can find it. I'll put it on GitHub if I can.

I worked a good 6 months on it and I understand the gravity of the claim. I have no interest in being bogus or fraudulent.

Hopefully I can find it. I'll put it on GitHub if I can.

Please do. It sounds very interesting.

Yes, please do put it on GitHub! It does sound like a big claim, but what do you have to lose?

I'm personally trying to get more of my projects out into the open, even before I feel like they're ready - having code I put lots of time into that never gets seen by anyone makes me feel like I'm wasting my time. But at least getting some feedback helps me build better things next time.

FYI this is not really new news. The authors of that research had already disclosed their findings at https://weakdh.org about 5 months ago.

Today they simply formally presented their research at ACM CCS.

If one is not enough, why not just have a million standard keys to choose from? This makes the problem space prohibitively large, but this many keys could be passed around in a standard distribution easily enough.

> For the most common strength of Diffie-Hellman (1024 bits), it would cost a few hundred million dollars to build a machine, based on special purpose hardware, that would be able to crack one Diffie-Hellman prime every year.

I'll just leave this here: http://fortune.com/2015/06/29/intelligence-community-loves-i...

Irrelevant: such a cloud would be useless for this application. NSA will more likely use a massively parallel cluster with FPGAs, potentially ASICs if the numbers add up.

> such a cloud would be useless for this application

Either you don't know what is in that DC, or you do. If you don't know what's in the DC, you can't say one way or the other about what it can and can't do. If you do know what's in the DC, then you'd likely be prevented from saying what its capabilities were, or would deny those capabilities publicly if it did possess them. This is why people make tinfoil hats: because they reach cognitive dissonance quickly with logic built around untrusted data.

Because I can't implicitly trust you, or Amazon, or the intelligence agencies, I'm left with the reasoning capabilities I have and trust. Floundering in cognitive dissonance isn't going to get me anywhere in that regard. I'm left with what I know, which is that US agencies pay Amazon a staggering amount of money to run a datacenter for them. I know that AWS itself runs servers with special purpose hardware in them, such as GPUs, and that most of the value in AWS comes from federating systems and providing fault tolerance and easy programmatic access to provisioning. What sits behind all those features is anyone's guess. Amazon does not publicly discuss their relationship with the government, so that's zero help here.

If Amazon wanted to own the world's public cloud AND remain trusted, they should have thought twice about doing a deal with the government. In Germany, this shit wouldn't fly. Companies who provide services to the government there sign special agreements to ensure the services they provide to the government aren't commingled with the services they provide to individuals. It's beyond dumb that we allow this in the US and that the largest cloud provider here is mute in that regard.

"If you don't know what's in the DC, you can't say one way or the other about what it can and can't do."

Sure you can: just go with worst case scenario and assume custom hardware. I did that on Schneier's blog below:

"The room for error here is in the ASIC assessment. We need to figure out how much they can parallelize this, either in cores or custom circuits, in a given ASIC. Then, how many of those can do in one chip at 14-28nm. Then how many they can squeeze in a rack. Then how much factoring can be done with Amazon or Microsoft datacenters full of those. That would be the most paranoid assessment.

You see, the unit prices of these things are really low vs initial development costs. That's one of main drivers for hardware to shrink. The chips might cost pro's $10-30 million to develop with boards a tiny fraction of that. Then, the chips themselves are fabbed dirt-cheap with the boards being inexpensive. If algorithm doesn't need much communication, then they can use standard I/O options to farm out the jobs which then just run until they complete. They could add more capacity year after year cheaply with incremental energy cost after spending a ton of money on chip design and real estate just once. They could get 50,000-100,000 chips with multiple accelerators on each every year.

So, I think the authors' upper bound is lower than the real upper bound. We need to get specialists who have implemented algorithms like this in hardware to show how it will likely be implemented, and at least one person who's done a 28nm design to estimate how much of the chip can be dedicated to that with the other functions considered. Then multiply that by whatever Amazon has to get a decent upper bound."


Imagine the money spent on both a) measures and b) countermeasures related to the ongoing spying by the intelligence apparatus around the world.

Imagine the money not spent on more pressing issues we face these days: health problems, poverty, and the destruction of nature, just to name a few.

Why do we, as a society, tolerate this?

>Why do we, as a society, tolerate this? //

In Western democracies [I really want to put quotes around both those words] do we have a choice?

In the UK we get to vote but it's for one of 2 or 3 sets of policies for the next 5 years, at no point do I recall any main party saying they were going to do something about NSA/GCHQ incursions in to UK life; I imagine USA have us over a barrel even if there were political will amongst the elite to change the spying on ordinary subjects [of the Crown].

You can read Chris Mullin's excellent A Very British Coup for what would happen if someone got into power that actually threatened that part of the establishment.

Of course that would never happen as nobody who actually had any intention to do such a thing would ever get elected as PM - look at the complete lambasting that Jeremy Corbyn has been getting, including murmurings that the Army would mutiny if he got into power.

[NB I am not a Labour supporter and wouldn't vote for Corbyn but I do think that the way he has been treated recently is pretty appalling.]

Because the establishment has successfully made everyone so terrified of terrorism that they (the people) are willing to throw trillions of dollars at a tiny problem.

There is an inherent overhead to free speech and democracy. One could argue this is part of it.

Spending billions to make a whole country stop working and cast a vote is another one. Should we take all those billions and put them in more pressing issues in exchange for despotism?

Maybe our society tolerates it because it is the only way to play the game with the current rules.

Because we think we can have it all, and we may just be right.

* for some ambitious but obviously not literal definition of "all".

Is there a tool to output the DH params being used when attempting a TLS connection (not dumping them from a packet capture)?

I know I can, but I'm hoping for something simpler than having to parse the TLS messages from:-

  openssl s_client -connect host:port -msg
to work it out.

There seemed to be no reason why everyone couldn’t just use the same prime, and, in fact, many applications tend to use standardized or hard-coded primes.

Then, if the prime number is standardized or hard-coded, why don't they just use it? Why do we need to break it?

It shouldn't be either standardized or hard-coded, because someone with the funds (e.g. the NSA) would need to break the encryption using this 'standardized' number only once.

If everyone used a random very large prime (the spec suggests so), then the NSA would have to break every prime number in use, which currently is not possible.
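To sketch what "generate your own prime" means mechanically: DH wants a safe prime (p = 2q + 1 with q also prime). Below is a toy stdlib-only generator, using 256-bit numbers purely so it finishes quickly; real parameters need to be 2048 bits, and in practice you'd run `openssl dhparam` rather than hand-rolling this.

```python
# Toy safe-prime generator: find p = 2q + 1 with both q and p prime.
# 256 bits for demo speed only -- real DH parameters should be >= 2048 bits,
# and in practice you'd use `openssl dhparam`, not hand-rolled code.
import random

def is_probable_prime(n, rounds=16):
    """Miller-Rabin probabilistic primality test (n >= 2)."""
    for s in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % s == 0:
            return n == s
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def gen_safe_prime(bits=256):
    while True:
        # Random odd q with its top bit set, so p = 2q + 1 has `bits` bits.
        q = random.getrandbits(bits - 1) | (1 << (bits - 2)) | 1
        if is_probable_prime(q) and is_probable_prime(2 * q + 1):
            return 2 * q + 1

p = gen_safe_prime()
q = (p - 1) // 2
# Any square mod p (other than 1) lies in the prime-order-q subgroup,
# making it a reasonable DH generator.
g = pow(random.randrange(2, p - 1), 2, p)
assert pow(g, q, p) == 1
print(f"p = {p:#x}")
```

Most candidates fail the cheap small-prime filter, so even this naive search only takes a second or so at 256 bits; at 2048 bits it would take far longer, which is exactly why tools cache the result in a params file.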

Isn't the bulk of the https traffic using RSA, not Diffie-Hellman?

IIRC asymmetric decryption is time-consuming, so it is used as the initial step to send/receive the symmetric DH key. All additional encryption/decryption is symmetric.

But I thought DH was an alternative to RSA, not a symmetric encryption algorithm.

Here are some of the options:

Client generates a key, encrypts with server's RSA key, sends it to the server, and the session starts. This lacks PFS.

Client and server participate in a DH key exchange; the server signs its DH parameters using the RSA key so that the client knows it's talking to the right party. They then start the session using the DH-generated key. This has PFS.

And there's the same as the above but with DH replaced by elliptic curve DH (a different way of achieving the same thing), RSA replaced with ECDSA, and a few other options.
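To make the DH option concrete, here's a toy sketch of the exchange itself (textbook-sized numbers for readability; a real deployment would use a group of at least 2048 bits, such as RFC 3526 group #14, and the server would sign its share):

```python
# Toy Diffie-Hellman exchange. p and g are textbook-sized for readability;
# real deployments use a >= 2048-bit group (e.g. RFC 3526 group #14).
import secrets

p, g = 23, 5  # toy safe prime (23 = 2*11 + 1) and generator

a = secrets.randbelow(p - 3) + 2  # client's secret exponent
b = secrets.randbelow(p - 3) + 2  # server's secret exponent

A = pow(g, a, p)  # client -> server
B = pow(g, b, p)  # server -> client (signed with the RSA key in TLS DHE)

# Each side combines its own secret with the other's public share:
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
assert client_secret == server_secret  # (g^b)^a == (g^a)^b (mod p)
```

The forward secrecy comes from throwing a and b away after the handshake: the RSA key only signed the shares, so compromising it later doesn't decrypt recorded sessions.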

Yes, and in fact it's slower than RSA key exchange for the server! The big benefit is forward secrecy because the server no longer has the ability to recover the keys for old sessions (which it could with RSA key exchange).

Check your root certificates. If any of those has "man in the middle" capabilities, they can see your SSL traffic. That's probably how they do it.

It's not that simple. A CA can intercept your SSL traffic, but to do so it has to create a fraudulent certificate for the end site and proxy your traffic to the site, presenting the fraudulent certificate to you.

This means it can't be done without risking that you might notice it. And it can't be done just by passively hoovering up all the traffic and then retrospectively going back and decrypting it.

And nowadays "you might notice" in an automated way because of HPKP!

Can we use distributed computing to crowdsource the computation of more/better primes? Can OpenSSL look to this pool for its primes?

We wouldn't need to do this. One computer would suffice to generate primes at a pace making cracking computation infeasible. Though, even if we switch primes every few seconds, if somebody wanted to put a few hundred million dollars into cracking a few seconds of messages, they could do it.

I wonder if that estimated cost is the COGS or the R&D? If it's the R&D, what is the cost of the second machine?

Since we don't know exactly what other ways there are to break crypto, why aren't we focusing on concatenated encryption (at least for critical apps), while working hard to ensure crypto isn't vulnerable to malware-type attacks, especially considering that malware isn't a good way to do web-scale surveillance?

It is more commonly called cascade encryption.

The problem is that in many cases you can't afford to sacrifice enough performance for it.

Also, malware and crypto doesn't "play in the same field". Malware is solved by secure programming, access controls and users that are educated to not fall for trojans.

Encryption handles communication and storage, and only to a small degree data processing.

To support your point, virtually all crypto vulnerabilities we know of for sure are a result of bugs in implementations rather than problems with the underlying algorithms.

On the other hand there are algorithms which are believed to be backdoored by the NSA (some elliptic curves). These algorithms can be avoided.

In the end, cascading encryption has very limited benefits because the weakest point of a security application is almost never the algorithm used.

Regarding VPN usage, is the fix a client-side or a server-side solution?

Server side. They need to change parameters.

If the NSA wasn't doing this, you can bet they will be soon.

How do these findings affect the usage of Tor and Tails?

Wouldn't this be easy to subvert, though?

I mean, say we put through a few patches and started generating primes more often. Then their big-ass special-purpose prime machine becomes an order of magnitude less effective, right?

I think the best way to defend against these one-to-many attacks is to spread out the cost of decrypting large quantities of data. If we all had our own keys, even if they weren't as strong as one single key that everyone used, that much more work has to be done to decrypt data for a group of users.

I know nothing about crypto, but a layman can hear about these implementation architectures and immediately realize what's wrong with it all.

The problem is that there needs to be an agreed-upon prime (and generator) that each of the parties knows beforehand. But yes, there are definitely viable ways to generate new ones or implement new, safer standards. Alternatively, a much larger prime can be used. Also, the Diffie-Hellman protocol is a well-known one that many, many security researchers, programmers, and students have looked at. The flaws are not obvious, as it's initially unclear how "cracking" a large prime would work.
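For readers unfamiliar with the protocol under discussion, here is textbook Diffie-Hellman in a few lines of Python, with a deliberately tiny prime standing in for the 1024-bit-plus groups used in practice. The math below is not what's broken; the article's attack targets the handful of shared real-world primes:

```python
import secrets

# Toy parameters: 2**64 - 59 is the largest 64-bit prime. Real
# deployments need >= 2048-bit groups; the article's point is that
# widely shared 512/1024-bit primes are within a state's reach.
p = 0xFFFFFFFFFFFFFFC5
g = 2

# Each side picks a secret exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1   # Alice's secret
b = secrets.randbelow(p - 2) + 1   # Bob's secret
A = pow(g, a, p)                   # sent Alice -> Bob
B = pow(g, b, p)                   # sent Bob -> Alice

# Both sides arrive at the same value g^(a*b) mod p without ever
# transmitting it; an eavesdropper sees only p, g, A, and B.
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)
assert alice_secret == bob_secret
```

Recovering the secret from p, g, A, B is the discrete logarithm problem, and the precomputation attack described in the article does most of that work once per prime rather than once per connection.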

If they have special-purpose hardware specially designed for cracking primes, maybe bigger isn't better, right?

What can end-users be doing about this?

Ha ha! (Seriously, nice paper.)

Important stuff.

This is an arms race and it doesn't address the underlying cause.

The government of the people should not be spending $10B a year to monitor and track all of its people just to warehouse the data.

That is quite literally Stasi. Not vaguely like, exactly like.

> That is quite literally Stasi. Not vaguely like, exactly like.

Come on. I agree this is ridiculous and unacceptable, but this kind of hyperbole only serves to make people discount the sentiment. The NSA has not tried to use this to suppress political dissidence. The Stasi themselves called the way they manipulated prisoners "decomposition", and all evidence suggests it deserved the name. It makes the CIA torture look like juvenile detention.

Is that a possible dark future if this goes unchecked? For sure. But I think one of the main reasons serious anti-NSA sentiment and action has had trouble making it to the mainstream is that people say things like this that make most people write the movement off as a bunch of crackpot conspiracy theorists. I don't think you are one, so please don't talk like one.

>The NSA has not tried to use this to suppress political dissidence.

False. They did with Martin Luther King Jr.

To quote Senator Church of the Church Committee, which investigated this:

>In the need to develop a capacity to know what potential enemies are doing, the United States government has perfected a technological capability that enables us to monitor the messages that go through the air. Now, that is necessary and important to the United States as we look abroad at enemies or potential enemies. We must know, at the same time, that capability at any time could be turned around on the American people, and no American would have any privacy left such is the capability to monitor everything—telephone conversations, telegrams, it doesn't matter. There would be no place to hide.

>If this government ever became a tyrant, if a dictator ever took charge in this country, the technological capacity that the intelligence community has given the government could enable it to impose total tyranny, and there would be no way to fight back because the most careful effort to combine together in resistance to the government, no matter how privately it was done, is within the reach of the government to know. Such is the capability of this technology.

>I don't want to see this country ever go across the bridge. I know the capacity that is there to make tyranny total in America, and we must see to it that this agency and all agencies that possess this technology operate within the law and under proper supervision so that we never cross over that abyss. That is the abyss from which there is no return.


I mean the data-warehousing part of the Stasi.

Didn't mean to imply the entire government is 100% Stasi; I mean their data-warehousing obsession with their own citizens is exactly Stasi-like.

The devil is in the details; I would take this with a grain of salt until I've read the paper.

What if a few hundred million is 10x less than the actual amount? What if it takes 10 years instead of 1?

What if it's the other way around?

They would have said that; that was the purpose of the article :)

Unless they themselves are wrong in the calculations.

Being that crypto is 'just math', why would crypto be safe? The only claim that crypto is safe assumes computational power is limited. Is that a safe assumption? Assuming the crypto math is safe, one also has to be certain the entire system which runs the crypto is safe as well.

Analysis and attempts to decode the Voynich manuscript lead me to believe mathematical patterns intended to hide information, languages in particular, are not safe in the least.

That assumption is correct, founded deep within the well-tested parts of our current understanding of physics.

When the computational resources required to brute-force a cipher in a sane timeframe don't exist (and couldn't even fit) within the bounds of the observable universe, brute-forcing is safely axed as an avenue of attack (if the crypto works as intended, which, to be fair, is a big if).

Of course, on these scales, if you really want to, you could connive and threaten your way into having a backdoor installed. Or you could spy on the victim and steal their laptop, with the key decrypted and in memory. Or you could beat the key and password out of them. Brute-forcing shouldn't be the most pressing of anyone's cryptographic worries.

Math works when all of the lemmas are satisfied. In this case, for the sake of efficiency or otherwise, a lot of people made a bad decision and only fulfilled a partial set of the requirements.

When software uses a known set of hard-coded primes, an attacker can do the enormous precomputation against those primes once and then break individual connections cheaply.

Maybe there are good primes, bad primes, but to "standardize" a prime is to give an anchor to an adversary.
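As the top comment notes, fully validating a peer-supplied prime on every handshake is too slow, but the small-subgroup half of the problem does have a cheap per-handshake check when p is already known to be a safe prime p = 2q + 1: reject any public share outside the order-q subgroup. A sketch with a hypothetical helper and toy parameters:

```python
def validate_peer_share(p, q, y):
    """Cheap per-handshake check, assuming p = 2q + 1 is a known safe
    prime: reject any public share outside the order-q subgroup, so a
    malicious peer can't force the shared secret into a small subgroup.
    (Verifying that p itself is a safe prime is the slow part that
    can't be done per handshake, per the thread above.)"""
    if not (1 < y < p - 1):
        return False          # excludes 0, 1, and p-1 (the order-2 element)
    return pow(y, q, p) == 1  # y must lie in the subgroup of prime order q

# Tiny safe prime p = 23, q = 11: 8 = 2**3 lies in the order-11
# subgroup; 5 does not (5**11 = -1 mod 23); 22 = p - 1 has order 2.
print(validate_peer_share(23, 11, 8))   # True
print(validate_peer_share(23, 11, 5))   # False
print(validate_peer_share(23, 11, 22))  # False
```

This is why a safe prime only has subgroups of order 1, 2, q, and 2q: the check above leaves an attacker nothing small to push you into.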

If you have any means of "unlimited computational power", why haven't you already taken over the world through financial speculation?

The trade floors are already opening in Asia, I think. You have got 24 hours. Go!!!

Breaking crypto is what the NSA was created to do; playing a cat-and-mouse game with it means you'll always lose. If the NSA cannot break crypto, it's useless, and of the two outcomes, them giving up or them asking for more money and becoming more intrusive, the latter is much more likely.

No one will get their privacy "back" by fighting the NSA through technology; considering their mission, budget, and capabilities, they'll always win. The only way to pacify the NSA is through legislation that ensures they use their capabilities only when it's warranted.

It is possible for sets of very few people to communicate in ways the NSA could never break. Examples: DH/RSA with ~2^14-bit (16384-bit) keys, OTP, etc. It's much harder on a large scale, but in the end, I think it's doable.

The NSA can build a giant supercomputer/ASIC system/FPGA grid, but they're not going to factor a 2^14bit prime unless they have working quantum.

The only problem then is, unfortunately, having correct, secure, and non-tampered-with implementations of all the requisite libraries.
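For completeness, the OTP mentioned above really is information-theoretically unbreakable under its (brutal) preconditions, which no amount of NSA hardware changes. A minimal sketch, assuming the pad is truly random, at least as long as the message, kept secret, and never reused:

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """One-time pad via XOR. Unconditionally secure ONLY if the pad is
    truly random, at least as long as the message, kept secret, and
    never reused -- which is exactly why it doesn't scale."""
    assert len(pad) >= len(data), "pad must cover the whole message"
    return bytes(x ^ k for x, k in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))
ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message  # XOR is its own inverse
```

The catch, as the parent says, is key distribution: the pad has to be exchanged out of band, which is only practical for very small groups.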

The NSA/US Intelligence can compromise small groups through other means, by either tailoring some sort of exploit or going through the ol' hitting-them-with-a-wrench-till-they-spill-their-secrets routine.

I'm not saying that you cannot, in theory, deploy or implement NSA-proof crypto today; I'm saying that in practice it will never work, because the NSA's mandate is to be able to break it, and they will do everything in their power to maintain those capabilities.

And unless someone thinks that abolishing the NSA is a realistic possibility, you'd better pick your fights. While the NSA and all other US defense organizations are more or less superior to everyone else's, because the USA sees dominance and force projection as vital to its national security, China, Russia, and probably the major EU powers aren't that far behind.

The other part of the NSA mandate is to help protect US secrets (commercial and military), which is one reason many were surprised when it seemed they'd thrown a wrench in the works wrt NIST curve crypto: if they'd tricked US military and commercial interests into using a flawed curve, they'd have undermined their own mandate.

As for "Allied Countries"... yeah, sure the NSA would probably be within its charter if it let China take over Britain (except of course, that that'd harm US interests too).

Ah, but that was the beauty of the (putative) NSA hack on the standards process: you only leaked the state of the random number generator to an entity that knew the secret relationship between the curve points Dual_EC_DRBG uses to feed the algorithm.

Using Dual_EC_DRBG was a bit like doing your half of a Diffie-Hellman key exchange with the NSA - with the key exchanged being your internal random-number generator state - to anyone else, this communication is completely impenetrable!

Well, until someone else gets sufficient access to the internal NSA IT systems to get hold of those secret values, of course. And one of the things Snowden demonstrated to us was just how woefully insecure their internal networks were to a person in the right position. If someone in Snowden's position was able to access those keys, then it seems likely that other intelligence agencies could too, but we'll never know for sure, of course.

Such hubris. Much decrypt. Thanks NSA!

I can factor a 2^14 bit prime in my head.

Hah! I can factor any size prime in my head! (If it's not prime, though, all bets are off...)

This depends on your view of the nature of technology vs. the nature of political institutions.

I tend to believe that, with time as the X-axis, the nature of technology is on a positive curve with regard to liberty while the nature of political institutions is on a negative one.

I'm curious why you think the nature of technology is on a positive curve with regard to liberty.

Technology is a power multiplier. In the context of a graph of liberty vs time it would simply make moving that curve on the Y axis exponentially easier.

So either you disagree with that or you have some optimistic views on human nature with regards to liberty.

Add in the problem that the less liberty there is, the easier and more likely it is for more to be removed; increases in liberty are uphill, so to speak, compared to decreases, as a side issue.

Technology can be positive or negative it's all in the application, but that's not the issue.

It's not that the NSA is inherently bad or good; as long as it exists it will be able to break crypto, because that is its mission. The US needs that ability for national security, but that doesn't mean the NSA has to apply its capabilities to cast a net over the entire planet.

That said, it's very unlikely that an organization with virtually unlimited funding and a recruitment monopoly on the best and brightest in cryptography and computer security will lose on the technology front. Trying to disarm the NSA is effectively trying to disarm the US; that won't fly. The only option is to ensure that they use their capabilities only when explicitly warranted, not as a business-as-usual tool.

> the US needs that ability for national security


> recruitment monopoly on the best and the brightest in the field of cryptography and computer security

Again, bullshit. The NSA can't compete on compensation and there are plenty of people who refuse to work there out of principle alone.

It's not bullshit; the NSA plays a pivotal role in ensuring US national security. That doesn't mean their current actions are justified, but having the means is a national security mandate in the current geopolitical climate.

And the NSA doesn't need to compete on monetary compensation; it competes on a whole 'nother level, which is giving people the biggest challenges to solve while having access to unparalleled levels of resources and cutting-edge technology.

Bell Labs didn't compete on compensation either, but it was where everyone wanted to work because of the environment.

You also disregard nationalism, patriotism, and the ability of the intelligence community to groom targets which they've perfected into an art form.

> You also disregard nationalism, patriotism, and the ability of the intelligence community to groom targets which they've perfected into an art form.

I'm not saying those things don't exist or that the NSA is incapable of hiring competent people, I'm disputing your claim that the NSA has a 'monopoly' on recruiting the best and brightest. I've seen no significant correlation in my personal experiences between skill in mathematics and patriotism.

Your beliefs are the opposite of the reality. Invasion of privacy has never been so high.

Well that doesn't technically preclude the parent's thought; we could simply be at the beginning of the curve in his / her graph.


I wonder what world you all live in in which this is a bad thing. There's real threats out there, and I'd hate to live in a country that lacked the geopolitical leverage to use these tools in my nation's interests.

> There's real threats out there

In 1972 the threat was the Democrats being elected president, so workers on the president's re-election committee burglarized the Democratic National Committee's headquarters at the Watergate hotel. Then Nixon (on tape!) hashed out a plan to have the FBI cover this up. Without a high-up leaker in the FBI, none of this may have ever come to light. (Actually, Nixon did something similar in 1968 against Humphrey, by convincing the South Vietnamese government to not hold peace talks. He not only betrayed the US but the South Vietnamese he made a deal with, since he pulled US troops out in 1972)

Then of course there is the FBI acting as a political police - disrupting the civil rights movement, bugging Martin Luther King Jr. and leaking information gained to the media, attacking those Democrats who were voting against continuing the Vietnam War etc.

Some of these excesses were pulled back in the 1970s, but in recent years they have begun raising their head again. And the country isn't even in such bad shape economically, militarily, etc. I can imagine what would happen if there were another Depression, sit-down strikes, etc.

In 1929, the Secretary of State (a Republican!) shut down post-WWI domestic spying saying "gentlemen do not read other gentlemen's mail". Now we have these massive police bureaucracies who apparently don't care about the Fourth Amendment, and who are advising Congress that we shouldn't be allowed to have private communications with one another on our iPhones etc., all should be opened to the ears of Big Brother.

Keep walking down this road, and the "threats" like the mujahideen (funded by the previous generation of your type, who explained to us that "There's real threats out there") will not die down but multiply, and will increasingly come from citizens within this country.

> There's real threats out there

Grow some balls. I'll take a 1% chance of dying to terrorists every year over an Orwellian government that passively intercepts everyone's communications.

Besides, your actual risk of dying in a terrorist attack is nowhere close to 1% / year.

It's actually 0% per lifetime, rounded to many decimal places.

Statists and authoritarians like to overstate external threats as an excuse to consolidate & expand state authority internally. It's an old and common tactic.

Well, sure. I'd take a 1% chance of you dying in a terrorist attack every year too. But it's not clear that a terrorist attack would necessarily be so limited in its scope/damage.

> I wonder what world you all live in in which this is a bad thing.

The world where we all depend on secure communications for autonomy of our own lives?

Again, that's not the issue; the NSA's mission is to be capable of breaking crypto. That's their job.

Having the ability doesn't mean that they need to use it all the time which is what you should be fighting against.

As long as the fight is centered around trying to disarm the NSA, you'll lose, because that's not the right tactic and the US as a nation cannot accept it.

How can we know who the NSA works for?

It's the only agency that works directly for the president :)

From what I've read, I gather that they don't always tell the President everything ;) Partly that's just complexity and incoherence. Nobody reporting to the President knows everything. Or even the important stuff. And I gather that the reporting structure is extremely convoluted.

Maybe it's all just petty fiefdom bullshit. But those are standard approaches for plausible deniability. And some bits are reportedly unaccountable, walled off through "need to know basis" policy. So how can we know who they work for?

I'd love to live in an America where they did that instead of their current 'whatever the hell we feel like' policy.

I agree. All those charities and civil liberties groups and millions of average citizens sure are dangerous to our life and liberty.

I'm glad the NSA and GCHQ are keeping tabs on them. Who knows when they might get a little too uppity and try change the status quo?
