Logjam TLS attack (weakdh.org)
334 points by noondip on May 20, 2015 | 99 comments



My side project tries to give secure default settings for all major web servers and other software (like haproxy, MySQL, mail servers, etc.): https://cipherli.st/

From the start it has recommended setting up DH parameters larger than 2048 bits.

If you want to test your site for export ciphers, you can try my other side project: https://tls.so/ - you can also use the SSL Labs test, but mine is faster for just testing cipher suites. (And it's open source, so you can use it internally as well.)
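
For a quick manual spot check from the command line, something like this works too (assuming a reasonably recent OpenSSL client; example.com is a placeholder for your own host) - a "handshake failure" is what you want to see:

    # Offer only export-grade ciphers; the server should refuse the handshake
    openssl s_client -connect example.com:443 -cipher "EXPORT"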

Mozilla also has a good wiki page for SSL settings: https://wiki.mozilla.org/Security/Server_Side_TLS


Any chance of adding STARTTLS support to tls.so? I've found a lack of decent tools to scan FTP servers, SMTP servers, etc.


Thanks to your guide, I didn't have to change a single thing when this news broke. Excellent work!


Thanks a bunch. It's fairly easy to find configs for HTTP servers (and SSL Labs won't check non-443 ports), but I also run a Dovecot server, and this made it easy to check; I had no clue SSLv3 was enabled by default, for example.

Sadly, my current phone is stuck on SSLv3, so until I replace it I have no mail on my phone anymore.
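
For anyone else with Dovecot, disabling the old protocols is a one-line change - a rough sketch, assuming Dovecot 2.1 or newer (the config path may differ per distro), then reload Dovecot:

    # /etc/dovecot/conf.d/10-ssl.conf
    ssl_protocols = !SSLv2 !SSLv3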


Also check out Appelbaum's duraconf: https://github.com/ioerror/duraconf


This is not at all surprising; even I had pointed out the looming DH problem in an earlier thread: https://news.ycombinator.com/item?id=8810316

One small nit with the paper: it is claimed that there had been technical difficulties with the individual logarithm step of the NFS applied to discrete logarithms, making individual logs asymptotically as expensive as the precomputation. Commeine and Semaev [1] deserve the credit for breaking this barrier; Barbulescu did improve their L[1/3, 1.44] to L[1/3, 1.232] by using smarter early-abort strategies, but was not the first to come up with 'cheap' individual logs.

[1] http://link.springer.com/chapter/10.1007%2F11745853_12


It's not particularly surprising to the IETF TLS Working Group either, which is at least partially why https://tools.ietf.org/html/draft-ietf-tls-negotiated-ff-dhe... exists.

Minimum 2048 bits, please note. 1024 is not safe - and has not been for quite some time. GCHQ and NSA can definitely eat 1024-bit RSA/DH for breakfast at this point (although it does still take them until lunch).

(Those of you still using DSA-1024 signing keys on your PGP keyrings should reroll RSA/RSA ones. I suggest 3072 bits to match a 128-bit workfactor given what we know, but 4096 bits is more common and harmless. 2048 bits is a bare minimum.)


What do you (and pbsd) think of the site's recommendation to use custom 2048-bit parameters as opposed to a well-known 2048-bit group such as group 14 from RFC3526? Is it really that likely a nation-level adversary could break 2048-bit FFDHE the same way they've probably broken the 1024-bit group 2? How does that weigh against the risk of implementation errors generating your own parameters, or the risk of choosing a group with a currently-unknown weakness?


Group 14 is fine. You might as well also tell people to use custom block ciphers, since a precomputation of roughly the same magnitude---compute a common plaintext under many possible keys---would break AES-128 pretty quickly as well.

I would say use a custom group if you must stick with 1024-bit groups for some reason. Otherwise, use a vetted 2048+-bit group. If---or when---2048-bit discrete logs over a prime field are broken (and by broken I mean once, regardless of precomputation), it will likely be due to some algorithmic advance, in which case DHE is essentially done for. If nation states have been able to pull that off already, then it's pointless to even recommend anything related to DHE in the first place.


Seconded. Group 14 (2048-bit, ≈112-bit workfactor) or another safe 2048-bit or greater prime (such as ffdhe2048, or ffdhe3072 @ ≈128-bit workfactor) will do fine for now. You don't need to roll your own safe primes. As per the paper: "When primes are of sufficient strength, there seems to be no disadvantage to reusing them."

The problem with reusing them is of course when they're not strong enough, and so if an adversary can pop one, they can get a lot of traffic - and as I've said for a while and as the paper makes clear, 1024-bit and below are definitely not strong enough. Anything below 2048-bit would be a bit suspect at this point (which is precisely why the TLS Working Group rejected including any primes in the ffdhe draft smaller than that - even though a couple of people were arguing for them!).

If you're still needing to use 1024-bit DH, DSA or RSA for anything at all, and you can't use larger or switch to ECC for any reason, I feel you have a Big Problem looming you need to get to fixing. Custom DH groups will not buy you long enough time to ignore it - get a plan in place to replace it now. We thought 1024-bit was erring on the small side in the 1990s!

I concur that the NSA's attack on VPN looks like an operational finite-field DH break - I didn't realise that two-thirds of IKE out there would still negotiate Oakley 1 (768) and 2 (1024), but I suppose I didn't account for IKE hardware! Ouch!

Their attacks on TLS, though also passive, are architected far more simply and are more suggestive of an RC4 break to me, as there seems to be no backend HPC needed - ciphertext goes in, plaintext comes out. Both are realistic attacks, I feel, but RC4 would have been far more common in TLS at the time than 1024-bit DHE, and although 1024-bit RSA would be present, many likely sites would have been using 2048-bit, so naturally they'd go for the easiest attack available. (That gives us a loose upper bound for how hard it is to break RC4: easier than this!) I also don't think the CRYPTO group at GCHQ would have described this as a "surprising […] cryptologic advance" from NSA, but just an (entirely-predictable) computational advance, and (again) lots of people in practice relying on crypto that really should have been phased out at least a decade ago. So there's probably more to come on that front.

Best current practice: Forget DHE, use ECDHE with secp256r1 instead (≈128-bit workfactor, much faster, no index calculus). You can probably do that today with just about everything (except perhaps Java). It will be faster, and safer. And, we know of nothing wrong with NIST P-256 at this point, despite its murky origins.
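
As a rough sketch of what that looks like for nginx (assuming it's built against OpenSSL 1.0.1 or later; the cipher list here is only illustrative - the Mozilla wiki linked upthread will give you a vetted one):

    # Prefer ECDHE key exchange; prime256v1 is NIST P-256
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:!aNULL:!eNULL:!EXPORT:!RC4;
    ssl_prefer_server_ciphers on;
    ssl_ecdh_curve prime256v1;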

Looking forward, Curve25519 (≈128-bit) and Ed448-Goldilocks (≈222-bit) are, of course, even better still as the algorithms are more foolproof and they are "rigid" with no doubts about where they come from (and in Curve25519's case, it's even faster still). CFRG is working on recommending those for TLS and wider standardisation. You can use 25519 in the latest versions of OpenSSH right now, and you should if you can.


> It's not particularly surprising to the IETF TLS Working Group either, which is at least partially why https://tools.ietf.org/html/draft-ietf-tls-negotiated-ff-dhe.... exists.

This draft /does/ encourage the use of larger keys, but it also encourages the use of common parameter groups. The weakdh.org site mentions the use of common groups as a reason this attack is feasible, and it advises sysadmins to generate their own parameters. To me, that makes using common groups sound like a bad move.

The problem is, I lack proper knowledge to assess whether using common groups really is a bad move, even when using larger group sizes... Anyone here who can?


Obviously weakdh has more up-to-date recommendations (public only a few hours!) so you should certainly not cling to the older ones published by IETF who-knows-when and which could have been influenced by the players who prefer the weaker encryption for their own benefit.

I don't understand why you would not believe the weakdh recommendations. The researchers describe in their paper (1) exactly how they can precompute some values only once in order to then mount fast attacks on any vulnerable communication. They showed that the common values and the too-small values are both dangerous. It's real. And it's definitely not "choose the one you like." Change both.

1) https://weakdh.org/imperfect-forward-secrecy.pdf

"Precomputation took 7 days (...) after which computing individual logs took a median time of 90 seconds."


I strongly believe in 'Audi alteram partem', and like to understand rather than believe. Hence my question.

For all I know, a few extra bits of parameter length can make the NFS just as infeasible as generating your own parameters.

Edit: re-reading my earlier comment I understand your reply better. I've expanded my question to 'even with larger group sizes', as it indeed is clear that it is a problem with smaller groups.


The newly disclosed research clearly demonstrates that the common parameters enable "precompute just once, attack fast everywhere" whereas when everybody simply generates their own values that approach becomes impossible. The difference is between the days of computation versus the seconds in their example.

The difference is many orders of magnitude: it's whether everybody everywhere can be attacked anytime, or just somebody, sometimes, somewhere.

Moreover, the main reason it should be done is that the expected browser updates won't block sites with 1024 bits. So all the sites which for whatever reason still use 1024 bits would be less vulnerable if they had their own parameters.

The practice of using the common parameters is already worsening the current state; the bad effects of that really bad move exist now. The common parameters are now provably bad, and that won't change in the future. Just don't use the common parameters. Generate new ones everywhere.

And, of course, "minimum 2048 bits, please."

Edit: audi alteram... means "listen to the other side." Which side is the other side here? The stale information is not "the other side" it's just stale.


Thanks for elaborating.

The 'other side' are the people currently working on the negotiated-ffdhe draft (which I assume are bright people too). The draft was last updated a week ago (12 May 2015), so their considerations must be quite recent.

I'm just trying to get a sense of pros and cons. Iirc, generating own groups has its problems too. For example, the Triple Handshake attack (https://www.secure-resumption.com/) could break TLS because implementations did not properly validate DH params. Allowing only some (set of) pre-defined (known good) params would have stopped that attack.

To be clear, I'm certainly not arguing for or against using common groups. Just trying to get a complete picture. (And yes, based on current information I think too that using unique groups is the right approach.)


The folks working on that draft have definitely become aware of this research. Soon we'll see what they have to say about it.


Neither. ECDHE on P-256 doesn't have this problem, is available almost everywhere and is faster and safer: use that, or better still, Curve25519 and friends (in OpenSSH already, coming up in TLS later this year hopefully?).

There's very little reason in practice to bother trying to patch DHE, it's slow and old and interoperates worse (thanks Java). Chrome's just taking it out in the medium-term.



> Those of you still using DSA-1024 signing keys on your PGP keyrings should reroll RSA/RSA ones.

Can you please explain what you mean? Do you mean "those of you still using DSA-1024 anywhere" or do you mean that there is something we should do "on the PGP keyrings" specifically? Can we control how we maintain the keyrings? Is there some setting for the "key on the keyrings"? I ask since I don't know the state of the art of the formats of the keyrings.


I mean use of RSA-1024 or DSA-1024 anywhere, for any purpose, is really too small for safe use now.

By "on your keyrings" I mean that quite a few PGP keys in the wild still use DSA-1024 master signing keys with (often much larger) ElGamal encryption subkeys (as DSA was not specified for a long time with keys beyond 2048-bits).

However, DSA-1024/(ElGamal-anything) is not a safe configuration anymore - an attacker who can do a discrete-log on a highly-valued 1024-bit finite field can recover the signing key, and sign things - including software released under that key, or grant themselves new encryption subkeys of any strength.

It may therefore be a good idea to review your PGP keyrings for any master keys you trust which fit that 1024-bits and below criteria (look for 1024D, or below), as they definitely are overdue an upgrade. You may find that a fruitful search, with a few surprises still. For example, to pick one high-profile signing key that would doubtless have been interesting to Nation-State Adversaries and would have been susceptible to such an attack: http://pgp.mit.edu:11371/pks/lookup?op=vindex&search=0xE3BA7... ˙ ͜ʟ˙

The common safe configuration for modern OpenPGP (and upstream GnuPG's current default, I believe) is to use RSA signing keys and RSA encryption subkeys, each of at least 2048 bits - really I'd recommend 3072 or 4096 bits, as use of PGP is not as performance-sensitive as TLS and there is no forward secrecy, so I wouldn't really recommend going much below a ≈128-bit workfactor (equivalent to ≈3072 bit RSA or ≈256-bit elliptic-curve).
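
A rough sketch of checking and rerolling with GnuPG (the prompts ask for key type and size; nothing here is specific to any one version beyond plain GnuPG):

    # Anything listed as "1024D" is a 1024-bit DSA master key and overdue for retirement
    gpg --list-keys
    # Generate a replacement: choose "RSA and RSA" and 3072 or 4096 bits at the prompts
    gpg --gen-key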

Edward Snowden trusted RSA-4096 signing and encryption keys with his life, and that obviously worked out fine for him at the time.


Thanks a lot for taking the time to answer.

You are right, a lot of the keys I have in my public keyring are actually 1024D/<whatever>g, made many years ago. Hm.


Chrome will be increasing its minimum DH group size to 1024 bits: https://groups.google.com/a/chromium.org/forum/#!topic/secur...

But, if you need to spend the time updating a server configuration, just switch to ECDHE instead.


I suggest a 768-bit minimum for now, because of Java. As a side note, the latest IcedTea 6/7 releases allow 1024-bit DHE, but it is not enabled by default.


I say this as a Java developer who thinks Java is on the whole pretty decent: Java is way behind browser vendors when it comes to adding new ciphers and dropping broken ciphers.

Between the infrequent releases, the slavish devotion to maintaining backwards-compatibility, the sluggish release distribution, and the general conservative nature of some Java shops when it comes to upgrading, it's just not reasonable to expect the average non-java-using organisation to wait for Java before dropping broken crypto.

If you're running the latest LTS release of Ubuntu you'll get Firefox 37.0.2, released May 2015. But you'll still get Java 7, despite the fact that Java 8 was released in March 2014 - giving you cutting-edge TLS 1.0 and CBC ciphers [1]. At least Java 7 looks good compared to Java 6 [2], which has no SNI support, a stack of weak ciphers, and only supports the most obscure ciphers. And I know some people who are still running Java 6 in production systems.

If you wait for every last Java 6 holdout before deploying a secure configuration, you'll be waiting forever. Leave us behind, we'll only slow you down [3].

[1] https://www.ssllabs.com/ssltest/viewClient.html?name=Java&ve... [2] https://www.ssllabs.com/ssltest/viewClient.html?name=Java&ve... [3] http://tvtropes.org/pmwiki/pmwiki.php/Main/IWillOnlySlowYouD...


I'd say best practice for Java shops should be to move the SSL termination to a proxy in front of the app server. This doesn't work if you're doing mutual TLS to authenticate users in your Java stack, of course, and I'm sure there are a bunch of other use cases where you can't, and it would of course be ideal if Java could keep up with security. But given the state of the Java world, just drop in a TLS termination proxy wherever possible (nginx works great) and forget about doing it in Java.


What people are really saying here is that the native APIs provided by the standard Java packages are behind. Java, as a platform, has no inherent limitation like this. Feel free to check out the excellent work of the team at Bouncy Castle [1]. They offer full ECDHE support in a really well maintained library. Sure, Oracle should build this in natively (and I admit that it does look bad when you see it on SSL Labs), but there are other options.

[1] https://www.bouncycastle.org


Ubuntu uses IcedTea I think, which in its latest version supports 1024-bit and 2048-bit DHE on the server side (768-bit DHE is still default though) and up to 4096-bit DHE on the client side.


As a java developer I can only tell you that I'm grateful to everyone who forces people to upgrade.


This was fun to find at the end of two days last week when a third-party system was trying to connect via SSL.

Java 8 is the first to allow above 1024-bit Diffie-Hellman parameter values.
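
(If you do get onto Java 8, note the JSSE server side still defaults to 1024-bit ephemeral DH unless you raise it yourself - a sketch, assuming your app is launched directly with java, with "yourapp.jar" standing in for the real thing:)

    # Raise the ephemeral DH key size used by JSSE servers (Java 8+)
    java -Djdk.tls.ephemeralDHKeySize=2048 -jar yourapp.jar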


Java 8 was the first version to do so natively, but out of curiosity what was stopping you from using a library (like Bouncy Castle) to handle encryption in earlier versions of Java?


And it still doesn't allow >2048-bit parameters


Given all the vitriol on another front-page thread, I wanted to note from the site that "A close reading of published NSA leaks shows that the agency's attacks on VPNs are consistent with having achieved such a break."

I'm grateful to people who disclose and fix vulnerabilities instead of leaving everyone vulnerable, and kudos especially to Nadia Heninger from this team, whose work has only grown in interesting-ness and practical implications over the past few years.


Why are export grade ciphers even still a thing? I can't believe that libraries are still shipped with implementations of them.

Also, scary that SSH appears to be partially affected(?)


> Why are export grade ciphers even still a thing?

Because all the countries still can't get along with each other, and thus export restrictions still exist.

http://en.wikipedia.org/wiki/Export_of_cryptography_from_the...


Shipping breakable encryption sounds worse than shipping none at all. Especially when over and over again it becomes a source of vulnerabilities :(


> Shipping breakable encryption sounds worse than shipping none at all.

Well, yeah, that's the idea—"export-grade cryptography" essentially means "cryptography we, as a state actor, can win against in a cyberwar."


But the GP's point seems to be that this seems to turn into

"cryptography we, as a state actor, can win against in a cyberwar, but which ultimately will end up being exploited at home as well, since we're all using the same partially broken code base"


Indeed; maybe it would be better named "export-only cryptography"—not for domestic use by order of COMSEC.


Except that most of the export rules that remain and affect browsers are easy to follow nowadays.


> Also, scary that SSH appears to be partially affected(?)

Yeah. Is it sufficient to set ServerKeyBits to 2048?


You might want to follow these instructions to secure your ssh server and client https://stribika.github.io/2015/01/04/secure-secure-shell.ht...

Be careful: not all clients support the newest algorithms. Example: the Ubuntu 12.04 ssh client doesn't support curve25519-sha256@libssh.org (I'm still googling how to upgrade to the latest OpenSSH - does anybody have the answer?)

In general, check that you are still able to connect to your server before closing your last ssh connection to it.


That parameter is only for SSHv1 and if you have SSHv1 enabled you've already lost.

What they are referring to is the key exchange method named "diffie-hellman-group1-sha1", which uses a 1024-bit DH group. You can disable this with the KexAlgorithms parameter. Starting with OpenSSH 6.6 it is already disabled on the server side, but still allowed by the client. There are severe interoperability problems with embedded devices if it is disabled.
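
A sketch of what that looks like, assuming OpenSSH 6.5 or newer (trim the list to whatever your clients actually support, and keep an existing session open while you test):

    # /etc/ssh/sshd_config - drop diffie-hellman-group1-sha1 by simply not listing it
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1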


From the TLS sysadmin deployment guide (https://weakdh.org/sysadmin.html):

> Generate a Strong, Unique Diffie Hellman Group. A few fixed groups are used by millions of servers, which makes them an optimal target for precomputation, and potential eavesdropping. Administrators should generate unique, 2048-bit or stronger Diffie-Hellman groups using "safe" primes for each website or server.

Is this why with the easy-rsa package (https://github.com/OpenVPN/easy-rsa) one should always build DH parameters first? People are using pre-seeded ones when they don't use this tool first!? That is scary.
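
For a web server the equivalent is generating your own safe prime with openssl and pointing the server at it - roughly, for nginx (the file path is arbitrary):

    # Generate a unique 2048-bit DH group (can take a few minutes)
    openssl dhparam -out /etc/ssl/dhparams.pem 2048
    # then in the nginx server block:
    ssl_dhparam /etc/ssl/dhparams.pem;

As far as I know, easy-rsa's build-dh is essentially a wrapper around that same openssl dhparam call.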


For OpenSSH, take a close, hard look at /etc/ssh/moduli (or wherever it's at) too, in addition to EC curves. I would consider deleting the default moduli and regenerating it.

https://stribika.github.io/2015/01/04/secure-secure-shell.ht...
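
Regenerating it is slow but mechanical - a sketch, per that guide (paths assumed; the screening step can take hours):

    # Generate candidate primes, then screen them for safe ones
    ssh-keygen -G /etc/ssh/moduli.candidates -b 4096
    ssh-keygen -T /etc/ssh/moduli -f /etc/ssh/moduli.candidates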

In my mind, more generally: EC attempts to make crypto algos stretch using fewer bits, but the implementations are harder to prove correct both theoretically (by being more esoteric, so fewer eyeballs are able to catch errors) and functionally (by having more moving parts). Why hasn't more conservative stretching / extension of proven algos happened?

Also, even more broadly, this and at lot of other crypto decisions in TLS come off as seat-of-the-pants, guesswork, cooking by committee rather than simple, feature-minimal and bullet-resistant standards (how many way over-engineered and over-featured encodings do certs need?). The result smells like a pile of poo that will get recall after recall, patch after patch until something about the inputs and decision-making process changes. We can't keep having OpenSSL and the TLS committee saying "yes" instead of "no" to (feature creep) throwing every little edge use-case live into production 1.x branch, the codebase is huge enough, and it's nearly impossible to compile out all the little used crap, even in forks. Doing the same thing and expecting a different result is either stupid or insane, or both. OpenSSL and TLS leadership, process changes perhaps?


This isn't exactly news, but I guess a good site with a codename is needed to fix things nowadays.

Also, it's nice and dandy to have postfix use SSL, but SMTP TLS is always set to opportunistic and can be degraded to no encryption by a MITM - because, you know, compatibility.
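
For reference, these are the Postfix knobs involved (values here are illustrative): "may" is the opportunistic mode that falls back silently, while "encrypt" refuses to fall back but breaks delivery to peers without TLS, which is why nobody sets it globally.

    # main.cf
    smtp_tls_security_level = may
    smtpd_tls_security_level = may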


DANE can at least partly mitigate this, though that assumes you have DNSSEC set up, along with TLSA records for your mailservers.

Setting up DNSSEC is another kettle of fish, mind.


Yea, DNSSEC has its own problems. Personally I think a version of HSTS for mail is the best idea for now.


I'm using Version 42.0.2311.152 (64-bit) Chrome, so far it's still vulnerable to this. I believe it's the latest production version.


I'm running Version 43.0.2357.65 (64-bit) Chrome, which is also vulnerable to this. I believe it's the latest production version.


43.0.2357.65 (64-bit) as well (latest); it says it is vulnerable.


I'm on the unstable channel here (44.0.2403.4 (Official Build) dev (64-bit)) and even that seems vulnerable.


Chrome 45.0.2407.0 canary (64-bit) is vulnerable too, so hopefully they address this before it is promoted to beta.


Firefox Developer Edition, which is on 40.0, is also reported as vulnerable.


It's a bit strange this was published while the vulnerability isn't patched for the majority of users out there. Browser vendors don't tend to lag on security fixes, so why no responsible disclosure?

Or is this just not really a browser issue, and held back on the browser side because blocking insecure ciphers breaks most of the internet?


> so why no responsible disclosure?

Presumably because the main danger here comes from state-level adversaries who already know and actively exploit the issue, not script kiddies who might get funny ideas after reading the announcement. So the sooner it is published and people can start fixing their servers and clients, the less damage is done.


Yeah, a big part of the risk here is that someone might spend a couple million dollars and have a form of this capability up and running within a couple of months (and that governments may have done that by around 2010). If you want to improve your crypto, you can probably roll out a better configuration much faster than new adversaries can get these capabilities up and running. (Except for this Java compatibility problem, maybe.)


Because this isn't really an attack per se. There are some interesting things in the paper, but really this is just low grade crypto being tolerated for far too long in the name of compatibility.

I finally found the Mozilla bug entry for this; they've known of it since 2010, when they raised the minimum to 512-bit DH groups.

https://bugzilla.mozilla.org/show_bug.cgi?id=587407


I've seen many nginx tutorials omit a pretty critical (IMO) command before applying/reloading config changes: nginx -t.

nginx -t will quickly tell you if you've screwed something up before actually trying to apply changes.
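
e.g. chain the test and the reload so a broken config never gets applied:

    # Only reload if the new configuration parses cleanly
    nginx -t && nginx -s reload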


`service nginx configtest` does the same thing.


That's good to know too, yes, but using the nginx binary itself to test is always available, whilst a service command normally requires an init script, systemd config file or similar to be available, and those can vary between packages and distributions.


Is TLS_DHE_RSA_WITH_AES_256_CBC_SHA with a 768-bit group less secure than TLS_RSA_WITH_AES_256_CBC_SHA? Doesn't DHE just add an extra perfect forward secrecy layer to the non-DHE cipher suite without changing anything else?


If you can break the DH exchange in the DHE ciphersuite, you can recover the session key and decrypt the traffic. That can be a complete break of that particular session without any need to break the server's long-term RSA key. It's quite possible to have a situation where the TLS_RSA... used a 2048-bit RSA key while the TLS_DHE... used a 1024-bit (or worse) DH parameter. In that case an attacker could have an easier time breaking the 1024-bit discrete logarithm problem compared to breaking the 2048-bit RSA problem.

To answer your question more directly, the DHE does use a different form of key establishment which uses different algorithms, different parameters, and potentially different parameter sizes. The forward secrecy is a desirable property in itself, but under some circumstances implementations might use weaker cryptographic parameters in conjunction with it.

(Daniel Kahn Gillmor first told me about this problem; in TLS_DHE_RSA the RSA key is used for authentication of the DH key establishment -- to stop someone from doing an active MITM attack -- but not for the key establishment itself. In TLS_RSA the RSA key is used directly for key establishment. Thus when you use TLS_DHE_RSA, your security levels may be limited by the weakest link mechanism that you rely on for security, which could conceivably be the DH exchange, depending on other features of your configuration and environment. A number of folks have been aware of that particular problem to some extent for a while and even discussed it at, for instance, the IETF TLS working group, but this paper takes things considerably further and makes the problems really concrete.)

Edit: upthread you can find a link to pbsd and AlyssaRowan discussing forms of the problem half a year ago, including the fact that you can get less security from weak DH parameters than you would have gotten from strong RSA parameters, despite the presence of forward secrecy. In some settings there are cost trade-offs possible for attackers between breaking particular sessions vs. breaking all traffic to a particular service.


It depends on the size of the RSA key. With LogJam TLS_DHE_RSA_WITH_AES_256_CBC_SHA is likely weaker; with TLS_RSA_WITH_AES_256_CBC_SHA the client essentially makes up the shared secret, encrypts it using RSA and sends it to the server [1]. That exchange simultaneously establishes the session key and authenticates the server.

With DHE, the Diffie-Hellman exponent math is what establishes the shared secret, and if the DH prime is broken, then the shared secret can be derived by a MITM. The public DH parameters are sent in the plain, so a MITM can just observe them. A hash of those parameters is signed using RSA, so they can't be changed, but that's not important to defeating forward secrecy.

So if the RSA key is bigger than ~1024 bits, then TLS_DHE_RSA_WITH_AES_256_CBC_SHA is likely weaker.

[1] There is some further derivation done to get the actual session key, but that's not relevant here.


Looks like Amazon's EC2 ELBs have a common 1024-bit DH group. I don't see a way to fix it without moving away from ELBs, or getting Amazon to fix it.


https://forums.aws.amazon.com/ann.jspa?annID=3061

> Today, Elastic Load Balancing released a new default SSL Security Policy that no longer includes Ephemeral Diffie-Hellman (DHE) ciphersuites. ELB offers predefined SSL Security Policies to simplify the configuration of your load balancer by providing a recommended cipher suite that adheres to AWS security best practices.

1. Select your load balancer (EC2 -> Load Balancers).
2. In the Listeners tab, click "Change" in the Cipher column.
3. Ensure that the radio button for "Predefined Security Policy" is selected.
4. In the dropdown, select the "ELBSecurityPolicy-2015-05" policy.
5. Click "Save" to apply the settings to the listener.
6. Repeat these steps for each listener that is using HTTPS or SSL for each load balancer.


For those running IIS, I'd suggest looking at IIS Crypto [1]. NOTE: you will need a reboot after the change, and you may see issues with ancient browsers (IE8, Android 2, etc.).

[1] https://www.nartac.com/Products/IISCrypto/


So for a full test, I would recommend https://tls.so/ or https://www.ssllabs.com/ssltest/. But if you just want to check whether your server has the Logjam vulnerability, I would suggest http://security.uwsoftware.be/logjam. It just says No, Yes, or "Yes, by the NSA" (for 1024-bit DH keys). A little bit of humour inside a scanner.


For nginx I think you can simply add !EXPORT to the cipher string to stop supporting the EXPORT suites. There was a similar security bug which advised users to disable some EXPORT ciphers... correct me if I am wrong.


nginx takes OpenSSL cipher specs, as does apache and a lot of other programs.

The `openssl ciphers -v CIPHERSPEC` command will list out what's enabled with a given setting.
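
For example, to see what a spec with the export suites excluded actually expands to (the exact output depends on your OpenSSL build; no EXP- suites should appear):

    openssl ciphers -v 'HIGH:!aNULL:!EXPORT'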


Export grade ciphers have always been a blight on TLS.

I hope this is the last nail in the coffin to see the last of it disabled in the wild, you would have thought FREAK would have done the job.


So a secure TLS client must refuse to negotiate DHE if the server uses one of the common DHE param values, like in the case of a Debian no-entropy private key used in the cert?


So the NSA can break 1024-bit DHE? Is ECDHE good enough to hold us for now, or do we need to find an alternative to DHE ASAP?


Two parts to your question: 1. Yes, the NSA can likely break 1024-bit DHE at scale - as well as 1024-bit RSA or anything smaller. (Probably also the RC4 stream cipher, if you didn't listen and are still using that.)

2. ECDHE is totally different to traditional finite-field DHE. ECDHE over P-256 (or better curves) is not vulnerable to this attack.


Depends on what we mean by "at scale", right? That's why the shared parameter aspect of this attack is so meaningful: they probably can't solve 1024 bit DH problems "on demand".


If they can solve the 1024-bit DH problem "on demand" we should probably be eying 2048-bit DH with some suspicion.

(I'm agreeing with your point.)


Attacks don't really scale like that. If 2048 bit prime field discrete logs fail, all of prime field discrete logs (and with it probably RSA) will probably be done for.

There is, as I understand it, a huge performance penalty for using keys larger than 2048 bits. People should just use 2048 bit keys, or stop using conventional prime field public key algorithms altogether, is what I think.


Well, if they could crack 1024-bit primes in under a day, how long would you anticipate 2048-bit would last? That's why I said eye it suspiciously.

I don't think the NSA can do this on demand.

> People should just use 2048 bit keys, or stop using conventional prime field public key algorithms altogether, is what I think.

I believe moving towards ECC (especially djb's work) is probably the right move.


Much much longer. We can reason about how an attacker with very large compute resources can break 1024 bit DH/RSA. Given the feasibility of a 1024 break, we can also reason about optimizations that would serve to make that attack deployable in the real world.

We can't do the former thing at all for 2048 bit DH/RSA, let alone the latter.

Entities attacking 1024 bit keys are doing something we've believed would be inevitable for something like a decade.

When 2048 bit DH/RSA falls, DH and RSA will probably fall with them; they won't fall because compute resources eventually catch up to them, but rather because we discover something about integer factorization or discrete logs that makes prime field cryptography altogether unsafe.


If you look at the chart on the top of page 8 of the technical paper (https://weakdh.org/imperfect-forward-secrecy.pdf), they have some intelligent guesses for core-years to crack a DH-512, 768, and 1024 group, as well as associated memory requirements.

512, which they actually did, is 10.2 core-years for the precomputation plus 10 core-minutes per actual crack. 768 they estimate at 29,300 core-years plus 2 core-days per crack. 1024 is estimated at 45M core-years plus 30 core-days per crack. On top of that while 10M of those core-years are easily parallelizable with specialized hardware (the sieving stages) 35M of them are spent doing linear algebra on a square matrix with 5 billion rows. The authors of the paper note that there's been little work on designing custom systems suitable for this task and only give a rough estimate of the resulting cost (somewhere in the order of hundreds of millions of dollars).

As you can see, the challenges presented (hence cost) don't scale linearly with problem difficulty. The linear algebra step looks completely implausible at 2048.


What does ECDHE have to do with DHE? DHE is prime field Diffie Hellman. ECDHE is Elliptic Curve Diffie Hellman. There is no 1024-bit ECDHE.


Or if there is, it's probably snakeoil.

http://bindshell.nl/pub/zines/OandE/exp03.txt

(Ctrl+F: Vpn24.org)


At least JSSE (Java) uses random primes (generated at startup). However, the Java 8 default of 1024 bits is rather weak (not to mention the 768 bits of Java 6 and 7). What's worse is that clients accept down to 512 bits. (And the client side is harder to protect with SSL accelerators.) But there is SunPKCS11-NSS as a provider.


I was wrong - no random primes.


Everyone should note that these TLS attacks may also work on EAP. i.e. WiFi authentication, or 802.1X. Since HTTP is so much sexier than EAP, no one pays attention to EAP. :(


I don't think the export ciphersuites are present in EAP ( http://tools.ietf.org/html/rfc5216 )


For clients, OpenSSL 1.0.2 introduced a function, SSL_get_server_tmp_key(), which retrieves the ephemeral key parameters used by the server.
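
If you just want to eyeball a server rather than write code against the API, the 1.0.2 command-line client prints the same information as a "Server Temp Key" line during the handshake (example.com is a placeholder; forcing EDH makes the server show its DH parameters rather than ECDH):

    # Look for e.g. "Server Temp Key: DH, 1024 bits" in the output
    openssl s_client -connect example.com:443 -cipher "EDH" < /dev/null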


Hackers have really lost their sense of humor. This would have been at the top of the page had I come up with this name: https://www.youtube.com/watch?v=zZ-oafGPkqg (with an author name of Karl Hungus)


NSFW


" We further estimate that an academic team can break a 768-bit prime and that a nation-state can break a 1024-bit prime. Breaking the single, most common 1024-bit prime used by web servers would allow passive eavesdropping on connections to 18% of the Top 1 Million HTTPS domains."

We need a new web.


Is there a support list somewhere for DHE >1024?


Use a proper ciphersuite and stop worrying about downgrade attacks. https://wiki.mozilla.org/Security/Server_Side_TLS


Looks like it's not enough just to set a proper ciphersuite, it's also important to reconfigure the dhparams (which apparently isn't even possible in most common apache versions). Interestingly, Dovecot seems to have had the foresight to automatically regenerate dhparams weekly by default.


"The recommendations in this guide provide configurations that are not impacted by this."



From your link: "Before Apache 2.4.7, the DH parameter is always set to 1024 bits and is not user configurable. This has been fixed in mod_ssl 2.4.7 that Red Hat has backported into their RHEL 6 Apache 2.2 distribution with httpd-2.2.15-32.el6. Future versions of Apache will automatically select a better value for the DH parameter."

So I suppose he does mean like that.
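
For anyone on Apache 2.4.8 or later (built against OpenSSL 1.0.2), the explicit knob is there too; on 2.4.7 the documented route, as I understand it, is to append your PEM-encoded DH parameters to the file named by SSLCertificateFile. A sketch (the path is arbitrary):

    # Apache 2.4.8+ only
    SSLOpenSSLConfCmd DHParameters "/etc/ssl/dhparams.pem"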


Just so you know:

When TLS was committed to OpenSSL, the code used the var 'payload' for []byte... not 'msg' or 'data'....

I wrote about it and some other facts and was downvoted to oblivion.

Another guy from the W3C team wrote an article, "TLS is not HTTPS", but they are selling it as HTTPS - same name, logo, icon etc. - and shipping it to the whole world. His post was removed also..

We're heading towards a global root.

Found the article: http://www.w3.org/DesignIssues/Security-NotTheS.html

My take on the situation: http://8ch.net/g/res/2200.html#2363


The TLS implementation has already had 2 serious show-stopping bugs if I remember, and Heartbleed on top of that.

They were beginner crypto mistakes, like reusing a nonce / using a null nonce...

And now the new Logjam bug.

Keep downvoting guys, the government needs you!


I suspect you're being downvoted for linking to 8chan.


And/or people aren't much interested in reading about variable names.



