
Logjam TLS attack - noondip
https://weakdh.org/
======
mdewinter
My side project tries to give secure default settings for all major web servers
and other software (like haproxy, mysql, mail servers, etc.):
[https://cipherli.st/](https://cipherli.st/)

From the start it has listed the suggestion to set up 2048-bit or larger DH parameters.
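
For example, you can generate your own parameters with OpenSSL (the output path is just an example):

    # Generate unique 2048-bit DH parameters; this can take a few minutes
    openssl dhparam -out /etc/ssl/dhparams.pem 2048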

If you want to test your site for export ciphers, you can try my other side
project: [https://tls.so/](https://tls.so/) - you can also use the SSL Labs
test, but mine is faster for just testing cipher suites. (And it's open source,
so you can use it internally as well.)
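
For a quick manual spot check, openssl s_client can also attempt an export-grade handshake, assuming your local OpenSSL build still ships the EXPORT ciphers (example.com is a placeholder):

    # If this handshake succeeds, the server accepts export-grade ciphers
    openssl s_client -connect example.com:443 -cipher 'EXPORT' < /dev/null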

Mozilla also has a good wiki page for SSL settings:
[https://wiki.mozilla.org/Security/Server_Side_TLS](https://wiki.mozilla.org/Security/Server_Side_TLS)

~~~
r1ch
Any chance of adding STARTTLS support to tls.so? I've found a lack of decent
tools to scan FTP servers, SMTP servers, etc.
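
In the meantime, openssl s_client can probe STARTTLS endpoints by hand; it supports smtp, pop3, imap, and ftp (host names are placeholders):

    # Probe an SMTP server via STARTTLS
    openssl s_client -starttls smtp -connect mail.example.com:25 < /dev/null
    # Probe an FTP server via AUTH TLS
    openssl s_client -starttls ftp -connect ftp.example.com:21 < /dev/null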

------
pbsd
This is not at all surprising; even I had pointed out the looming DH problem
in an earlier thread:
[https://news.ycombinator.com/item?id=8810316](https://news.ycombinator.com/item?id=8810316)

One small nit with the paper: it is claimed that there had been technical
difficulties with the individual logarithm step of the NFS applied to discrete
logarithms, making individual logs asymptotically as expensive as the
precomputation. Commeine and Semaev [1] deserve the credit for breaking this
barrier; Barbulescu did improve their L[1/3, 1.44] to L[1/3, 1.232] by using
smarter early-abort strategies, but was not the first to come up with 'cheap'
individual logs.

[1]
[http://link.springer.com/chapter/10.1007%2F11745853_12](http://link.springer.com/chapter/10.1007%2F11745853_12)
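
(For readers unfamiliar with the notation above: these are L-notation cost estimates, where, for a modulus q,

    L_q[1/3, c] = \exp\left((c + o(1))\,(\ln q)^{1/3}\,(\ln\ln q)^{2/3}\right)

so dropping the constant c from 1.44 to 1.232 is a substantial asymptotic speedup for the individual-log step.)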

~~~
AlyssaRowan
It's not _particularly_ surprising to the IETF TLS Working Group either, which
is at least partially why [https://tools.ietf.org/html/draft-ietf-tls-negotiated-ff-dhe...](https://tools.ietf.org/html/draft-ietf-tls-negotiated-ff-dhe-09) exists.

Minimum _2048_ bits, please note. 1024 is not safe - and has not been for
quite some time. GCHQ and NSA can definitely eat 1024-bit RSA/DH for breakfast
at this point (although it does still take them until lunch).

(Those of you still using DSA-1024 signing keys on your PGP keyrings should
re-roll RSA/RSA ones. I suggest 3072 bits to match a 128-bit work factor given
what we know, but 4096 bits is more common and harmless. 2048 bits is a bare
minimum.)
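
With GnuPG 2.1 or later, the interactive generator lets you pick "RSA and RSA" and the key size directly (older versions use plain gpg --gen-key); a minimal sketch:

    # Interactive: choose "RSA and RSA", then a key size of 3072 (or 4096)
    gpg --full-gen-key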

~~~
syzzer
> It's not particularly surprising to the IETF TLS Working Group either, which
> is at least partially why [https://tools.ietf.org/html/draft-ietf-tls-negotiated-ff-dhe...](https://tools.ietf.org/html/draft-ietf-tls-negotiated-ff-dhe-09) exists.

This draft /does/ encourage the use of larger keys, but it also encourages the
use of common parameter groups. The weakdh.org site mentions that the use of
common groups is one reason this attack is feasible, and it advises sysadmins
to generate their own parameters. To me, that makes using common groups sound
like a bad move.

The problem is, I lack the knowledge to properly assess whether using common
groups really is a bad move, even with larger group sizes... Is there anyone
here who can?

~~~
acqq
Obviously weakdh has more up-to-date recommendations (public for only a few
hours!), so you should certainly _not_ cling to the _older_ ones published by
the IETF who-knows-when, which could have been influenced by players who
prefer weaker encryption for their own benefit.

I don't understand why you would not believe the weakdh recommendations. The
researchers describe in their paper (1) exactly how they can precompute some
values just once in order to then mount fast attacks on any vulnerable
communication. They proved that common values and too-small values are _both_
dangerous. It's real. And it's definitely not "choose the one you like."
Change both.

1) [https://weakdh.org/imperfect-forward-secrecy.pdf](https://weakdh.org/imperfect-forward-secrecy.pdf)

"Precomputation took 7 days (...) after which computing individual logs took a
median time of 90 seconds."

~~~
syzzer
I strongly believe in 'Audi alteram partem', and like to understand rather
than believe. Hence my question.

For all I know, a few extra bits of parameter length could make the NFS just
as infeasible as generating your own parameters does.

Edit: re-reading my earlier comment I understand your reply better. I've
expanded my question to 'even with larger group sizes', as it indeed is clear
that it _is_ a problem with smaller groups.

~~~
acqq
The newly disclosed research clearly demonstrates that the common parameters
enable "precompute just once, attack fast everywhere", whereas when everybody
simply generates their own values, that approach becomes impossible. The
difference is between days of computation and seconds, in their example.

The difference is many orders of magnitude; it is whether everybody,
everywhere, can be attacked anytime, or just somebody, sometimes, somewhere.

Moreover, the main reason why it should be done is that the expected browser
updates won't block sites with 1024 bits. So all the sites which for whatever
reason still use 1024 bits won't be as vulnerable if they have their own
parameters.

The practice of using the common parameters _already worsens the current
state._ The bad effects of the really bad move _already exist._ The common
parameters are now _provably bad_ and _that won't change_ in the future. Just
don't use the common parameters. Generate new ones everywhere.

And, of course, "minimum 2048 bits, please."

Edit: audi alteram... means "listen to the other side." Which side is the
other side here? The stale information is not "the other side"; it's just
stale.

~~~
syzzer
Thanks for elaborating.

The 'other side' is the people currently working on the negotiated-ffdhe
draft (who I assume are bright people too). The draft was last updated a week
ago (12 May 2015), so their considerations must be quite recent.

I'm just trying to get a sense of the pros and cons. IIRC, generating your own
groups has its problems too. For example, the Triple Handshake attack
([https://www.secure-resumption.com/](https://www.secure-resumption.com/))
could break TLS because implementations did not properly validate DH params.
Allowing only some (set of) pre-defined, known-good params would have stopped
_that_ attack.

To be clear, I'm certainly not arguing for or against using common groups.
Just trying to get a complete picture. (And yes, based on current information
I think too that using unique groups is the right approach.)
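
(Relatedly, OpenSSL can at least sanity-check parameters you generate yourself; a minimal sketch, assuming they live in dhparams.pem. Note this only checks well-formedness locally; it doesn't address protocol-level validation by peers:

    # Check that the parameters are sane; warns if p is not a safe prime
    openssl dhparam -in dhparams.pem -check -text -noout
)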

~~~
schoen
The folks working on that draft have definitely become aware of this research.
Soon we'll see what they have to say about it.

------
agl
Chrome will be increasing its minimum DH group size to 1024 bits:
[https://groups.google.com/a/chromium.org/forum/#!topic/secur...](https://groups.google.com/a/chromium.org/forum/#!topic/security-dev/WyGIpevBV1s)

But, if you need to spend the time updating a server configuration, just
switch to ECDHE instead.
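
To preview which ECDHE suites a given OpenSSL cipher string would enable (the string here is an illustration, not a vetted recommendation; EECDH is OpenSSL's alias for ECDHE key exchange):

    # List the ECDHE suites this string selects
    openssl ciphers -v 'EECDH+AESGCM:EECDH+AES:!aNULL'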

~~~
yuhong
I suggest a 768-bit minimum for now, because of Java. As a side note, the
latest IcedTea 6/7 releases allow 1024-bit DHE, but it is not enabled by
default.

~~~
michaelt
I say this as a Java developer who thinks Java is on the whole pretty decent:
Java is way behind browser vendors when it comes to adding new ciphers and
dropping broken ciphers.

Between the infrequent releases, the slavish devotion to maintaining
backwards-compatibility, the sluggish release distribution, and the general
conservative nature of some Java shops when it comes to upgrading, it's just
not reasonable to expect the average non-java-using organisation to wait for
Java before dropping broken crypto.

If you're running the latest LTS release of Ubuntu you'll get Firefox 37.0.2,
released May 2015. But you'll still get Java 7, despite the fact that Java 8
was released in March 2014 - giving you cutting-edge TLS 1.0 and CBC ciphers
[1]. At least Java 7 looks good compared to Java 6 [2], which has no SNI
support and offers only a stack of weak, obsolete ciphers. And I know some
people who are still running Java 6 in production systems.

If you wait for every last Java 6 holdout before deploying a secure
configuration, you'll be waiting forever. Leave us behind, we'll only slow you
down [3].

[1]
[https://www.ssllabs.com/ssltest/viewClient.html?name=Java&ve...](https://www.ssllabs.com/ssltest/viewClient.html?name=Java&version=7u25)
[2]
[https://www.ssllabs.com/ssltest/viewClient.html?name=Java&ve...](https://www.ssllabs.com/ssltest/viewClient.html?name=Java&version=6u45)
[3]
[http://tvtropes.org/pmwiki/pmwiki.php/Main/IWillOnlySlowYouD...](http://tvtropes.org/pmwiki/pmwiki.php/Main/IWillOnlySlowYouDown)

~~~
mersault
I'd say best practice for Java shops should be to move SSL termination to a
proxy in front of the app server. This doesn't work if you're doing mutual TLS
to authenticate users in your Java stack, of course, and I'm sure there are a
bunch of other use cases where you can't; it would of course be ideal if Java
could keep up with security. But given the state of the Java world, just drop
in a TLS termination proxy wherever possible (nginx works great) and forget
about doing it in Java.
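
A minimal sketch of such a proxy, assuming the Java app listens on localhost:8080; the host name, certificate paths, and parameter file are placeholders:

    # Drop a TLS-terminating vhost in front of the Java app
    cat > /etc/nginx/conf.d/tls-proxy.conf <<'EOF'
    server {
        listen 443 ssl;
        server_name app.example.com;

        ssl_certificate     /etc/ssl/app.crt;
        ssl_certificate_key /etc/ssl/app.key;
        ssl_dhparam         /etc/ssl/dhparams.pem;   # your own 2048-bit group
        ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://127.0.0.1:8080;        # plain HTTP to the app
        }
    }
    EOF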

------
justcommenting
Given all the vitriol on another front-page thread, I wanted to note from the
site that "A close reading of published NSA leaks shows that the agency's
attacks on VPNs are consistent with having achieved such a break."

I'm grateful to people who disclose and fix vulnerabilities instead of leaving
everyone vulnerable, and kudos especially to Nadia Heninger from this team,
whose work has only grown in interest and practical importance over the past
few years.

------
0x0
Why are export-grade ciphers even still a thing? I can't believe that
libraries are still shipped with implementations of those.

Also, it's scary that SSH appears to be partially affected(?)

~~~
userbinator
_Why are export grade ciphers even still a thing._

Because all the countries still can't get along with each other, and thus
export restrictions still exist.

[http://en.wikipedia.org/wiki/Export_of_cryptography_from_the...](http://en.wikipedia.org/wiki/Export_of_cryptography_from_the_United_States#Current_status)

~~~
0x0
Shipping breakable encryption sounds worse than shipping none at all.
Especially when over and over again it becomes a source of vulnerabilities :(

~~~
derefr
> Shipping breakable encryption sounds worse than shipping none at all.

Well, yeah, that's the idea - "export-grade cryptography" essentially means
"cryptography we, as a state actor, can win against in a cyberwar."

~~~
darklajid
But the GP's point seems to be that this seems to turn into

"cryptography we, as a state actor, can win against in a cyberwar, but which
ultimately will end up being exploited at home as well, since we're all using
the same partially broken code base"

~~~
derefr
Indeed; maybe it would be better named "export-only cryptography"—not for
domestic use by order of COMSEC.

------
616c
From the TLS sysadmin deployment guide
([https://weakdh.org/sysadmin.html](https://weakdh.org/sysadmin.html)):

> Generate a Strong, Unique Diffie Hellman Group. A few fixed groups are used
> by millions of servers, which makes them an optimal target for
> precomputation, and potential eavesdropping. Administrators should generate
> unique, 2048-bit or stronger Diffie-Hellman groups using "safe" primes for
> each website or server.

Is this why, with the easy-rsa package
([https://github.com/OpenVPN/easy-rsa](https://github.com/OpenVPN/easy-rsa)),
one should always build DH parameters first? People are using pre-seeded ones
when they don't use this tool first!? That is scary.
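
(For reference, the relevant commands differ between easy-rsa versions; a sketch:

    # easy-rsa 3.x
    ./easyrsa gen-dh
    # easy-rsa 2.x
    . ./vars && ./build-dh

Both write fresh DH parameters rather than reusing a shared group.)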

~~~
trimble-alum
For OpenSSH, take a close, hard look at /etc/ssh/moduli (or wherever it lives)
too, in addition to EC curves. I would consider deleting the default moduli
and regenerating them, as sketched below.

[https://stribika.github.io/2015/01/04/secure-secure-shell.ht...](https://stribika.github.io/2015/01/04/secure-secure-shell.html)
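
Regenerating the moduli file is a two-step ssh-keygen process (paths are examples; the screening step can take hours):

    # 1. Generate candidate primes
    ssh-keygen -G /tmp/moduli-2048.candidates -b 2048
    # 2. Screen candidates for safe primes and install the survivors
    ssh-keygen -T /etc/ssh/moduli -f /tmp/moduli-2048.candidates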

In my mind, more generally: EC tries to stretch crypto algorithms further
using fewer bits, but its implementations are harder to prove correct, both
theoretically (by being more esoteric, so fewer eyeballs are able to catch
errors) and functionally (by having more moving parts). Why hasn't more
conservative stretching/extension of proven algorithms happened?

Also, even more broadly, this and a lot of other crypto decisions in TLS come
off as seat-of-the-pants guesswork and cooking by committee rather than
simple, feature-minimal, bullet-resistant standards (how many over-engineered
and over-featured encodings do certs need?). The result smells like a pile of
poo that will get recall after recall, patch after patch, until something
about the inputs and the decision-making process changes. We can't keep having
OpenSSL and the TLS committee say "yes" instead of "no" to feature creep,
throwing every little edge use case straight into the production 1.x branch;
the codebase is huge enough, and it's nearly impossible to compile out all the
little-used crap, even in forks. Doing the same thing and expecting a
different result is either stupid or insane, or both. OpenSSL and TLS
leadership: process changes, perhaps?

------
zobzu
This isn't exactly news, but I guess a good site with a codename is needed to
fix things nowadays.

Also, it's nice and dandy to have Postfix use SSL, but SMTP TLS is always set
to opportunistic and can be downgraded to no encryption by a MITM - because,
you know, compatibility.

~~~
talideon
DANE can at least partly mitigate this, though that assumes you have DNSSEC
set up, along with TLSA records for your mail servers.

Setting up DNSSEC is another kettle of fish, mind.
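
Once the records are published, a reasonably recent dig can check them (the host name is a placeholder):

    # Look up the DANE TLSA record for an SMTP server, requesting DNSSEC data
    dig +dnssec TLSA _25._tcp.mail.example.com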

~~~
yuhong
Yea, DNSSEC has its own problems. Personally I think a version of HSTS for
mail is the best idea for now.

------
methou
I'm using Chrome Version 42.0.2311.152 (64-bit), and so far it's still
vulnerable to this. I believe it's the latest production version.

~~~
buu700
I'm running Version 43.0.2357.65 (64-bit) Chrome, which is also vulnerable to
this. I believe it's the latest production version.

~~~
nodesocket
43.0.2357.65 (64-bit) as well (latest); it says it's vulnerable.

~~~
captn3m0
I'm on the unstable channel here (44.0.2403.4 (Official Build) dev (64-bit))
and even that seems vulnerable.

------
ericcholis
I've seen many nginx tutorials omit a pretty critical (IMO) command before
applying/reloading config changes: nginx -t.

nginx -t will quickly tell you if you've screwed something up before you
actually try to apply the changes.
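
A habit worth scripting (systemd shown; substitute 'service nginx reload' on init-script systems):

    # Only reload if the new configuration parses cleanly
    sudo nginx -t && sudo systemctl reload nginx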

~~~
ceequof
`service nginx configtest` does the same thing.

~~~
garethsaxby
That's good to know too, yes, but the nginx binary itself is always available
for testing, whilst a service command normally requires an init script,
systemd unit file, or similar to be present, and those can vary between
packages and distributions.

------
paralelogram
Is TLS_DHE_RSA_WITH_AES_256_CBC_SHA with a 768-bit group less secure than
TLS_RSA_WITH_AES_256_CBC_SHA? Doesn't DHE just add an extra perfect forward
secrecy layer to the non-DHE cipher suite without changing anything else?

~~~
schoen
If you can break the DH exchange in the DHE ciphersuite, you can recover the
session key and decrypt the traffic. That can be a complete break of that
particular session without any need to break the server's long-term RSA key.
It's quite possible to have a situation where the TLS_RSA... used a 2048-bit
RSA key while the TLS_DHE... used a 1024-bit (or worse) DH parameter. In that
case an attacker could have an easier time breaking the 1024-bit discrete
logarithm problem compared to breaking the 2048-bit RSA problem.

To answer your question more directly, the DHE does use a different form of
key establishment which uses different algorithms, different parameters, and
potentially different parameter sizes. The forward secrecy is a desirable
property in itself, but under some circumstances implementations might use
weaker cryptographic parameters in conjunction with it.

(Daniel Kahn Gillmor first told me about this problem; in TLS_DHE_RSA the RSA
key is used for authentication of the DH key establishment -- to stop someone
from doing an active MITM attack -- but not for the key establishment itself.
In TLS_RSA the RSA key is used directly for key establishment. Thus when you
use TLS_DHE_RSA, your security levels may be limited by the weakest link
mechanism that you rely on for security, which could conceivably be the DH
exchange, depending on other features of your configuration and environment. A
number of folks have been aware of that particular problem to some extent for
a while and even discussed it at, for instance, the IETF TLS working group,
but this paper takes things considerably further and makes the problems really
concrete.)

Edit: upthread you can find a link to pbsd and AlyssaRowan discussing forms of
this problem half a year ago, including the fact that you can get less
security from weak DH parameters than you would have gotten from strong RSA
parameters, despite the presence of forward secrecy. In some settings, cost
trade-offs are possible for attackers between breaking particular sessions
and breaking all traffic to a particular service.
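
You can see this split directly in OpenSSL's cipher list, where Kx is the key-exchange mechanism and Au the authentication mechanism (OpenSSL uses its own suite names corresponding to the two IANA names above):

    # DHE-RSA-AES256-SHA reports Kx=DH, Au=RSA;
    # plain AES256-SHA reports Kx=RSA, Au=RSA
    openssl ciphers -v 'DHE-RSA-AES256-SHA:AES256-SHA'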

------
edgan
Looks like Amazon's EC2 ELBs have a common 1024-bit DH group. I don't see a
way to fix it without moving away from ELBs, or getting Amazon to fix it.

~~~
zowers
[https://forums.aws.amazon.com/ann.jspa?annID=3061](https://forums.aws.amazon.com/ann.jspa?annID=3061)
> Today, Elastic Load Balancing released a new default SSL Security Policy
> that no longer includes Ephemeral Diffie-Hellman (DHE) ciphersuites. ELB
> offers predefined SSL Security Policies to simplify the configuration of
> your load balancer by providing a recommended cipher suite that adheres to
> AWS security best practices.

1. Select your load balancer (EC2 -> Load Balancers).
2. In the Listeners tab, click "Change" in the Cipher column.
3. Ensure that the radio button for "Predefined Security Policy" is selected.
4. In the dropdown, select the "ELBSecurityPolicy-2015-05" policy.
5. Click "Save" to apply the settings to the listener.
6. Repeat these steps for each listener that is using HTTPS or SSL, for each load balancer.
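
If you would rather script it, something like the following should work with the AWS CLI for classic ELBs (the load balancer and policy names are placeholders; treat this as a sketch):

    # Create a policy referencing the new predefined security policy
    aws elb create-load-balancer-policy \
      --load-balancer-name my-load-balancer \
      --policy-name Logjam-Fix-2015-05 \
      --policy-type-name SSLNegotiationPolicyType \
      --policy-attributes AttributeName=Reference-Security-Policy,AttributeValue=ELBSecurityPolicy-2015-05
    # Attach it to the HTTPS listener
    aws elb set-load-balancer-policies-of-listener \
      --load-balancer-name my-load-balancer \
      --load-balancer-port 443 \
      --policy-names Logjam-Fix-2015-05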

------
tracker1
For those running IIS, I'd suggest looking at IIS Crypto [1]. NOTE: you will
need a reboot after the change, and you may see issues with ancient browsers
(IE8, Android 2, etc.).

[1]
[https://www.nartac.com/Products/IISCrypto/](https://www.nartac.com/Products/IISCrypto/)

------
dolfje
So for a full test, I would recommend [https://tls.so/](https://tls.so/) or
[https://www.ssllabs.com/ssltest/](https://www.ssllabs.com/ssltest/). But if
you just want to check whether your server has the Logjam vulnerability, I
would suggest
[http://security.uwsoftware.be/logjam](http://security.uwsoftware.be/logjam).
It just says No, Yes, or "Yes, by the NSA" (for 1024-bit DH keys). A little
bit of humour inside a scanner.

------
yeukhon
For nginx, I think you can simply append !EXPORT to your cipher string so the
EXPORT ciphers aren't supported. There was a similar security bug which
advised users to disable some EXPORT ciphers... correct me if I am wrong.

~~~
ryan-c
nginx takes OpenSSL cipher specs, as do Apache and a lot of other programs.

The `openssl ciphers -v CIPHERSPEC` command will list out what's enabled with
a given setting.
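
For example, to confirm that nothing export-grade survives a given spec:

    # Should print no EXP- suites
    openssl ciphers -v 'DEFAULT:!EXPORT'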

------
jpgvm
Export-grade ciphers have always been a blight on TLS.

I hope this is the last nail in the coffin and we see the last of them
disabled in the wild; you would have thought FREAK would have done the job.

------
wolf550e
So a secure TLS client must refuse to negotiate DHE if the server uses one of
the common DHE param values, just as it would in the case of a Debian
no-entropy private key used in a cert?

------
higherpurpose
So the NSA can break 1024-bit DHE? Is ECDHE good enough to hold us for now, or
do we need to find an alternative to DHE ASAP?

~~~
qrmn
Two parts to your question:

1. Yes, the NSA can likely break 1024-bit DHE at scale - as well as 1024-bit
RSA or anything smaller. (Probably also the RC4 stream cipher, if you didn't
listen and are still using that.)

2. ECDHE is totally different from traditional finite-field DHE. ECDHE over
P-256 (or better curves) is not vulnerable to this attack.

~~~
tptacek
Depends on what we mean by "at scale", right? That's why the shared parameter
aspect of this attack is so meaningful: they probably can't solve 1024 bit DH
problems "on demand".

~~~
sarciszewski
If they can solve the 1024-bit DH problem "on demand" we should probably be
eying 2048-bit DH with some suspicion.

(I'm agreeing with your point.)

~~~
tptacek
Attacks don't really scale like that. If 2048 bit prime field discrete logs
fail, all of prime field discrete logs (and with it probably RSA) will
probably be done for.

There is, as I understand it, a huge performance penalty for using keys larger
than 2048 bits. People should just use 2048 bit keys, or stop using
conventional prime field public key algorithms altogether, is what I think.

~~~
sarciszewski
Well, if they could crack 1024-bit primes in under a day, how long would you
anticipate 2048-bit would last? That's why I said eye it suspiciously.

I don't think the NSA can do this on demand.

> People should just use 2048 bit keys, or stop using conventional prime field
> public key algorithms altogether, is what I think.

I believe moving towards ECC (especially djb's work) is probably the right
move.

~~~
tptacek
Much much longer. We can reason about how an attacker with very large compute
resources can break 1024 bit DH/RSA. Given the feasibility of a 1024 break, we
can also reason about optimizations that would serve to make that attack
deployable in the real world.

We can't do the former thing _at all_ for 2048 bit DH/RSA, let alone the
latter.

Entities attacking 1024 bit keys are doing something we've believed would be
inevitable for something like a decade.
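
(A back-of-the-envelope sketch of why "much much longer", using the asymptotic NFS cost

    L_q[1/3, 1.923] = \exp\left((1.923 + o(1))\,(\ln q)^{1/3}\,(\ln\ln q)^{2/3}\right)

and ignoring the o(1) term: plugging in q on the order of 2^1024 versus 2^2048 gives roughly 2^87 versus 2^117 operations - about a factor of a billion. That gap is why a feasible 1024-bit break, by itself, says very little about 2048 bits.)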

When 2048 bit DH/RSA falls, DH and RSA will probably fall with them; they
won't fall because compute resources eventually catch up to them, but rather
because we discover something about integer factorization or discrete logs
that makes prime field cryptography altogether unsafe.

------
eckes
At least JSSE (Java) uses random primes (generated at startup). However, the
Java 8 default of 1024 bits is rather weak (not to mention the 768 bits of
Java 6 and 7). What's worse is that clients accept down to 512 bits. (And the
client side is harder to protect with SSL accelerators.) But there is
SunPKCS11-NSS as an alternative provider.

~~~
eckes
I am wrong, no random primes.

------
adekok
Everyone should note that these TLS attacks may also work on EAP, i.e. WiFi
authentication, or 802.1X. Since HTTP is so much sexier than EAP, no one pays
attention to EAP. :(

~~~
baby
I don't think the export ciphersuites are present in EAP (
[http://tools.ietf.org/html/rfc5216](http://tools.ietf.org/html/rfc5216) )

------
caf
For clients, OpenSSL 1.0.2 introduced a function, SSL_get_server_tmp_key(),
which retrieves the ephemeral key parameters used by the server.
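
The same information is visible on the command line via openssl s_client from 1.0.2 onwards - look for the "Server Temp Key" line (the host is a placeholder):

    # Prints e.g. "Server Temp Key: DH, 1024 bits" or "ECDH, P-256, 256 bits"
    openssl s_client -connect example.com:443 < /dev/null 2>/dev/null \
      | grep 'Server Temp Key'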

------
peterwwillis
Hackers have really lost their sense of humor. This would have been at the top
of the page had I come up with this name:
[https://www.youtube.com/watch?v=zZ-oafGPkqg](https://www.youtube.com/watch?v=zZ-oafGPkqg)
(with an author name of Karl Hungus)

~~~
bch
NSFW

------
spacefight
" We further estimate that an academic team can break a 768-bit prime and that
a nation-state can break a 1024-bit prime. Breaking the single, most common
1024-bit prime used by web servers would allow passive eavesdropping on
connections to 18% of the Top 1 Million HTTPS domains."

We need a new web.

------
carsonreinke
Is there a list somewhere of client and server support for DHE with >1024-bit groups?

------
jvehent
Use a proper ciphersuite and stop worrying about downgrade attacks.
[https://wiki.mozilla.org/Security/Server_Side_TLS](https://wiki.mozilla.org/Security/Server_Side_TLS)

~~~
0x0
Looks like it's not enough just to set a proper ciphersuite; it's also
important to reconfigure the dhparams (which apparently isn't even possible in
most common Apache versions). Interestingly, Dovecot seems to have had the
foresight to automatically regenerate dhparams weekly by default.

~~~
zobzu
You mean like
[https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handsh...](https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam)
?

~~~
nerdy
From your link: _"Before Apache 2.4.7, the DH parameter is always set to 1024
bits and is not user configurable. This has been fixed in mod_ssl 2.4.7 that
Red Hat has backported into their RHEL 6 Apache 2.2 distribution with
httpd-2.2.15-32.el6. Future versions of Apache will automatically select a
better value for the DH parameter."_

So I suppose he does mean like that.

------
mc_hammer
just so you know,

when TLS was committed to OpenSSL, the code used the var 'payload' for
[]byte... not 'msg' or 'data'....

i wrote about it and some other facts and was downvoted to oblivion

another guy, from the w3c team, wrote an article "TLS is not HTTPS", but they
are selling it as HTTPS - same name, logo, icon etc - and shipping it to the
whole world; his post was removed also..

we're heading towards a global root.

found the article:
[http://www.w3.org/DesignIssues/Security-NotTheS.html](http://www.w3.org/DesignIssues/Security-NotTheS.html)

my take on the situation:
[http://8ch.net/g/res/2200.html#2363](http://8ch.net/g/res/2200.html#2363)

~~~
mc_hammer
The TLS implementation has already had 2 serious show-stopping bugs, if I
remember right, and Heartbleed on top of that.

They were beginner crypto mistakes, like reusing a nonce / using a null
nonce...

And now the new Logjam bug.

Keep downvoting, guys - the government needs you!

~~~
sarciszewski
I suspect you're being downvoted for linking to 8chan.

~~~
tedunangst
And/or people aren't much interested in reading about variable names.

