We wrote a blog post, "The perfect SSL nginx configuration" (http://blog.commando.io/the-perfect-nginx-ssl-configuration/), which details all the nginx directives to set to achieve an A+ rating on SSL Labs, including mitigation of FREAK, POODLE, and Heartbleed.
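For context, the shape of configuration the post describes looks roughly like this (a hedged sketch only; the domain, paths, and exact cipher string here are illustrative placeholders, not the article's values):

```
server {
    listen 443 ssl;
    server_name example.com;                                      # placeholder domain

    ssl_certificate     /etc/nginx/ssl/example.com.chained.crt;   # cert plus intermediates
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # No SSLv3 (POODLE); TLS only.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    # Server-preferred order; no anonymous, export, or RC4 suites.
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:!aNULL:!eNULL:!EXPORT:!RC4;

    # HSTS is what typically bumps an A up to an A+ on SSL Labs.
    add_header Strict-Transport-Security "max-age=31536000";
}
```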
Hmm, as a novice capable of setting up a decent Drupal/Nginx/mail (Postfix) server, I'm kind of shocked to get an F rating on SSL Labs with the default, up-to-date, SSL-enabled Debian/Nginx config... Sounds like something to fix, no? Is there really that much need for backwards compatibility? Are A+ servers hard to reach from older browsers or something? Why would the default be so bad?
Somehow, in all my naivety, I have always thought regular apt-get update/upgrade would keep me secure. Seems I'm still vulnerable to POODLE, even. Guess I'll have to keep checking in addition to updating. Should I delete old config files with every update? Do the new ones contain the recommended settings?
Our configuration doesn't support IE6 or IE8 on Windows XP, but that's the only downside. Also, this configuration has 100% forward secrecy :)
Finally, you can get an A+ rating for free with StartSSL's free option, then use the SHA2 intermediate certificate[2]. This is what I use for my PGP keyserver[3].
The problem here is not "how to have a secure configuration", it's really "why is it not secure by default"; we actually need more "secure by default" because it largely reduces the chances of doing it wrong.
You can keep your A+ and add IE8 on XP, plus boost your key exchange to 100%[0], by following Mozilla's TLS docs[1] and sticking with the default Intermediate ciphersuite.
You might also consider disabling server tokens to hide your Nginx version (server_tokens off;) for a bit of 'security through obscurity' and enabling SPDY (listen 443 ssl spdy;) for a performance boost.
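Something like this, for instance (a sketch; the spdy parameter assumes an nginx build with the SPDY module):

```
# In http{} or server{}: hide the nginx version in headers and error pages.
server_tokens off;

# In the server{} block: serve TLS with SPDY enabled.
listen 443 ssl spdy;
```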
Also worth pointing out is the upcoming Let's Encrypt project[2] which will make domain validated certificates free soon.
What prevents me from using 'Modern' is it requires Android 4.4+, which excludes a hell of a lot of mobile users. I'm okay with dropping XP support but dropping Android 4.3 and earlier is too limiting for me.
I have an up-to-date nginx on wheezy, stock Debian packages, with no more than the `listen $IP ssl; ssl_certificate; ssl_certificate_key` directives, and it gets an A on SSL Labs.
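In other words, roughly this stock setup already rates an A there (a sketch with a placeholder IP and placeholder paths):

```
server {
    listen 203.0.113.10:443 ssl;                        # placeholder IP
    ssl_certificate     /etc/ssl/certs/example.pem;     # placeholder paths
    ssl_certificate_key /etc/ssl/private/example.key;
}
```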
Do you know exactly what problem you had? It might have been unrelated to debian's presets.
EDIT: a different server with a many-times-upgraded nginx package (but the same version) has no `ssl_protocols` in /etc/nginx/nginx.conf and so had SSLv3 enabled. So I agree that this can happen. In my case it's probably a consequence of silent upgrades and `Dpkg::Options::=--force-conf{def,new,old}` choosing to preserve existing config files.
These are the ones pulling down the grade (a sketch of the corresponding fixes follows the list):
- This server is vulnerable to the POODLE attack. If possible, disable SSL 3 to mitigate. Grade capped to C. MORE INFO »
- This server supports anonymous (insecure) suites (see below for details). Grade set to F.
- The server supports only older protocols, but not the current best TLS 1.2. Grade capped to B.
- This server accepts the RC4 cipher, which is weak. Grade capped to B. MORE INFO »
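A hedged sketch of the nginx directives that address those findings (the cipher string is illustrative; check it against the clients you need to support):

```
# Drops SSLv3 (POODLE) and enables TLS 1.2.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

# Excludes anonymous suites, export suites, and RC4.
ssl_prefer_server_ciphers on;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!RC4:!MD5;
```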
The server was installed quite some time ago on Digital Ocean; it could be that I just need the most recent default Nginx configs. I'll test. Btw, I have a StartSSL cert; I should have chosen the www subdomain though, not "mail".
I'll do an apt-get purge nginx before reinstalling and then manually add back the old settings.
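Roughly this, assuming you back up the old config first (a sketch; on Debian, /etc/nginx belongs to nginx-common, so purge that too or the config files stay behind):

```
cp -a /etc/nginx /root/nginx-config-backup         # keep the old settings for reference
apt-get purge nginx nginx-common                   # nginx-common owns /etc/nginx on Debian
apt-get update && apt-get install nginx
diff -ru /root/nginx-config-backup /etc/nginx      # re-apply only what you still need
```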
Thank you for taking the time to do that. But, would you consider adding to the article a bit? I come from the slightly older "never run a command you don't understand" school of systems administration. Your second section is OK at describing what's going on in the configuration file, but the first section is a bit sparse.
Ugh. I've seen servers with single-DES-only because they had a state-of-the-art config file in the 1990s that nobody had ever touched later. While I appreciate the intent behind efforts like this, I'm really uncomfortable with advice to configure your server with certain ciphers by name, and not make a plan for checking regularly whether that's still correct.
Does merely "!EXP" not work for this? The intent behind OpenSSL's groups is to avoid exactly this problem.
I'm troubled by this too. Unfortunately, Ivan Ristic has come to the conclusion that it's not possible to build future-proof cipher suite configuration in a generic way:
Yeah, I think "We agitated to make this the default in the upstream project" would be a fantastic way to solve this problem.
"We're providing builds that are patched to have better defaults" or "We're providing a git repository / config management host / something that has an always up-to-date config snippet" might also be okay, as would "Please check back to this blog post regularly". It's the static post that makes me sad.
You'll have to keep your server config up to date in other areas too, however, including reducing attack surface and staying fully patched. Security is never set-it-and-forget-it, and that's the real problem with "state-of-the-art config" files. After you fix this manually, find a way to automate such deployments and sign up for a bunch of mailing lists or RSS feeds. Monitoring would be nice, but awareness is a start.
Staying fully patched, at least in theory, involves taking new sources or binaries from the same source on a regular basis. If we could push config like this in the same fashion, I'd be thrilled (whether that involves them coming from the people who get you your sources/binaries, or from a third party). I just worry that someone's going to follow this advice and then leave their job in a few months, and the next person maintaining the system won't even realize that the cipher suites are customized from the upstream defaults. It's not a particularly normal thing to configure.
That's where (hopefully) the automating part comes in: a file, checked in to version control, that clearly says what's changed. But this is also where automatically patched vulnerability scanners could play a role, just as you'd want to check configurations periodically to be sure no one's gone in with SSH manually...
If you want 100% in all categories (not recommended; you will prevent a lot of browsers from connecting over SSL), you will need to turn off everything other than TLS 1.2 and remove all ciphers weaker than 256 bits. You will also need a 4096-bit certificate signed with SHA-256.
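A sketch of what that looks like in nginx (not something I'd deploy broadly; the certificate itself has to be a 4096-bit key with a SHA-256 signature, and the paths and exact suite list here are illustrative):

```
ssl_protocols TLSv1.2;                                 # TLS 1.2 only
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-GCM-SHA384;   # 256-bit suites only
ssl_certificate     /etc/nginx/ssl/example-4096-sha256.crt;   # 4096-bit key, SHA-256 signature
ssl_certificate_key /etc/nginx/ssl/example-4096-sha256.key;
```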
We left out http headers, because that usually can be done at the application level, or in location{} blocks in nginx. I wouldn't recommend setting headers globally in http{}.
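For example, scoping a header to a single location rather than setting it in http{} (the path, header, and upstream here are illustrative):

```
location /app/ {
    # The header applies only to this path, not globally.
    add_header Strict-Transport-Security "max-age=31536000";
    proxy_pass http://127.0.0.1:8080;    # illustrative upstream
}
```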
Non-EC DHE is basically dead. The param size isn't negotiated in the TLS handshake, so using a larger size actually breaks some clients that only do 1024-bit DH params. At the end of the day, almost all the clients that support larger DH param sizes also support ECDHE, which is faster anyway. You might as well not bother, and just keep a few non-PFS ciphers for those clients to avoid interoperability problems.
Bonus trivia:
ssh-dss (SSH DSA keys) has a vaguely similar problem, which they considered fixing but decided instead simply not to repeat the mistakes when writing the SSH ECDSA spec. This is why ssh-dss keys are effectively limited to 1024 bits.
Yep. Time to give up on anyone using a browser that depends on XP's SSL support. Much like SSLv3, they will get the message when the entire Internet stops loading in their browser.
They will get the message to randomly download something from the net that fixes their problem; if they're lucky, it will be as well-behaved as Superfish.
The Server Side TLS guidelines are pretty good, and the cipher list is pretty similar to the one I use in production. The guidelines and the config generator are a great resource to give to people who are less informed about TLS config.
My only real gripe is that despite almost exclusively using explicit cipher suite names, there are three groups thrown in:
1. kEDH+AESGCM
2. AES
3. CAMELLIA
which then require trailing filters to rule out unwanted side effects. That's a lot more confusing for a lay person to read, and may produce unintended results on untested versions of OpenSSL.
The first group will not output AES ordering in the preferred order (AES128 then AES256). The second one is redundant in my opinion. The third will likewise produce out-of-order results -- if you trust Camellia, wouldn't you prefer to use a forward secret cipher (DHE-RSA-CAMELLIA256-SHA) before a non-forward secret one (AES256-SHA)?
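The expansion and ordering of those groups is easy to inspect on a given OpenSSL build, which is also where the side effects described above show up:

```
# See what each group expands to, and in what order, on the local OpenSSL build.
openssl ciphers -v 'kEDH+AESGCM'
openssl ciphers -v 'AES'
openssl ciphers -v 'CAMELLIA'
```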
On the topic of Camellia, I don't understand why it makes the cut on the intermediate config. No browser ever supported Camellia that didn't also support AES, did it?
Anyway, I would view it as an improvement if all of the cipher suites were listed explicitly with no groups, so that there is no need for complicated filters at the end and the potential of activating something in a different version of OpenSSL that you didn't expect to be there.
I'm curious why you bother with (non-EC) DHE at all? It's an interoperability nightmare thanks to the lack of DH param size negotiation in the TLS handshake, and all the clients that work with larger-than-1024-bit DH params also do ECDHE. And at the end of the day, there aren't really that many DHE-capable clients that won't do ECDHE. For interoperability reasons I prefer to just keep DHE off and let those rare clients use non-PFS suites.
PS: you're my hero for making this page to begin with. I often direct people to it who ask about SSL settings, even if I have my own tweaks to the list. It's useful for more than just webservers too.
That's not acceptable for us, which is why DHE is there. Mozilla aims to provide the best possible security to the largest number of users, and that drives a number of the choices in the recommended ciphers.
OpenSSL has way too many options that reduce security. A lot of that legacy code needs to be removed outright. Not turned off by some flag, not controlled by some environment variable, removed.
(And then, when Rust settles down, OpenSSL needs to be rewritten in Rust, as cleanly as possible.)
They're sure trying. Right now, they're struggling to turn off OpenSSL's "dynamic engine", which allows loading and unloading new crypto engines while OpenSSL is running. In case someone hot-plugs a USB crypto device, perhaps?
There's stuff in there that 0.001% of users want. It creates a risk for everyone else.
You mean building our web security infrastructure on a "design by committee" kitchen sink protocol with a million config options was a bad idea?
I'm always surprised at the lack of simplicity in FOSS projects. Just because it's easy to add a feature or option doesn't mean it should be done. Sane defaults, which yes will sometimes break legacy systems, make sense. Moving on to removing old features, which yes will break legacy systems, makes sense. Instead, there's this "who moved my cheese" mentality that is really ugly.
The top-voted comments on HN are just obscure blog posts of questionable validity claiming "wait, wait, guys, OpenSSL is fine; it's admins that suck, because this super-obscure config kinda sorta fixes this and they should be using it!" These blog posts are symptoms of the real problem. I shouldn't have to reconfigure my entire SSL infrastructure every couple of months.
It's a pity that this ELB policy (ELBSecurityPolicy-2015-02) also disables 3DES. For older browsers (for instance IE8, see https://www.ssllabs.com/ssltest/viewClient.html?name=IE&vers...) the only options with a good enough key length are RC4 and 3DES.
Newer browsers also have AES, so they don't need 3DES, but it's still useful as a fallback for older clients, and it's still considered secure (but slow).
I can't believe that they are outright naming vulnerable sites, that is really classless. Even if the data could be gathered by an attacker now that a vulnerability is known, you don't need to go the extra mile to provide it.
I disagree. If seeing their name on this list lights a fire under them to fix it that much faster, that's a good thing. Besides, if you are an attacker capable of exploiting this vulnerability in the wild, this is the first and easiest part of the process. Scanning the top 1M sites would take you no time at all.
Edit: what really is annoying is that the sysadmin guide is "Coming Soon!". That is the irresponsible part: "here look we broke TLS, we'll tell you how to fix it at 11!"
It is "Coming soon", but if you click on it the section of the page it links to does tell you what to do (disable export ciphers), and furthermore links to a detailed guide [1] on setting up a secure set of ciphers, which even includes an automated configuration generator [2].
To be honest, I'm not sure what more they're hoping to add there.
I really disagree with your perspective here, but I do concur that a fast fix is desirable for any impacted site. Notifying impacted sites ahead of public disclosure would have been a better move, and particularly ahead of public shaming and attacker targeting.
While these notifications may have gone out, there is no reference to any such thing on the page. Also: do they plan to update this list? Or are these sites to be shamed forever?
edit: and yes, the lack of a steps-to-fix is unforgivable. This feels like a race to be first rather than a race to responsibly release and resolve the issue all around.
Especially considering that the fix is beyond trivial:
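A sketch of what that fix might look like, for nginx and Apache (fold !EXPORT into whatever cipher string you already use; the HIGH-based string here is just an example):

```
# nginx: append !EXPORT to your existing ssl_ciphers string.
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT;

# Apache (mod_ssl): same idea.
SSLCipherSuite HIGH:!aNULL:!eNULL:!EXPORT
```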
> Notifying impacted sites ahead of public disclosure would have been a better move
Notifying that many affected websites is practically the same as making it public, and could've resulted in letting attackers know about this before the public (and any affected websites that aren't on your list) knows about it and is able to fix it.
There are too many names on that list - not to contact, but to trust. To everyone that you give secret advance notice, you're potentially handing a zero-day.
That's true. Have they contacted them now? Do these places which will only fix a problem if they're shamed into it actually know that they are on the wall of shame?
More to the point: has a widespread public vulnerability ever before been released alongside a list of everyone who is vulnerable to it? I can't recall such a thing ever happening.
The same folks providing the list this time around also made one for Heartbleed. It was posted roughly the same time as the initial disclosure, from what I recall.
This sort of proves my point from another comment: they stopped updating the list shortly after it was posted, and so all of these domains are forever stuck on the shame list.
Viewing domains from the Alexa top 1M list so many times today also makes it very clear that it is total crap.
Just food for thought (I agree with you that, for anyone calling themselves an attacker, enumerating the top 1M sites is baby stuff): would the author publish google.com or twitter.com if google.com / twitter.com were among the affected sites? Would we consider google.com more important than sohu.com and thus be less likely to publish google.com without first notifying Google? You could certainly do your due diligence by notifying everyone on that list, giving them a day, and then publishing the full disclosure. I don't know. But I am interested in the timeline, and it looks like this CVE might have been out for a while?
Certainly there is one site ranked #27, but I doubt you will get anything out of reporting that to the site administrator. I am pretty sure that site (a Chinese portal and search service) does not have a bug bounty.
google.com was never going to be on the list, because the researchers specifically talked to Adam Langley at Google ahead of the public disclosure [1] and thus provided advanced warning.
Some companies will always receive early warnings about major security vulnerabilities, and that makes sense: it lets them gather details about the vulnerability and its exploits, and minimizes the negative impact of an announcement. Other companies get to find out about it the day of the public announcement -- but they don't generally also find themselves on a wall of shame the same day.
"The idea of a branded exploit – one that is carefully curated for easy consumption – is a new one. Historically obfuscation, either real or inadvertent, has been the watchword in computer security mostly because not everyone cared about major exploits. Heartbleed, in a way, was different. It was worldwide, very dangerous, and oddly photogenic. Whereas a Java exploit or Adobe Reader problem is “invisible” to the average user, the idea of a hacker watching your passwords scroll, Matrix-like without security systems setting off alarm bells is compelling and frightening. By creating a “bugs 2.0″ page for the exploit, Codenomicon inadvertently allowed the average user to understand and potentially react to the problem."
EDIT: To the OP, I totally misread your point. I thought you were complaining that people were naming vulnerabilities (i.e. FreakAttack, Heartbleed, etc.), whereas you said "naming the sites that had such a vulnerability". I agree with you, it's a bit tasteless. My bad! I'm leaving my comment here anyway, because I do think it's worth noting the benefits of branding a known SSL bug/exploit.
How vulnerable is accessing these sites? If I tracert to them, the few that I have tried go from the ISP to high-tier transit to a cloud hosting company. Doesn't seem like much of an attack vector beyond someone with a court order getting access to servers along the route.
They have been known to be weak literally since their inception. The entire reason for export cipher suites was to create encryption that could be broken by the US government.
No one should have permitted them since the export control was lifted in 2000.
That does not change the fact that some sites did in fact continue to permit them as 'last resort' ciphersuites, to ensure total browser coverage. This did not compromise site security for users who supported actually secure ciphersuites -- until now.
Responsible disclosure should mean that impacted sites (if they have been identified) should be informed before being publicly shamed. Doesn't matter if they were doing something dumb, it wasn't a known security vulnerability before now.
If you're using AWS Elastic Load Balancer, then the quick fix is (a CLI sketch follows the steps):
1) Select the load balancer you want to edit
2) Click the "Listeners" tab
3) Click "change" under the "Cipher" column for the HTTPS row
4) Select the most recent pre-defined security policy, from 2015-02.
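If you'd rather script it, the ELB CLI equivalent looks roughly like this (the load balancer and policy names are made up; the referenced predefined policy is the 2015-02 one from step 4):

```
# Create a policy that references the 2015-02 predefined security policy.
aws elb create-load-balancer-policy \
  --load-balancer-name my-load-balancer \
  --policy-name FREAK-fix-2015-02 \
  --policy-type-name SSLNegotiationPolicyType \
  --policy-attributes AttributeName=Reference-Security-Policy,Value=ELBSecurityPolicy-2015-02

# Attach it to the HTTPS listener on port 443.
aws elb set-load-balancer-policies-of-listener \
  --load-balancer-name my-load-balancer \
  --load-balancer-port 443 \
  --policy-names FREAK-fix-2015-02
```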
Is there an easy way to check our own servers? I can see the fix is to add !EXPORT to the end of the cipher list, but how do we check that the server requires the fix?
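One quick check, from any machine whose OpenSSL build still includes export ciphers: offer only export-grade suites and see whether the handshake completes (the hostname is a placeholder):

```
# A completed handshake means the server still accepts export-grade suites
# and needs the fix; a handshake failure is what you want to see.
openssl s_client -connect example.com:443 -cipher 'EXPORT' < /dev/null
```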
Really disappointed with this announcement. Some of the other named exploits have come with repro instructions and usually with a fix (shellshock notwithstanding). This is just a description and a shame list.
I hate to be a jerk about this, but this isn't a rhetorical question. Are there any simple instructions? 1 2 3 and you're fixed? Why didn't this come on the same page as the disclosure? (that last question can be taken as rhetorical, for now)
That isn't very simple either, and it's only for nginx.
This is all asking me to learn about ciphers to patch a security hole (and know for sure that it's patched). I don't think it's unreasonable to expect otherwise for a security hole. A few people voted up my parent comment so I don't think I'm alone in this.
For me personally, the only important thing I have is on Heroku, which I presume has its own set of instructions, which they somehow haven't executed themselves or emailed us about yet. Unless this also affects SSH?
To me the whole idea of negotiating ciphers seems broken: a man-in-the-middle will always choose the weakest one.
I guess the argument is that cipher negotiation lets you implement stronger crypto without defining a new protocol version, but what is the point of that? An attacker will just negotiate for the weaker cipher anyway (unless this negotiation is cryptographically protected too of course, but this seems so complex in comparison with the rather meaningless "goal" of cipher negotiation).
It's interesting to me that Firefox is supposedly not vulnerable, yet on both my laptop (Windows 8.1, Firefox 36) and my desktop (Windows 7, Firefox 36) the website (freakattack.com) says I AM vulnerable?
"Warning! Your client is vulnerable to CVE-2015-0204. Even though your client doesn't offer any RSA EXPORT suites, it can still be tricked into using one of them. We encourage you to upgrade your client. "
If you want to check your domains/servers, not just your clients, I updated a cipher verification script to test just Export (EXP) ciphers via openssl: https://gist.github.com/degan/70e8059507d173751294
This is a very disappointing trend in security. Publicly shaming sites into action is not a benefit that outweighs making it easier for attackers. It's ridiculous to argue that it is.
Running zmap or masscan is such a low barrier to entry (compared to the expertise required to then factor the key and set up a MitM attack) that it's hard to argue this is really making it that much easier for attackers.
Aside from the benefit of pressuring these sites into fixing the problem, it also benefits users, who can now make an informed choice about whether to (say) visit a vulnerable site at a cafe or wait until they get home.
I disagree with this perspective entirely. There are many more users of these sites than operators. Assume that a site is no longer secure; operating any of these sites while claiming secure comms is therefore fraudulent. This fraud is obviously unintentional, of course, but the greater damage is to the user, not the site.
Secondly, it saves attackers a trivial amount of time. If they're able to exploit this problem, scanning for its existence is orders of magnitude easier.
Do you know if they are only scanning or reporting the 'www' sites, or are they listing the main domain even if it's just a single misconfigured server, subdomain, etc.?
Details are sparse, but the text file is literally bare domains and an IP that in my testing is always the A record for domain.blah. I don't think they're even looking at www.domain.blah, let alone actually crawling these sites or otherwise exhausting their domain space.
I suspected as much. It makes this a lot less useful, but, I guess it's more like ringing an alarm than being precise. On the other hand for some sites this might amount to a false alarm if the tested address has no critical service running on it. Mind you they should all be remedied, but some more hurriedly than others.
I think it's particularly misleading because some sites only run redirector services on domain.blah for the purpose of sending you to www.domain.blah.
Yes the problem should still be remedied, but no customer data flows through this service, and the connection would be renegotiated after the redirect on systems that may bear very little resemblance technically.
I would agree with you for a vulnerability like Heartbleed, where the attack was against the site itself. This is an attack against the users of these sites, so users need to know which sites are vulnerable. Also, attackers are far more capable of gathering this data on their own than users are.
freakattack.com resolves to an IP owned and managed by the University of Michigan. I could not visit the site because that IP is on my firewall's ban list, due to unauthorized vulnerability testing against my home network.
As an aside, I wonder why our tax dollars are being used to support unauthorized vulnerability testing and to host a .com commercial site?
Is it legal for the person/people operating freakattack.com to use US Tax Income to fund their own commercial efforts using University resources? I didn't graduate college, maybe it's legal for them to do this?
That was probably just a random student who learned some fun stuff in Security class and slept through the Ethics lesson. I can't speak for UMich, but security research at my university (NC State) has a very strict "don't attack civilians" policy.
> hosting a .com commercial site
First off, .com sites are not necessarily commercial. Second, this isn't a commercial site, it's an informational page about a recently discovered TLS vulnerability.
In the first case I read you as saying it's OK to commit a crime against a civilian in the United States as long as [the person didn't mean to]. In the second case, you're saying that since not all .COM domains are used for commercial purposes, and since this one seems to be information-only at the moment, the tax dollars that help universities across the United States run can be used to fund whatever .COM sites students feel inclined to register, for whatever reason they feel is justified.
I heard that rhetoric when the ones you are calling on for help prosecuted Aaron Swartz. All at a time when the NSA was hacking every system it could get its hands on, both around the world and in the USA.
You may be overreacting and unwillingly supporting erosion of civil rights.
What is your evidence that the vulnerability testing was done by someone supported by your tax dollars, instead of by a computer that was part of a botnet controlled by your government's cyberenemies?
It could be a student in the dorms who discovered Metasploit, though. Or someone in the computer lab who has a tool that doesn't need root (or who rooted the lab computer).
You are actually correct that you were scanned by an official, funded project at the University of Michigan. The research team specializes in "internet-wide measurement", meaning they scan for vulnerabilities on a regular basis in order to get a sort of "Internet health report".
Nonetheless, if this bothers you, visiting the IP that scanned you gives you instructions for opting out: http://141.212.122.194
He asked about why UMichigan is inaccessible from your network. How do you know it was a supported student activity and not either a malicious student, or a machine on the UMichigan network that's been compromised?
Have you reported the activity against your home network to UMichigan?