The Freak Attack SSL/TLS Vulnerability (freakattack.com)
292 points by markthethomas on March 3, 2015 | 116 comments



We wrote a blog post: The perfect SSL nginx configuration (http://blog.commando.io/the-perfect-nginx-ssl-configuration/) which details all the nginx directives to set to achieve an A+ rating on SSL Labs, including mitigation of FREAK, POODLE, and Heartbleed.


Hmm, as a novice capable of setting up a decent Drupal/Nginx/mail (Postfix) server, I'm kind of shocked to get an F rating on SSL Labs with the default, up-to-date, SSL-enabled Debian/Nginx config... Sounds like something to fix, no? Is there that much need for some forms of backwards compatibility? Are A+ servers badly reachable from older browsers or something? Why would the default be so bad? Somehow, in all my naivety, I have always thought regular apt-get update/upgrades would keep me secure. It seems I'm still vulnerable to POODLE even. Guess I'll have to keep checking in addition to updating. Should I delete old config files with every update? Do the new ones contain the recommended settings?


Here's how we get an A+ rating[1] for nginx on utilityapi.com:

    ssl on;
    ssl_certificate my_ssl.crt;
    ssl_certificate_key my_ssl.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers EECDH+aRSA+AES256:EDH+aRSA+AES256:EECDH+aRSA+AES128:EDH+aRSA+AES128;
    ssl_session_cache shared:SSL:50m;
    ssl_prefer_server_ciphers on;
    add_header Strict-Transport-Security max-age=63072000;
Our configuration doesn't support IE6 or IE8 on Windows XP, but that's the only downside. Also, this configuration has 100% forward secrecy :)

Finally, you can get an A+ rating for free with StartSSL's free option, by using the SHA-2 intermediate certificate[2]. This is what I use for my pgp keyserver[3].

[1]: https://www.ssllabs.com/ssltest/analyze.html?d=utilityapi.co...

[2]: https://www.startssl.com/certs/class1/sha2/pem/

[3]: https://www.ssllabs.com/ssltest/analyze.html?d=sks.daylightp...
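Regarding the SHA-2 intermediate in [2]: for nginx you typically concatenate it after your own cert and point ssl_certificate at the combined file. A minimal sketch with placeholder file names (the intermediate's actual file name may differ from what's shown):

    # append the SHA-2 intermediate (file names are placeholders) after your own cert
    cat my_ssl.crt startssl_sha2_intermediate.pem > my_ssl.chained.crt
    # then in nginx:
    #   ssl_certificate my_ssl.chained.crt;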


The problem here is not "how to have a secure configuration", it's really "why is it not secure by default"; we actually need more "secure by default" because it greatly reduces the chances of doing it wrong.


You can keep your A+ and add IE8 on XP, plus boost your key exchange to 100%[0], by following Mozilla's TLS docs[1] and sticking with the default Intermediate ciphersuite.

You might also consider disabling server tokens to hide your Nginx version (server_tokens off;) for a bit of 'security through obscurity' and enabling SPDY (listen 443 ssl spdy;) for a performance boost.
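For reference, a rough sketch of those two directives in a server block (the listen syntax assumes an nginx build with the SPDY module, which was later replaced by http2):

    server {
        listen 443 ssl spdy;    # SPDY alongside TLS (pre-1.9.5 nginx; newer builds use http2)
        server_tokens off;      # hide the nginx version string
    }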

Also worth pointing out is the upcoming Let's Encrypt project[2] which will make domain validated certificates free soon.

[0] https://www.ssllabs.com/ssltest/analyze.html?d=brossmanit.co...

[1] https://wiki.mozilla.org/Security/Server_Side_TLS

[2] https://letsencrypt.org/


I'm not wild about having non-FS options that a man in the middle could force a downgrade to. IE8 on XP isn't worth it.


I think using Mozilla's "Modern cipher suite" list should do it, and it seems to be all forward secure:

https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_com...


What prevents me from using 'Modern' is it requires Android 4.4+, which excludes a hell of a lot of mobile users. I'm okay with dropping XP support but dropping Android 4.3 and earlier is too limiting for me.


I have an up-to-date nginx on wheezy, stock debian packages, no more than just `listen $IP ssl ; ssl_certificate ; ssl_certificate_key` directives and it gets an A on ssllabs.

Do you know exactly what problem you had? It might have been unrelated to debian's presets.

EDIT: a different server with a many-times-upgraded nginx package (but same version) has no `ssl_protocols` in /etc/nginx/nginx.conf and so had SSLv3 enabled. So I agree that this can happen. In my case it's probably a consequence of silent upgrades and `Dpkg::Options::=--force-conf{def,new,old}` choosing to preserve existing config files.


These are the ones pulling down the grade:

- This server is vulnerable to the POODLE attack. If possible, disable SSL 3 to mitigate. Grade capped to C.
- This server supports anonymous (insecure) suites (see below for details). Grade set to F.
- The server supports only older protocols, but not the current best TLS 1.2. Grade capped to B.
- This server accepts the RC4 cipher, which is weak. Grade capped to B.

The server was installed quite some time ago on Digital Ocean, it could be that I just need the most recent default Nginx configs. I'll test. Btw, I have a StartSSL cert, though I should have chosen the www subdomain, not "mail". I'll do an apt-get purge nginx before reinstalling and then manually add back the old settings.
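A hedged sketch of directives that address each of those caps, assuming an nginx/OpenSSL new enough for TLS 1.2 (the cipher string here is illustrative, not a vetted list):

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;                 # drops SSLv3 (POODLE), adds TLS 1.2
    ssl_ciphers 'HIGH:!aNULL:!eNULL:!EXPORT:!RC4:!MD5';  # removes anonymous, export and RC4 suites
    ssl_prefer_server_ciphers on;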


You should have a configuration file ending in .dpkg-new with whatever settings the modern nginx package provides, no need to purge and reinstall.
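A hedged example of locating and comparing those packaged configs (paths are the usual Debian ones; the suffix can also be .dpkg-dist depending on how the upgrade went):

    # find config versions dpkg left beside your edited ones
    find /etc/nginx -name '*.dpkg-*'
    # compare against what you're actually running
    diff -u /etc/nginx/nginx.conf /etc/nginx/nginx.conf.dpkg-new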


Thank you for taking the time to do that. But, would you consider adding to the article a bit? I come from the slightly older "never run a command you don't understand" school of systems administration. Your second section is OK at describing what's going on in the configuration file, but the first section is a bit sparse.

It's still a helpful starting point though.



Ugh. I've seen servers with single-DES-only because they had a state-of-the-art config file in the 1990s that nobody had ever touched later. While I appreciate the intent behind efforts like this, I'm really uncomfortable with advice to configure your server with certain ciphers by name, and not make a plan for checking regularly whether that's still correct.

Does merely "!EXP" not work for this? The intent behind OpenSSL's groups is to avoid exactly this problem.


I'm troubled by this too. Unfortunately, Ivan Ristic has come to the conclusion that it's not possible to build future-proof cipher suite configuration in a generic way:

https://news.ycombinator.com/item?id=8473626


Yeah, I think "We agitated to make this the default in the upstream project" would be a fantastic way to solve this problem.

"We're providing builds that are patched to have better defaults" or "We're providing a git repository / config management host / something that has an always up-to-date config snippet" might also be okay, as would "Please check back to this blog post regularly". It's the static post that makes me sad.


You'll have to keep your server config up to date in other areas too, however, including reducing attack surface and staying fully patched. Security is never set-it-and-forget-it, and that's the real problem with "state-of-the-art config" files. After you fix this manually, find a way to automate such deployments and sign up for a bunch of mailing lists or RSS feeds. Monitoring would be nice, but awareness is a start.


Staying fully patched, at least in theory, involves taking new sources or binaries from the same source on a regular basis. If we could push config like this in the same fashion, I'd be thrilled (whether that involves them coming from the people who get you your sources/binaries, or from a third party). I just worry that someone's going to follow this advice and then leave their job in a few months, and the next person maintaining the system won't even realize that the cipher suites are customized from the upstream defaults. It's not a particularly normal thing to configure.


That's where (hopefully) the automating part comes in: a file, checked in to version control, that clearly says what's changed. But this is also where automatically patched vulnerability scanners could play a role, just as you'd want to check configurations periodically to be sure no one's gone in with SSH manually...


If you want 100% in all categories (not recommended; you will prevent a lot of browsers from connecting), you will need to turn off everything other than TLS 1.2 and remove all ciphers weaker than 256 bits. You will also need a 4096-bit certificate signed with SHA-256.

  ssl on;
  ssl_certificate      ssl.nginx;
  ssl_certificate_key  ssl.key;
  ssl_dhparam dhparam.pem;
  ssl_protocols TLSv1.2;
  ssl_prefer_server_ciphers on;
  ssl_ciphers '!ECDHE-RSA-AES128-GCM-SHA256:!ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:!DHE-RSA-AES128-GCM-SHA256:!DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:!ECDHE-RSA-AES128-SHA256:!ECDHE-ECDSA-AES128-SHA256:!ECDHE-RSA-AES128-SHA:!ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:!DHE-RSA-AES128-SHA256:!DHE-RSA-AES128-SHA:!DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!AES128-GCM-SHA256:AES256-GCM-SHA384:!AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!AES128-SHA256:!DES-CBC3-SHA:!CAMELLIA128-SHA:!DHE-RSA-CAMELLIA128-SHA';
  ssl_session_cache    shared:SSL:10m;
  ssl_session_timeout  10m;


https://www.ssllabs.com/ssltest/analyze.html?d=techdroid.com
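For the 4096-bit / SHA-256 certificate part, key and CSR generation would look roughly like this (file names are just examples; your CA signs the CSR):

    # 4096-bit RSA key plus a SHA-256-signed CSR
    openssl genrsa -out ssl.key 4096
    openssl req -new -sha256 -key ssl.key -out ssl.csr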


Start with this: https://gist.github.com/plentz/6737338

If you don't need the compat, use a cipher suite without RC4 (as in the parent post).


In addition to your configs, I had to enable HSTS to achieve an A+. Add this to a server section:

  add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";


We left out http headers, because that usually can be done at the application level, or in location{} blocks in nginx. I wouldn't recommend setting headers globally in http{}.


Didn't know about setting DH parameters.

On each server, do the following:

    sudo openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
Then in nginx.conf set:

    http {
        ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    }


Non-EC DHE is basically dead. The param size isn't part of the TLS handshake and so using a larger size actually breaks some clients that only do 1024-bit DH params. At the end of the day, almost all the clients that support larger DH param sizes also support ECDHE, which is faster anyway. You might as well not bother and just keep a few non-PFS ciphers for those clients to avoid interoperability problems.

Bonus trivia: ssh-dss (SSH DSA keys) has a vaguely similar problem, which they considered fixing but decided instead to simply not repeat the mistakes when writing the SSH ECDSA spec. This is why ssh-dss keys are effectively limited to 1024-bit.
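If you want to see which key exchange a server actually negotiates, newer OpenSSL s_client builds print the temp key; the line and flag availability vary by version, so treat this as a sketch:

    # force an ephemeral-DH suite and check the negotiated temp key size
    echo | openssl s_client -connect example.com:443 -cipher 'EDH' 2>/dev/null | grep -i 'temp key'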


2048-bit DHE breaks Java 6, but it's the only PFS option for recent MSIE on Windows. A tradeoff worth making.


Well, DHE is the only PFS option for IE on Windows XP. Vista, 7 and 8 all support ECDHE.

IE8 on XP is basically totally busted:

https://www.ssllabs.com/ssltest/viewClient.html?name=IE&vers...


It doesn't work either because it depends on DSA certificates.


Yep. Time to give up on anyone using a browser that depends on XP's SSL support. Much like SSLv3, they will get the message when the entire Internet stops loading in their browser.


They will get the message to randomly download something from the net that fixes their problem; if they're lucky it will be as well-behaved as Superfish.


[deleted]


You can use 3DES instead of RC4, but this cipher is one of the slowest ones.


Please see "Recommended Configurations" in https://wiki.mozilla.org/Security/Server_Side_TLS to see which cipher suite you should be using on your server.

Above also shows how to configure most common web servers.

You can see which cipher suite your server is using at https://www.ssllabs.com/ssltest/


Author of Server Side TLS here. Almost everyone should be able to use the intermediate configuration we propose. I recommend our conf generator at https://mozilla.github.io/server-side-tls/ssl-config-generat... . Cipherscan is also a good tool to have in your toolbox: https://github.com/jvehent/cipherscan


The guidelines on Server Side TLS are pretty good, and the cipher list is pretty similar to the one I use in production. It and the config generator are a great resource to give to people who are less informed about TLS config.

My only real gripe is that despite almost exclusively using explicit cipher suite names, there are three groups thrown in:

1. kEDH+AESGCM
2. AES
3. CAMELLIA

which then require trailing filters to disable unwanted possible side effects. It's a lot more confusing for the lay person to read, and may produce unintended results on untested versions of OpenSSL.

The first group will not output AES ordering in the preferred order (AES128 then AES256). The second one is redundant in my opinion. The third will likewise produce out-of-order results -- if you trust Camellia, wouldn't you prefer to use a forward secret cipher (DHE-RSA-CAMELLIA256-SHA) before a non-forward secret one (AES256-SHA)?

On the topic of Camellia, I don't understand why it makes the cut on the intermediate config. No browser ever supported Camellia that didn't also support AES, did it?

Anyway, I would view it as an improvement if all of the cipher suites were listed explicitly with no groups, so that there is no need for complicated filters at the end and the potential of activating something in a different version of OpenSSL that you didn't expect to be there.
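To see the concern concretely, you can expand those groups on whatever OpenSSL build you deploy with (openssl ciphers evaluates the string locally):

    # the ordering kEDH+AESGCM actually produces on this build
    openssl ciphers 'kEDH+AESGCM' | tr ':' '\n'
    # everything the bare CAMELLIA group pulls in
    openssl ciphers 'CAMELLIA' | tr ':' '\n'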


I'm curious why you bother with (non-EC) DHE at all? It's an interoperability nightmare thanks to the lack of DH param size negotiation in the TLS handshake, and all the clients that work with larger (larger than 1024-bit) DH params also do ECDHE. And at the end of the day, there aren't really that many DHE-capable clients that won't do ECDHE. For interoperability reasons I prefer to just keep DHE off and let those rare clients use non-PFS suites.

PS: you're my hero for making this page to begin with. I often direct people who ask about SSL settings to it, even if I have my own tweaks to the list. It's useful for more than just webservers too.


> let those rare clients use non-PFS suites

That's not acceptable for us, which is why DHE is there. Mozilla aims to provide the best possible security to the larger number, and that drives a number of the choices in the recommended ciphers.


How about the Modern suite, where you already give up compatibility with old stuff? Is non-EC DHE needed there?


OpenSSL has way too many options that reduce security. A lot of that legacy code needs to be removed outright. Not turned off by some flag, not controlled by some environment variable, removed.

(And then, when Rust settles down, OpenSSL needs to be rewritten in Rust, as cleanly as possible.)


LibreSSL is doing the first part of that. When it's in a usable state, switch to that and leave OpenSSL in the past.


They're sure trying. Right now, they're struggling to turn off OpenSSL's "dynamic engine", which allows loading and unloading new crypto engines while OpenSSL is running. In case someone hot-plugs a USB crypto device, perhaps?

There's stuff in there that 0.001% of users want. It creates a risk for everyone else.


You mean building our web security infrastructure on a "design by committee" kitchen sink protocol with a million config options was a bad idea?

I'm always surprised at the lack of simplicity in FOSS projects. Just because it's easy to add a feature or option doesn't mean it should be done. Sane defaults, which yes will sometimes break legacy systems, make sense. Moving on to removing old features, which yes will break legacy systems, makes sense. Instead, there's this "who moved my cheese" mentality that is really ugly.

The top-voted comments on HN are just obscure blog postings of questionable validity claiming "wait wait guise, OpenSSL is fine, it's admins that suck, because this super obscure config kinda sorta fixes this and they should be using it!" These blog postings are symptoms of the real problem. I shouldn't have to reconfigure my entire SSL infrastructure every couple of months.


Amazon already updated their ELB policies to disable RC4

https://forums.aws.amazon.com/ann.jspa?annID=2877


It's a pity that this ELB policy (ELBSecurityPolicy-2015-02) also disables 3DES. For older browsers (for instance IE8, see https://www.ssllabs.com/ssltest/viewClient.html?name=IE&vers...) the only options with a good enough key length are RC4 and 3DES.

Newer browsers also have AES, so they don't need 3DES, but it's still useful as a fallback for older clients, and it's still considered secure (but slow).


Guess it's good I'm using ELB with TCP pass through (because ELB can't handle different SSL terminations by port).


I can't believe that they are outright naming vulnerable sites, that is really classless. Even if the data could be gathered by an attacker now that a vulnerability is known, you don't need to go the extra mile to provide it.


I disagree. If seeing their name on this list lights a fire under them to fix it that much faster, this is a good thing. Besides, if you are an attacker capable of exploiting this vulnerability in the wild, this is the first and easiest part of the process. Scanning the top 1M sites would take you no time at all.

Edit: what really is annoying is that the sysadmin guide is "Coming Soon!". That is the irresponsible part: "here look we broke TLS, we'll tell you how to fix it at 11!"


It is "Coming soon", but if you click on it the section of the page it links to does tell you what to do (disable export ciphers), and furthermore links to a detailed guide [1] on setting up a secure set of ciphers, which even includes an automated configuration generator [2].

To be honest, I'm not sure what more they're hoping to add there.

[1] https://wiki.mozilla.org/Security/Server_Side_TLS#Recommende...

[2] https://mozilla.github.io/server-side-tls/ssl-config-generat...


I really disagree with your perspective here, but I do concur that a fast fix is desirable for any impacted site. Notifying impacted sites ahead of public disclosure would have been a better move, and particularly ahead of public shaming and attacker targeting.

While these notifications may have gone out, there is no reference to any such thing on the page. Also: do they plan to update this list? Or are these sites to be shamed forever?

edit: and yes, the lack of a steps-to-fix is unforgivable. This feels like a race to be first rather than a race to responsibly release and resolve the issue all around.

Especially considering that the fix is beyond trivial:

  Apache: SSLCipherSuite ALL:!EXPORT
  nginx: ssl_ciphers 'ALL:!EXPORT'
(although you shouldn't use ALL, this is just an example; use https://mozilla.github.io/server-side-tls/ssl-config-generat... if you don't know what to do)
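Either way, a quick sanity check after reloading, using stock openssl (the handshake should fail once export suites are gone server-side):

    # should produce a handshake failure if EXPORT suites are disabled server-side
    echo | openssl s_client -cipher EXPORT -connect example.com:443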


> Notifying impacted sites ahead of public disclosure would have been a better move

Notifying that many affected websites is practically the same as making it public, and could've resulted in letting attackers know about this before the public (and any affected websites that aren't on your list) knows about it and is able to fix it.


I don't understand why they didn't first contact the website owners. Isn't this exactly what the WHOIS technical contact is for?


There are too many names on that list - not to contact, but to trust. To everyone that you give secret advance notice, you're potentially handing a zero-day.


That's true. Have they contacted them now? Do these places which will only fix a problem if they're shamed into it actually know that they are on the wall of shame?

More to the point: has a widespread public vulnerability ever before been released alongside a list of everyone who is vulnerable to it? I can't recall such a thing ever happening.


The same folks providing the list this time around also made one for Heartbleed. It was posted roughly the same time as the initial disclosure, from what I recall.

http://web.archive.org/web/20140411064356/https://zmap.io/he...


So they did, I wasn't aware of that.

This sort of proves my point from another comment: they stopped updating the list shortly after it was posted, and so all of these domains are forever stuck on the shame list.

Viewing domains from the Alexa top 1M list so many times today also makes it very clear that it is total crap.


One cannot realistically expect a secret to remain with that many people.


Just food for thought (I agree with you that for an attacker, enumerating the top 1M sites is baby stuff): would the authors have published google.com or twitter.com if either was among the affected sites? Do we consider google.com more important than sohu.com, and would we therefore be less likely to publish google.com without first notifying Google? You could certainly do your due diligence by notifying everyone on that list, giving them a day, and then publishing the full disclosure. I don't know. But I am interested in the timeline, and it looks like this CVE might have been out for a while?

Certainly there is one site ranked #27, but I doubt you will get anything out of reporting it to the site administrator. I am pretty sure that site (a Chinese portal and search service) does not have a bug bounty.


google.com was never going to be on the list, because the researchers specifically talked to Adam Langley at Google ahead of the public disclosure [1] and thus provided advance warning.

Some companies will always receive early warnings about major security vulnerabilities, and that makes sense to gather details about the vulnerability and its exploits, and to minimize the negative impact of an announcement. Other companies get to find out about it the day of the public announcement -- but they don't generally also find themselves on a wall of shame the same day.

[1] https://www.smacktls.com/ under Acknowledgements


"The idea of a branded exploit – one that is carefully curated for easy consumption – is a new one. Historically obfuscation, either real or inadvertent, has been the watchword in computer security mostly because not everyone cared about major exploits. Heartbleed, in a way, was different. It was worldwide, very dangerous, and oddly photogenic. Whereas a Java exploit or Adobe Reader problem is “invisible” to the average user, the idea of a hacker watching your passwords scroll, Matrix-like without security systems setting off alarm bells is compelling and frightening. By creating a “bugs 2.0″ page for the exploit, Codenomicon inadvertently allowed the average user to understand and potentially react to the problem."

http://techcrunch.com/2014/04/09/heartbleed-the-first-consum...

EDIT: To the OP, I totally misread your point. I thought you were complaining that people were naming vulnerabilities (i.e. FreakAttack, Heartbleed, etc.), whereas you said "naming the sites that had such a vulnerability". I agree with you, it's a bit tasteless. My bad! I'm leaving my comment here anyway, because I do think it's worth noting the benefits of branding a known SSL bug/exploit.


How vulnerable is accessing these sites? If I tracert to them, the few that I have tried go from the ISP to high-tier transit to a cloud hosting company. Doesn't seem like much in the way of attack vector beyond someone with court-ordered access to servers along the route.


Or anyone who runs the wifi in a coffee shop.


And so the argument between "full" and "coordinated" (or "responsible") disclosure continues.

Unfortunately, this way is a lot of the time the only way to get a company to patch. If they do patch at all, that is.


They're listing sites out of the top Alexa rankings.

Anyone can do this scan themselves in minutes.

It's not a mile, it's a tiny hop and enough simpler than writing exploit code that it's negligible.


Export cipher suites have been known to be weak for years.


They have been known to be weak literally since their inception. The entire reason for export cipher suites was to create encryption that could be broken by the US government.

No one should have permitted them since the export control was lifted in 2000.

That does not change the fact that some sites did in fact continue to permit them as 'last resort' ciphersuites, to ensure total browser coverage. This did not compromise site security for users who supported actually secure ciphersuites -- until now.

Responsible disclosure should mean that impacted sites (if they have been identified) should be informed before being publicly shamed. Doesn't matter if they were doing something dumb, it wasn't a known security vulnerability before now.


I was very amused to see whitehouse.gov in the list of vulnerable sites.


LibreSSL removed the US export ciphers by default, afaict, so it shouldn't be vulnerable.


Seems like a smart move. Prevents people from shooting themselves in the foot.


If you're using AWS Elastic Load Balancer, then the quick fix is:

1) Select the load balancer you want to edit
2) Click the "Listeners" tab
3) Click "change" under the "Cipher" column for the HTTPS row
4) Select the most recent pre-defined security policy, from 2015-02.

This should get you an A on SSL Labs' test[1]

https://www.ssllabs.com/ssltest/
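If you prefer the CLI to the console, the classic-ELB equivalent is roughly the following; load balancer and policy names are placeholders, and the exact flags are from memory, so double-check against the aws elb docs:

    # create a policy that references the 2015-02 predefined security policy
    aws elb create-load-balancer-policy \
      --load-balancer-name my-elb \
      --policy-name freak-fix-2015-02 \
      --policy-type-name SSLNegotiationPolicyType \
      --policy-attributes AttributeName=Reference-Security-Policy,AttributeValue=ELBSecurityPolicy-2015-02
    # attach it to the HTTPS listener
    aws elb set-load-balancer-policies-of-listener \
      --load-balancer-name my-elb \
      --load-balancer-port 443 \
      --policy-names freak-fix-2015-02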


Is there an easy way to check our own servers? I can see the fix is to add !EXPORT to the end of the cipher list, but how do we check that the server requires the fix?

Really disappointed with this announcement. Some of the other named exploits have come with repro instructions and usually with a fix (shellshock notwithstanding). This is just a description and a shame list.


I updated a public cipher checking script earlier to specifically check EXP ciphers: https://gist.github.com/degan/70e8059507d173751294

It will attempt to connect to the domain you specify with all of the EXP ciphers your OpenSSL knows about.
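For the curious, the core of a check like that is roughly this (a minimal sketch, not the gist itself; pass the hostname as $1):

    # try each EXPORT-grade cipher the local openssl knows against a host
    for c in $(openssl ciphers 'EXPORT' | tr ':' ' '); do
        if echo | openssl s_client -cipher "$c" -connect "$1:443" >/dev/null 2>&1; then
            echo "YES $c"
        else
            echo "NO  $c"
        fi
    done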


That page has a meta description for a different vulnerability: <meta name="description" content="POODLE Attack and SSLv3 Support Measurement" />


Fixed. :)


https://freakattack.com/clienttest.html

I just tested my devices. Linux machines running Firefox all passed. On the other hand, my Android phone did not: lots of RSA_EXPORT ciphers accepted.

But as with nearly every security story: linux/foss software for the WIN!


Firefox for Android is not vulnerable to FREAK, and is one of the few ways to get a modern, supported browser engine on older Android devices.


Windows Phone 8 passes as well. Closed source for the win! Did I do that right?


Oddly, I've seen different results on subsequent visits to their site with the same browser (Chrome 40).


So this isn't just a thing where I update openssl? I have to learn about configuring cyphers on short notice?


I hate to be a jerk about this, but this isn't a rhetorical question. Are there any simple instructions? 1 2 3 and you're fixed? Why didn't this come on the same page as the disclosure? (that last question can be taken as rhetorical, for now)

https://wiki.mozilla.org/Security/Server_Side_TLS#Recommende...

That's not exactly simple.

http://blog.commando.io/the-perfect-nginx-ssl-configuration/

That isn't very simple either, and it's only for nginx.

This is all asking me to learn about ciphers to patch a security hole (and know for sure that it's patched). I don't think it's unreasonable to expect otherwise for a security hole. A few people voted up my parent comment so I don't think I'm alone in this.

For me personally, the only important thing I have is on Heroku, which I presume has its own set of instructions, which they somehow haven't executed themselves or emailed us about yet. Unless this also affects SSH?


http://undeadly.org/cgi?action=article&sid=20150304092744

The following CVEs did not apply to LibreSSL: ... CVE-2015-0204 - RSA silently downgrades to EXPORT_RSA

Don't forget: http://www.openbsdfoundation.org/


To me the whole idea of negotiating ciphers seems broken: a man-in-the-middle will always choose the weakest one.

I guess the argument is that cipher negotiation lets you implement stronger crypto without defining a new protocol version, but what is the point of that? An attacker will just negotiate for the weaker cipher anyway (unless this negotiation is cryptographically protected too of course, but this seems so complex in comparison with the rather meaningless "goal" of cipher negotiation).


Here's how I've been testing this:

  openssl s_client -cipher EXPORT -connect www.example.com:443

SSL Labs hasn't listed this vulnerability explicitly yet, but the test seems pretty simple.


It's interesting to me that Firefox is supposedly not vulnerable, yet on both my laptop (Windows 8.1, Firefox 36) and my desktop (Windows 7, Firefox 36) the website (freakattack.com) says I AM vulnerable?

"Warning! Your client is vulnerable to CVE-2015-0204. Even though your client doesn't offer any RSA EXPORT suites, it can still be tricked into using one of them. We encourage you to upgrade your client. "


Checked Google Chrome prior to update, said it was vulnerable. Updated and now it isn't. Firefox 37 on OS X wasn't vulnerable apparently.


You can also just install LibreSSL portable and it will fix all these issues of insecure ciphers, SSLv3, etc.


I've recently been running Hiawatha servers with PolarSSL (recently renamed mbed TLS). I have avoided all the most recent bugs.

https://tls.mbed.org/


Breakdown of FREAK sites (Alexa Top 1M) by country.

https://infogr.am/https_sites_that_support_rsa_export_suites


If you want to check your domains/servers, not just your clients, I updated a cipher verification script to test just the Export (EXP) ciphers via openssl: https://gist.github.com/degan/70e8059507d173751294


The messages could be a bit clearer, i.e. does "NO" mean not vulnerable, or does "YES" mean not vulnerable?


Yep, I used it and I have no idea whatsoever whether my server is vulnerable or not. Similar for the ssllabs.com test.


If it connects to any of those ciphers with a YES, you may have a problem. They should all say NO.


Thanks for clarifying this.


Waiting for ssllabs.com to add FREAK checks. Thanks guys, we welcome your support!


How can I test if my own, Heroku-based servers are affected?


This gist seems to be working well for testing: https://gist.github.com/degan/70e8059507d173751294

(I've run it against the servers I manage)


Wouldn't it be Heroku's job to update their SSL settings?


Is Cloudflare safe from this?



This is a very disappointing trend in security. Publicly shaming sites into action is not a benefit that outweighs making it easier for attackers. It's ridiculous to argue that it is.


Running zmap or masscan is such a low barrier to entry (compared to the expertise required to then factor the key and set up a MitM attack) that it's hard to argue this is really making it that much easier for attackers.

Aside from the benefit of pressuring these sites into fixing the problem, it also benefits users, who can now make an informed choice about whether to (say) visit a vulnerable site at a cafe or wait until they get home.


I want to disagree with you, but you're right. If there is any reason to call into question the trust or safety of a site I visit, I should know.


I disagree with this perspective entirely. There are many more users of these sites than operators. Assume that a listed site is no longer secure; operating any of these sites while claiming secure comms is then fraudulent. This fraud is obviously unintentional of course, but the greater damage is to the user, not the site.

Secondly, it saves attackers a trivial amount of time. If they're able to exploit this problem, scanning for its existence is orders of magnitude easier.


Do you know if they are only scanning or reporting the 'www' sites or are they listing the main site even if it's just a single server misconfigured, or subdomain, etc?


Details are sparse, but the text file is literally bare domains and an IP that in my testing is always the A record for domain.blah. I don't think they're even looking at www.domain.blah, let alone actually crawling these sites or otherwise exhausting their domain space.


I suspected as much. It makes this a lot less useful, but I guess it's more like ringing an alarm than being precise. On the other hand, for some sites this might amount to a false alarm if the tested address has no critical service running on it. Mind you, they should all be remedied, but some more hurriedly than others.


I think it's particularly misleading because some sites only run redirector services on domain.blah for the purpose of sending you to www.domain.blah.

Yes the problem should still be remedied, but no customer data flows through this service, and the connection would be renegotiated after the redirect on systems that may bear very little resemblance technically.


I would agree with you for a vulnerability like Heartbleed, where the attack was against the site itself. This is an attack against the users of these sites, so users need to know which sites are vulnerable. Also, attackers are far more capable of gathering this data on their own than users are.


freakattack.com is hosted on an IP owned and managed by the University of Michigan. I could not visit the site because that IP is in my firewall's ban list due to unauthorized vulnerability testing against my home network.

As an aside I wonder why our tax dollars are being used to support unauthorized vulnerability attempts and for hosting a .com commercial site?

Is it legal for the person/people operating freakattack.com to use US Tax Income to fund their own commercial efforts using University resources? I didn't graduate college, maybe it's legal for them to do this?


> support unauthorized vulnerability attempts

That was probably just a random student who learned some fun stuff in Security class and slept through the Ethics lesson. I can't speak for UMich, but security research at my university (NC State) has a very strict "don't attack civilians" policy.

> hosting a .com commercial site

First off, .com sites are not necessarily commercial. Second, this isn't a commercial site, it's an informational page about a recently discovered TLS vulnerability.


In the first case, I read you as saying it's OK to commit a crime against a civilian in the United States as long as [the person didn't mean to]. In the second case, you seem to be saying that since not all .COM domains are used for commercial purposes, and since this one seems to be informational only at the moment, our tax dollars, which help universities across the United States run, can be used to fund whatever .COM sites students feel inclined to register, for whatever reason they feel is justified.


I heard that rhetoric when the ones you are calling on for help prosecuted Aaron Swartz. All at a time when the NSA was hacking every system it could get its hands on, both around the world and in the USA.

You may be overreacting and unwittingly supporting the erosion of civil rights.


What is your evidence that the vulnerability testing was done by someone supported by your tax dollars, instead of by a computer that was part of a botnet controlled by your government's cyberenemies?


It's probably just scans from zmap. Complaining about zmap scans is about on the level of complaining about ssllabs.com scanning your box.

https://zmap.io/

It could be a student in the dorms who discovered metasploit though. Or someone in the computer lab who has a tool that doesn't need root. (or who rooted the lab computer)


Here is a check for the IP for freakattack.com:

http://www.tcpiputils.com/browse/ip-address/141.212.122.194

Edit: They have been on that list for a while, so either the staff at the University is incompetent or they don't care; what was your point again?


This is why reverse DNS exists. http://researchscan450.eecs.umich.edu/


You are actually correct that you were scanned by an official, funded project at the University of Michigan. The research team specializes in "internet-wide measurement", meaning they scan for vulnerabilities on a regular basis in order to get a sort of "Internet health report".

Nonetheless, if this bothers you, visiting the IP that scanned you gives you instructions for opting out: http://141.212.122.194


He asked about why UMichigan is inaccessible from your network. How do you know it was a supported student activity and not either a malicious student, or a machine on the UMichigan network that's been compromised?

Have you reported the activity against your home network to UMichigan?



