This POODLE bites: exploiting the SSL 3.0 fallback (googleonlinesecurity.blogspot.com)
391 points by ch0wn on Oct 14, 2014 | 103 comments



I'm not entirely sure, because I'm in a car waiting to pick up my daughter from play practice while I consider it, but I think this is worse than BEAST. It's slower, but it's easier.

POODLE seems to be a padding oracle based on SSL 3.0's inability to fully validate padding. The oracle only gives you the last byte of a block; a full extended padding oracle gives you successive bytes, but this vulnerability doesn't. The authors sidestep that problem by using application-layer control of the boundaries of blocks to repeatedly line up the byte they want to learn with the last byte of padding the vulnerability reveals. C l e v e r !
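
To make that concrete, here is a rough sketch of the arithmetic (my own illustration, not code from the paper). The attacker copies ciphertext block C[i] over the final block, which in SSL 3.0 is a full block of padding; if the server accepts the doctored record, the last plaintext byte of C[i] falls straight out of the CBC XORs:

  // Hypothetical helper, assuming a CBC cipher with blockSize-byte blocks.
  // Acceptance implies D_K(C[i])[last] XOR C[n-1][last] == blockSize-1, and the
  // byte we want is P[i][last] = D_K(C[i])[last] XOR C[i-1][last], so it can be
  // computed from ciphertext bytes the attacker already sees on the wire:
  func recoveredLastByte(blockSize int, lastByteOfCiMinus1, lastByteOfCnMinus1 byte) byte {
      return byte(blockSize-1) ^ lastByteOfCnMinus1 ^ lastByteOfCiMinus1
  }

Then shift the request path/body by one byte and repeat for the next position.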

The difference, I think, between POODLE and BEAST is that BEAST needed not just blockwise "chosen-boundary" control over plaintext, but also a continuous channel that would provide the client with fine-grained control over the first N bytes of each request. It didn't work (IIRC) with vanilla Javascript.

This attack, however, seems to.


> The difference, I think, between POODLE and BEAST is that BEAST needed not just blockwise "chosen-boundary" control over plaintext, but also a continuous channel that would provide the client with fine-grained control over the first N bytes of each request. It didn't work (IIRC) with vanilla Javascript.

Sure you have control over the first N bytes. Look at the request-line: "GET /hello/world/this/is/my/url HTTP/1.1". Sure, you don't control the spaces, but you can assume any practical implementation uses a single space character. Combine that with control over the method (with XHR or statically through <form> or <img>) and the path, you're in business.

AIUI, the JS PoC took a fair bit of trial-and-error to find the right value of N, but all these attacks require a fair number of requests anyway, so it's not really significant.


> Sure you have control over the first N bytes. Look at the request-line: "GET /hello/world/this/is/my/url HTTP/1.1". Sure, you don't control the spaces, but you can assume any practical implementation uses a single space character. Combine that with control over the method (with XHR or statically through <form> or <img>) and the path, you're in business.

tptacek is right. To make BEAST work we had to control _all_ the bytes of the very first block. We tried very hard to make it work with Javascript, but we couldn't. Java applet (and maybe Flash) was the only tool that gave us that kind of control.


This text seems to contain interesting information. Could somebody (who is familiar with this jargon) translate it to plain English? Thanks.


The vulnerability only reveals the last byte of a cipher block, but the authors of the exploit let you decide where the last byte of the block falls, which lets you recover pretty much any byte you want.


thank you for the clarification


Apparently independently discovered by Thomas Pornin with a few hours of work: http://chat.stackexchange.com/transcript/message/18151930#18...

We saw this with Heartbleed too: given sure confidence that there is a vulnerability in a particular diff, skilled security researchers can find it very quickly. It makes me want to find such researchers and firmly tell them that there are vulnerabilities in TLS 1.2.


> We saw this with Heartbleed too: given sure confidence that there is a vulnerability in a particular diff, skilled security researchers can find it very quickly.

That reminds me of the old QA technique / tactic of telling the developers only where a bug was found, not what it was. Sometimes you don't even have to find an issue; you can just pick a complicated module.


There was a study done decades ago where code was seeded with a number of bugs. Developers were told that there were a number of bugs. They found about that many, but the overlap was not total. That is to say, they found bugs that the study authors did not seed.


Phew, I sure am glad that we instituted a company-wide policy of not putting any bugs in the code in the first place. Just makes life easier all around.


Well, I remember reading at the time that if you picked an arbitrary number and told the developers that there were that many bugs, they would find that many even if none had been inserted.


You laugh, but this is closer to most companies' "quality policy" than otherwise.

True quality comes from good process and systems, nothing more, nothing less.


If you can dig up the citation, I'd love to read this. That's fascinating.


I was not successful, but a sibling comment to yours https://news.ycombinator.com/item?id=8458030 provides a very useful description of the phenomenon.



Pornin did this --- to Thai Duong, no less! --- with CRIME, too. That guy is a freak of nature.

I also ruefully remember how Halvar Flake did this to Dan Kaminsky's embargoed DNS vulnerability.


When Pornin disclosed CRIME I was telling myself: "Wait a minute, I saw this name somewhere." Then I remembered that it was Pornin's article [1] that helped me understand how zlib flush modes work.

I thought that was pretty funny.

[1] http://www.bolet.org/~pornin/deflate-flush-en.html


You guys are both my heroes.


I don't follow -- how is it an independent discovery if it was posted after complete details had already been released by Google?


Pornin's comments on SE predate the publication.


Oh my bad. What is the explanation of this part?

"given sure confidence that there is a vulnerability in a particular diff, skilled security researchers can find it very quickly."

Was there previously an announcement that a vulnerability existed, without details of what the vulnerability was?


There was a lot of Twitter traffic about SSL3 being bad. But TLS1 has very few differences from SSL3.

In Heartbleed, we knew that it was introduced with a particular OpenSSL version. That had 12 changes; 8 were DTLS. Two were clearly not relevant. One was heartbeat. Dan Franke took about fifteen minutes to go from that set of inputs to a sketch of heartbleed.

I now advocate QA by what I call "virtual tests": you don't have to write the tests, just the failure messages. Developers will do the rest of the work on their own.


Yes, there was: https://news.ycombinator.com/item?id=8452931

That announcement implied that the vulnerability was SSL 3.0 only. Knowing that, the obvious place to look at would be the differences between SSL 3.0 and its successor TLS 1.0.


Here's relevant nginx configuration to disable SSLv3:

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:EECDH+RC4:RSA+RC4:!MD5;
  ssl_prefer_server_ciphers on;
This ciphersuite is recommended by CloudFlare.


For those who want to verify their change after updating the configuration (since I looked for such information): either use "nmap --script ssl-enum-ciphers -p 443 <host>" or ssllabs (eg: https://www.ssllabs.com/ssltest/analyze.html?d=wisecashhq.co...) and you will see the enabled protocols.
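
Another quick spot check, assuming your local openssl binary still has SSLv3 compiled in (substitute your own host):

  openssl s_client -connect example.com:443 -ssl3

If that handshake fails, SSLv3 is off for that host.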


For those running IIS, use IIS Crypto by Nartac Software for a decent GUI on configuring your protocol and cipher options.


If you use SNI (many domains on one IP), make sure to put this (ssl_protocols TLSv1 TLSv1.1 TLSv1.2;) into the default server { ... } config (or the first one: the one which opens when you go to https://your_server_ip).


Or better yet, don't serve any content from the default. I actually return a 403 error for the default host or any request without a Host header.


The problem is that if you specify ssl_protocols somewhere other than the default/first server, it won't work. So, yes, serve an error, but specify ssl_protocols in there.
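
A minimal sketch of that default server, assuming the rest of your config terminates TLS in its own server blocks (the certificate paths are placeholders):

  server {
      listen 443 ssl default_server;
      server_name _;
      ssl_certificate     /etc/ssl/fallback.crt;   # placeholder
      ssl_certificate_key /etc/ssl/fallback.key;   # placeholder
      ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
      return 403;
  }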


Why not just disable the default server? (genuine question; I don't know if there are drawbacks to it)


With name-based virtual hosts (those that rely on the server selecting the appropriate resource based on the Host header), typical clients depend on the IP address returned by DNS for that host. If they visit that IP, ask for the host, and the server isn't configured to deliver that host's resources, it's good practice to give the client an error. Since the web server has to listen on that IP without knowing which host will be requested before the connection is made, it's convenient to have a fallback and handle errors there. I deny all access to the default host, which generates a 403 Forbidden error (with a custom message), but there are definitely other ways to deal with this situation.

The important thing is that a host's protected resources are served only when SSL/TLS is properly negotiated. Serving one host's content as the default when another host was requested violates this.

In practice, nearly all of these requests come from bots, crawlers and penetration testers. So another advantage is that the log entries can be used to block further requests at the firewall, freeing resources and even possibly protecting the server from undisclosed vulnerabilities (test this approach carefully to make sure it's appropriate for your site and doesn't subject you to a DoS).


This works, but from TFA:

"Disabling SSL 3.0 support … presents significant compatibility problems"

"Therefore our recommended response is to support TLS_FALLBACK_SCSV."


Disabling SSLv3 will indeed affect a significant amount of clients in the real world.

I've seen a few commenters here on HN that point out that pretty much everything since Windows XP (ignoring IE6) supports at least 1.0 of the TLS protocols. While that may be correct in theory, in practice it's not.

At a 1MM+ visitors/week site we still see a few percent of our users that regularly connect using SSLv3 across different versions of Windows, including more modern ones such as Windows Vista, 7 and 8(!)

I'm not sure why this is the case, though antivirus software suites such as McAfee[1] have in the past been known to disable TLS 1.0 system-wide in Windows.

[1] http://answers.microsoft.com/en-us/ie/forum/ie8-windows_othe...


It's well known that the fallback can be triggered by accident; see for instance https://tools.ietf.org/html/draft-ietf-tls-downgrade-scsv-00:

"[...] Also, handshake errors due to network glitches could similarly be misinterpreted as interaction with a legacy server and result in a protocol downgrade."

Perhaps that's what you're seeing.


CloudFlare sees 0.65% of HTTPS traffic using SSL v3 so it's a pretty small impact.


And of that 0.65%, 98% have the ability to connect over TLS.

So we're talking about 2% of 1% that are dead in the water.


Except if you can force browsers to downgrade. So they need to actually support the extension to prevent that.


The point was that if you disable SSLv3, you will cut off some users from your site, but only 0.65%, so it's not that bad of an effect.


Does anyone have any idea what kind of clients would require SSL3 to stay enabled? Old android phones and/or Windows XP perhaps?


Quoting myself (https://news.ycombinator.com/item?id=8453718):

"For clients, a quick look at https://www.ssllabs.com/ssltest/clients.html shows that even older clients (Android 2.3, Java 6, the oldest supported version of IE, etc) support TLS 1.0, so there should be no issues disabling SSLv3 on servers too."


IE6 on XP, according to Qualys.


IE6 on XP can actually use TLS; it is just not enabled by default.

But IE8 is readily available on XP, so who would use IE6?


I think it's probably safe to say that anyone still using IE6 either cannot change the defaults (by policy or by skill) or has a machine that is already malware-infested.


Source for the CloudFlare configuration: https://github.com/cloudflare/sslconfig


Firefox disabled a bunch of ciphers on the client side with today's release as well


If you are referring to https://bugzilla.mozilla.org/show_bug.cgi?id=1036765, that was an unrelated change that was made well in advance of my knowledge of this issue.


Mozilla is going to disable SSL 3.0 by default in Firefox 34: https://blog.mozilla.org/security/2014/10/14/the-poodle-atta...


You don't have to wait for Firefox 34. Go to about:config and set security.tls.version.min to 1. That will force the browser to use only TLSv1 or newer.


I've done that since before security.tls.version.min existed (the pref had another name before, and allowed nonsensical combinations like allowing TLS 1.0 and SSL 2.0 but not SSL 3.0).

But Firefox 34 disabling SSL 3.0 by default will disable it even for nontechnical users. That should force the remaining non-TLS-compatible servers and middleboxes to be fixed.

On the server side, hopefully the knee-jerk reaction of disabling SSL 3.0 will have the same effect, forcing the remaining non-TLS-compatible clients and middleboxes to be fixed.

That could allow the SSL 3.0 code to finally be removed from both clients and servers, as happened with the older SSL 2.0. And with it, the fallback code for extension-intolerant servers, so clients can count on being able to use extensions.


And if you use Thunderbird for email, remember to do it there too (same config key).


Adam Langley from Google explained this in more detail: https://www.imperialviolet.org/2014/10/14/poodle.html


Check me on this: The TLS_FALLBACK_SCSV will prevent downgrade attacks, yes. However, any "real" SSLv3 connections will still be SSLv3 and still fully vulnerable to the described attack. Downgrading is not a necessary component of the attack, it just increases the number of vulnerable client/server combinations to include those that would normally not be vulnerable due to negotiating TLS1.0+. Therefore, if you are in a position where you truly care about security and the fact the SSLv3 has such an enormous hole in it is unacceptable, you should still be looking at simply turning off SSLv3 as the only acceptable mitigation, even if that does cut some clients off.

Yes?

(Except please note for the purposes of this question I'm assuming as a given that cutting off SSLv3 is considered preferable by the entity in question to a very weak SSL negotiation. Whether or not any given entity should have that opinion is a different question; I politely ask that you get into that question elsewhere.)


The end of https://www.imperialviolet.org/2014/10/14/poodle.html talks about this better than the several paragraphs I just wrote in a draft comment. So go read that.

tl;dr: practically, yes, we want to move away from SSLv3 ASAP, though it isn't totally broken yet.


Thanks. I wanted to check because there's a common pattern in these sorts of security incidents where some suggestion for possible mitigation gets made (note how I weakened that phrase) and by the time the game of security telephone is played out over a few panicky hours it turns into "just do X and the problem is solved!". For instance, "just switch to RC4!" was not exactly the most solid of advice for BEAST, if one actually thought about it, but it was frequently presented as The Solution (TM). (Not necessarily by knowledgeable people, of course, but by enough people.)

TLS_FALLBACK_SCSV has the danger of becoming that, but in the short term it doesn't seem like it actually does much in a world where, AFAIK, not very many SSL connections support it.


Sounds right, and I think that's covered by this:

> Disabling SSL 3.0 support, or CBC-mode ciphers with SSL 3.0, is sufficient to mitigate this issue, but presents significant compatibility problems, even today.


> The TLS_FALLBACK_SCSV will prevent downgrade attacks

And it's only implemented by BoringSSL, i.e., Google Chrome and Google's web servers?


It's independently implemented in other servers as well. No idea on clients yet.


See https://github.com/yahoo/cnet for a client that uses BoringSSL


Disable SSLv2 and SSLv3

For Apache:

  SSLProtocol all -SSLv2 -SSLv3

For Nginx:

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

Source: http://blog.rlove.org/2013/12/strong-ssl-crypto.html


For any unfortunate souls using IIS, Digicert has a walkthrough of disabling SSLv3 at https://www.digicert.com/ssl-support/iis-disabling-ssl-v3.ht...


For IIS:

Nartac IIS Crypto gives a nice GUI to manage all Protocols, Ciphers, Hashes and Key Exchanges https://www.nartac.com/Products/IISCrypto/


If you're using AWS' ELB, Amazon has already added a new Predefined Security Policy with SSLv3 disabled: ELBSecurityPolicy-2014-10


Yes. If you have a lot of ELBs, use the following instructions instead of changing the policies by hand: https://gist.github.com/aastaneh/46ceb03150e5284b8a3a


I'm starting to wonder how many other important vulnerabilities I'm missing because their discoverer didn't come up with a catchy name like Heartbleed, Shellshock, or POODLE.


I'm starting to collect articles of merit related to the new attack.

I'll continue to grow the list the more I see/read.

https://cybershambles.com/question/97-the-sslappening-poodle...


I made a "non-technical" Poodle scan reporter at: https://www.expeditedssl.com/poodle if anybody needs to convince someone in their organization that a problem needs addressing and to take action.


This seems to be reporting "not vulnerable" no matter what I put in. Known vulnerable sites are still getting the checkmark.

We wrote one also at https://www.tinfoilsecurity.com/poodle


Here is another vulnerable-hosts checker: http://poodlemap.com/


Hey, we're checking for SSLv3 and then CBC ciphers, so if you're not on SSLv3 or not using CBC ciphers you'll show as OK.


It seems that this is an attack on the CBC-mode ciphers but doesn't change anything about the RC4 ciphers.

RC4 is mentioned in passing as having weaknesses, but is it actually broken? If we can't disable SSL3 completely, would using only RC4 ciphers be an option?


RC4 in TLS is pretty bad too:

http://www.isg.rhul.ac.uk/tls/

"Our second attack applies to TLS and can be carried out in a single connection or session (but tolerates multiple connections/sessions). It exploits certain double-byte biases in RC4 keystreams (the Fluhrer-McGrew biases). It targets plaintext bytes located at any position in the TLS plaintext stream. The number of encryptions needed to reliably recover a set of 16 consecutive targeted plaintext bytes is around 10 times 2^30, but already with only 6 times 2^30 sessions, these target bytes can be recovered with 50% reliability. Since this double-byte bias attack does not require the TLS Handshake Protocol to be rerun, it can in practice be more efficient than our single-byte bias attack."

http://www.isg.rhul.ac.uk/tls/RC4biases.pdf

edit: I could not get the star to show up, so I just spelled it out as 'times.'


That's the "worst" RC4 attack I'm aware of. But I'm not sure that it's quite at the level of a practical attack - certainly not as bad as POODLE.


> Usually, the server will reject this record, and the attacker will simply try again with a new request. Occasionally (on average, once in 256 requests), the server will accept the modified record

This suggests to me that a possible workaround could be to detect this attack because it will generate the characteristic pattern of a successful record amongst many invalid ones, and then expire the relevant cookies; by the time the attacker has figured out a byte or two, the cookie has already become useless. It could potentially turn into a denial-of-service, but that's something anyone with MITM capability can do trivially anyway.
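
A rough sketch of that heuristic, assuming (and this is a big assumption) that your TLS terminator can surface record failures to application code; nothing off the shelf exposes such a hook:

  // Hypothetical detector: count record failures per client and expire that
  // client's session once a burst is seen, typically well before the ~256
  // tries needed per byte pay off for the attacker.
  type poodleDetector struct {
      failures  map[string]int // keyed by client address
      threshold int            // e.g. a few dozen failures in a short window
  }

  func newPoodleDetector(threshold int) *poodleDetector {
      return &poodleDetector{failures: make(map[string]int), threshold: threshold}
  }

  func (d *poodleDetector) onRecordFailure(client string, expireSessions func(client string)) {
      d.failures[client]++
      if d.failures[client] >= d.threshold {
          expireSessions(client) // the cookie becomes useless mid-attack
          delete(d.failures, client)
      }
  }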


FYI, if you're using Twilio: https://twitter.com/twilio/status/522446663130963969

"If you are encountering trouble with inbound Twilio requests while mitigating the SSLv3 vuln, contact help@twilio.com for direct help."

(That is, they have to manually enable TLS on your account.)

Also, if you're using GET requests with ExactTarget, you'll run into the same thing, but I haven't heard back from them if / when they'll have that fixed.


How to disable SSL 3.0 in IIS: http://support.microsoft.com/kb/187498


Practically speaking, how broken is SSLv3.0 now? Are we hours, days, weeks, months, or years from someone actually getting out there and exploiting this?


This is exploitable now. Padding oracles have been found before, and are even homework assignments in the Coursera cryptography course. The specifics of this bug make it a little bit harder, but the paper explains how to work around that. Expect this bug to be exploited in the wild within minutes of (or perhaps even before) the release.


Sounds fairly straightforward...

You need 256 requests on average to get one byte of cleartext. A 160-bit session token is likely to be sent as 40 hex characters, so that's about 10,240 requests to recover it - it might actually be less if you can take shortcuts, because you know the result bytes can only be 0-9 and a-f. You also need to be a man in the middle.


To business owners and large sites operators out there: before disabling SSLv3, make sure that none of your clients/customers/users are stuck on IE6. We still see significant IE6 traffic coming from China. Some legacy clients are also stuck on SSLv3.

As a general rule, review your logs before disabling things. And ask your users to use modern browsers as soon as possible.


As a web app developer focused on oil- and healthcare-related apps for smaller businesses, we disabled SSL3 early this year after XP was deprecated. The vast majority of our clients had already upgraded to a newer version of Windows, but a few stuck on XP were able to use Chrome or Firefox to access the programs.

As for legacy clients, it's time to firewall them from the internet. They make the Internet a more dangerous place for everybody. If your device/program cannot be updated, then it needs to be put on a VLAN'd network segment. Breach after breach has shown that attackers will use any exploitable resource to gain a foothold inside your network.


If you absolutely have to use IE6, go to the Advanced tab of Internet Options and check TLS 1.0, and while you are at it, uncheck SSL 2.0. But of course the preferred solution is to upgrade, and while you are at it, please also update to XP SP3 if you haven't already. There is no WGA check in the WinXP service packs in general, despite misconceptions to the contrary.


I'm assuming that a user stuck on IE6 doesn't have the necessary technical expertise to upgrade or change security parameters.


I'd assume a user stuck on IE6 without the necessary technical expertise can follow a detailed guide full of pretty pictures written by someone else to change security parameters.

There's also the old trick of giving the user a .reg (or if that's too crude, a simple executable) to poke the relevant settings directly in the registry.
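
If memory serves, the relevant knob is the SecureProtocols DWORD under Internet Settings, so the .reg would look something like this (value name and bit mask are from memory -- 0x80 should mean "TLS 1.0 only" -- so verify before shipping it to users):

  Windows Registry Editor Version 5.00

  [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
  "SecureProtocols"=dword:00000080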

(Poking directly in the registry can break if Microsoft changes the way that setting is stored, but they won't change anything on XP anymore, so it's as safe as poking in the registry usually is.)


to be honest a user on IE6 can be compromised in a zillion other ways :/


How did I know I'd find you here by typing "[Ctrl] + [F] jvehent [Enter]" ?


Firefox 34 will disable SSLv3 entirely:

https://blog.mozilla.org/security/2014/10/14/the-poodle-atta...

I'm already using the beta; it is very stable.

You can also set security.tls.version.min to 1

In Chrome set the command line flag --ssl-version-min=tls1


Sorry if I've missed this or it's been posted elsewhere, but is there a way of having Apache log which SSL protocol version is being used for SSL/TLS connections?

I've seen people post figures like 0.85% of HTTPS connections have been SSL 3.0 and was wondering how those figures were compiled.


Reading https://www.openssl.org/~bodo/ssl-poodle.pdf may also be helpful if you want to fully understand this exploit.


This has pretty large implications for countries (namely China) that still have a sizable IE6 userbase. IE6 doesn't support TLS by default, making HTTPS effectively completely unsupported for that browser.


Isn't it an option? I remember IE5 having TLS 1.0 as an option. Just because it's not enabled by default doesn't mean it's "effectively completely unsupported". Browser protocol support matrix here:

http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_br...


http://jbp.io/2013/07/07/tls-downgrade/

An old writeup of mine on TLS downgrade, if anyone's interested.


Disable SSL3.0 in golang ListenAndServeTLS: https://gist.github.com/olov/eb60ab878eb73a7c5e22
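
For reference, a minimal sketch of what that amounts to (not necessarily the gist's exact code): set MinVersion on the server's tls.Config so the handshake refuses SSLv3. The cert/key paths below are placeholders.

  package main

  import (
      "crypto/tls"
      "log"
      "net/http"
  )

  func main() {
      srv := &http.Server{
          Addr: ":443",
          TLSConfig: &tls.Config{
              MinVersion: tls.VersionTLS10, // refuse SSL 3.0 outright
          },
          Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
              w.Write([]byte("hello, TLS\n"))
          }),
      }
      log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem")) // placeholder cert/key
  }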


You can check to see if you're vulnerable using our free tool: https://www.tinfoilsecurity.com/poodle


https://zmap.io/sslv3/ - "POODLE Attack and SSLv3 Support Measurement"


We are in the horrible position of supporting some straggling IE6 users. Thankfully disabling SSLv3 means they can no longer log in.


This vulnerability is being downplayed and details are being kept secret until they can patch this bug because of how severe this is


This may help clarify why Firefox dragged its feet for so long on enabling TLS 1.2 support by default... from the timeline, we could probably make some guesses as to when certain agencies took notice of this attack.

The real question is why it took major site operators this long to realize. Given a trove of handshakes (which Google has been saving for years), user-agent headers, and expected ciphersuites, it perhaps should not have been too difficult to detect downgrade attacks in the wild. That doesn't in itself give you POODLE, but it probably offers some clues... especially given other information available to them.


Despite the positive changes in today's Firefox release, it's worth remembering that time Firefox decided the GUI options for disabling SSLv3 should be removed: http://forums.mozillazine.org/viewtopic.php?f=23&t=2722163


I made the change to remove the version UI from Firefox. I made that change for two reasons:

(1) It sped up the implementation of TLS 1.1 and TLS 1.2. Removing the UI meant I didn't have to redo the UI when I added TLS 1.1 and TLS 1.2 support. Keeping that UI would have caused TLS 1.1 and TLS 1.2 to be delayed even further.

(2) Our measurements showed that ~1% of users had disabled TLS 1.0, leaving only SSL 3.0 enabled. I was worried that the UI encouraged people to make that mistake. Also, it is the application's responsibility to do the right thing automatically. People shouldn't have to fiddle with such options to be secure.


To be honest, I remember the Slashdot article that incorrectly suggested that SSL 2.0 and TLS 1.0 were affected by BEAST.


I was not implying in my (downvoted) parent comment that POODLE affected TLS 1.0/1.1, but was pointing out that the scope of ciphersuites in Firefox has been effectively constrained for a large number of users by defaults and GUI toggles vs. enabled about:config prefs for quite some time now.


And what I am talking about is why.


There are many layers of indirection.

I wouldn't dispute that removing the SSLv3 GUI prefs in Firefox may have had significant positive impacts on many users, or that many users are clueless about crypto in general, as you seem to be implying; but it's worth considering that those decisions also have implications that may be convenient for other parties, like dramatically increasing the activation energy of configuring a browser for security.

Another example is how long it currently takes, via the GUI, to configure distrusting (or at least warning about) CAs that users from a given country may almost never encounter. Should your average Firefox user in the United States always silently trust Turkmenistan's CA out of the box?



