POODLE seems to be a padding oracle based on SSL 3.0's inability to fully validate padding. The oracle only gives you the last byte of a block; a full extended padding oracle gives you successive bytes, but this vulnerability doesn't. The authors sidestep that problem by using application-layer control of the boundaries of blocks to repeatedly line up the byte they want to learn with the last byte of padding the vulnerability reveals. C l e v e r !
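Here's a toy simulation of that trick, in case it makes the mechanics concrete. It's a sketch, not real SSL 3.0: the block cipher is faked with a lazily built random permutation, the MAC check is omitted (in the real attack it passes, because the block the server strips is all padding), and the record layout is simplified:

    import os

    BLOCK = 16
    _enc, _dec = {}, {}

    def E(block):
        # Lazily built random permutation standing in for the block cipher.
        if block not in _enc:
            c = os.urandom(BLOCK)
            while c in _dec:
                c = os.urandom(BLOCK)
            _enc[block], _dec[c] = c, block
        return _enc[block]

    def D(block):
        return _dec[block]  # only ever called on blocks E produced

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_encrypt(iv, pt):
        out, prev = [], iv
        for i in range(0, len(pt), BLOCK):
            prev = E(xor(pt[i:i + BLOCK], prev))
            out.append(prev)
        return out

    def ssl3_accepts(iv, blocks):
        # SSL 3.0's flaw: only the final padding-length byte is checked;
        # the padding bytes themselves are arbitrary.
        prev, last = iv, None
        for c in blocks:
            last, prev = xor(D(c), prev), c
        return last[-1] == BLOCK - 1

    # Request laid out so the secret is the LAST byte of block 1 and the
    # final block is entirely padding.
    secret = ord("S")
    pt = (b"GET /" + b"A" * (BLOCK - 6) + bytes([secret])  # block 1
          + b"B" * BLOCK                                   # block 2: filler
          + b"C" * (BLOCK - 1) + bytes([BLOCK - 1]))       # block 3: padding

    recovered = None
    while recovered is None:
        iv = os.urandom(BLOCK)               # fresh record each attempt
        ct = cbc_encrypt(iv, pt)
        forged = ct[:-1] + [ct[0]]           # copy target block over the padding block
        if ssl3_accepts(iv, forged):         # accepted ~1/256 of the time
            # D(ct[0])[-1] ^ ct[-2][-1] == BLOCK - 1, and the secret equals
            # D(ct[0])[-1] ^ iv[-1], so:
            recovered = (BLOCK - 1) ^ ct[-2][-1] ^ iv[-1]

    print(chr(recovered))  # "S"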
This attack, however, seems to require substantial control over the plaintext.
Sure, you have control over the first N bytes. Look at the request-line: "GET /hello/world/this/is/my/url HTTP/1.1". Sure, you don't control the spaces, but you can assume any practical implementation uses a single space character. Combine that with control over the method (via XHR, or statically through <form> or <img>) and the path, and you're in business.
AIUI, the JS PoC took a fair bit of trial-and-error to find the right value of N, but all these attacks require a fair number of requests anyway, so it's not really significant.
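For illustration, the alignment arithmetic looks something like this (the layout model here is hypothetical; as noted, a real PoC finds N by trial and error, because the attacker can't see the exact header bytes the browser emits):

    BLOCK = 16  # AES block size; 8 for 3DES

    def filler_needed(prefix_len, target_offset):
        """Path filler so that the byte sitting at position
        prefix_len + filler + target_offset in the plaintext stream
        lands on the last byte of a cipher block."""
        return (BLOCK - 1 - prefix_len - target_offset) % BLOCK

    # e.g. "GET /" is 5 bytes and the target cookie byte sits 40 bytes
    # further along in "... HTTP/1.1\r\nCookie: session=..."
    print(filler_needed(5, 40))  # 2 filler bytes shift it into place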
We saw this with Heartbleed too: given sure confidence that there is a vulnerability in a particular diff, skilled security researchers can find it very quickly. It makes me want to find such researchers and firmly tell them that there are vulnerabilities in TLS 1.2.
That reminds me of the old QA technique/tactic of only telling the developers where a bug was found. Sometimes you don't even have to find an issue; you can just pick a complicated module.
True quality comes from good process and systems, nothing more, nothing less.
I also ruefully remember how Halvar Flake did this to Dan Kaminsky's embargoed DNS vulnerability.
I thought that was pretty funny.
"given sure confidence that there is a vulnerability in a particular diff, skilled security researchers can find it very quickly."
Was there previously an announcement that a vulnerability existed, without details of what the vulnerability was?
With Heartbleed, we knew it was introduced in a particular OpenSSL version. That release had 12 changes; 8 were DTLS, and two were clearly not relevant. One was the heartbeat extension. Dan Franke took about fifteen minutes to go from that set of inputs to a sketch of Heartbleed.
I now advocate QA by what I call "virtual tests": you don't have to write the tests, just the failure messages. Developers will do the rest of the work on their own.
That announcement implied that the vulnerability was SSL 3.0 only. Knowing that, the obvious place to look would be the differences between SSL 3.0 and its successor, TLS 1.0.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
The important thing is that a host's protected resources are served only when SSL/TLS is properly negotiated. Serving one host's content as the default when another host was requested violates this.
In practice, nearly all of these requests come from bots, crawlers and penetration testers. So another advantage is that the log entries can be used to block further requests at the firewall, freeing resources and even possibly protecting the server from undisclosed vulnerabilities (test this approach carefully to make sure it's appropriate for your site and doesn't subject you to a DoS).
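For nginx, one way to enforce that is an explicit catch-all vhost that drops unmatched hosts instead of serving the first configured site. A sketch (the dummy certificate paths are placeholders; 444 is nginx's close-the-connection code):

    server {
        listen 443 ssl default_server;
        server_name _;
        ssl_certificate     /etc/nginx/dummy.crt;  # placeholder self-signed pair
        ssl_certificate_key /etc/nginx/dummy.key;
        return 444;  # close the connection without responding
    }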
"Disabling SSL 3.0 support … presents significant compatibility problems"
"Therefore our recommended response is to support TLS_FALLBACK_SCSV."
I've seen a few commenters here on HN point out that pretty much everything since Windows XP (ignoring IE6) supports at least TLS 1.0. While that may be correct in theory, in practice it's not.
At a 1MM+ visitors/week site, we still see a few percent of our users who regularly connect using SSLv3, across different versions of Windows, including more modern ones such as Windows Vista, 7 and 8(!)
I'm not sure why this is the case, but antivirus software suites such as McAfee have in the past been known to disable TLS 1.0 system-wide in Windows.
"[...] Also, handshake errors due to network glitches could similarly be misinterpreted as interaction with a legacy server and result in a protocol downgrade."
Perhaps that's what you're seeing.
So we're talking about 2% of 1% that are dead in the water.
"For clients, a quick look at https://www.ssllabs.com/ssltest/clients.html shows that even older clients (Android 2.3, Java 6, the oldest supported version of IE, etc) support TLS 1.0, so there should be no issues disabling SSLv3 on servers too."
But IE8 is readily available on XP, so who would use IE6?
But Firefox 34 disabling SSL 3.0 by default will disable it even for nontechnical users. That should force the remaining non-TLS-compatible servers and middleboxes to be fixed.
On the server side, hopefully the knee-jerk reaction of disabling SSL 3.0 will have the same effect, forcing the remaining non-TLS-compatible clients and middleboxes to be fixed.
That could allow the SSL 3.0 code to finally be removed from both clients and servers, as happened with the older SSL 2.0. And with it, the fallback code for extension-intolerant servers, so clients can count on being able to use extensions.
(Except please note for the purposes of this question I'm assuming as a given that cutting off SSLv3 is considered preferable by the entity in question to a very weak SSL negotiation. Whether or not any given entity should have that opinion is a different question; I politely ask that you get into that question elsewhere.)
tl;dr: practically, yes, we want to move away from SSLv3 ASAP, though it isn't totally broken yet.
TLS_FALLBACK_SCSV has the danger of becoming that, but in the short term it doesn't seem like it actually does much in a world where, AFAIK, not very many SSL connections support it.
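For reference, the server-side check the fallback-SCSV draft asks for is tiny. A Python sketch of the logic (constants abbreviated; obviously not a full TLS stack):

    # If a client advertises TLS_FALLBACK_SCSV while offering less than the
    # best version we share, a downgrade (attack or glitch) is in progress.
    TLS_FALLBACK_SCSV = 0x5600
    SSL30, TLS10, TLS11, TLS12 = 0x0300, 0x0301, 0x0302, 0x0303
    SERVER_SUPPORTED = {SSL30, TLS10, TLS11, TLS12}

    def check_client_hello(client_version, cipher_suites):
        if TLS_FALLBACK_SCSV in cipher_suites and client_version < max(SERVER_SUPPORTED):
            raise ValueError("fatal alert: inappropriate_fallback")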
> Disabling SSL 3.0 support, or CBC-mode ciphers with SSL 3.0, is sufficient to mitigate this issue, but presents significant compatibility problems, even today.
And it's only implemented by BoringSSL, i.e., Google Chrome and Google's web servers?
For Apache: SSLProtocol all -SSLv2 -SSLv3
For nginx: ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
Nartac IIS Crypto gives a nice GUI to manage all Protocols, Ciphers, Hashes and Key Exchanges https://www.nartac.com/Products/IISCrypto/
I'll continue to grow the list as I see/read more.
We also wrote one, at https://www.tinfoilsecurity.com/poodle
RC4 is mentioned in passing as having weaknesses, but is it actually broken? If we can't disable SSL3 completely, would using only RC4 ciphers be an option?
"Our second attack applies to TLS and can be carried out in a single connection or session (but tolerates multiple connections/sessions). It exploits certain double-byte biases in RC4 keystreams (the Fluhrer-McGrew biases). It targets plaintext bytes located at any position in the TLS plaintext stream. The number of encryptions needed to reliably recover a set of 16 consecutive targeted plaintext bytes is around 10 times 2^30, but already with only 6 times 2^30 sessions, these target bytes can be recovered with 50% reliability. Since this double-byte bias attack does not require the TLS Handshake Protocol to be rerun, it can in practice be more efficient than our single-byte bias attack."
This suggests to me that a possible workaround could be to detect the attack, since it generates a characteristic pattern of the occasional successful record amongst many invalid ones, and then expire the relevant cookies; by the time the attacker has figured out a byte or two, the cookie has already become useless. It could potentially turn into a denial of service, but that's something anyone with MITM capability can do trivially anyway.
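A minimal sketch of that idea, with hypothetical hook names into the TLS frontend and session store (the threshold would need tuning per site):

    from collections import Counter

    failures = Counter()
    THRESHOLD = 64  # well below the ~256 tries a single byte costs on average

    def on_record_failure(session_id):
        # Assumed hook: the TLS frontend calls this on a padding/MAC failure.
        failures[session_id] += 1
        if failures[session_id] >= THRESHOLD:
            expire_session(session_id)
            del failures[session_id]

    def expire_session(session_id):
        # Assumed hook into the session store: invalidate the cookie,
        # forcing a re-login before any byte of it has leaked.
        pass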
"If you are encountering trouble with inbound Twilio requests while mitigating the SSLv3 vuln, contact email@example.com for direct help."
(That is, they have to manually enable TLS on your account.)
Also, if you're using GET requests with ExactTarget, you'll run into the same thing, but I haven't heard back from them about if/when they'll have that fixed.
You need 256 requests on average to get one byte of cleartext. A 160-bit session key is likely to be 40 bytes once hex-encoded, so that's 10,240 requests to get the session key (it might actually be less if you can take shortcuts, since you know the result bytes can only be 0-9a-f). You also need to be a man in the middle.
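The arithmetic, spelled out (the 1/256 is the chance the forged record's padding-length byte comes out right):

    per_byte = 256             # geometric with p = 1/256, so 256 requests expected
    hex_len = 160 // 4         # a 160-bit key is 40 characters when hex-encoded
    print(per_byte * hex_len)  # 10240 expected requests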
As a general rule, review your logs before disabling things. And ask your users to use modern browsers as soon as possible.
As for legacy clients, it's time to firewall them from the internet. They make the internet a more dangerous place for everybody. If your device/program cannot be updated, then it needs to be put on a VLAN'd network segment. Breach after breach has shown that attackers will use any exploitable resource to gain a foothold inside your network.
There's also the old trick of giving the user a .reg (or if that's too crude, a simple executable) to poke the relevant settings directly in the registry.
(Poking directly in the registry can break if Microsoft changes the way that setting is stored, but they won't change anything on XP anymore, so it's as safe as poking in the registry usually is.)
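For example, something along these lines flips IE's SecureProtocols bitmask (0x20 is SSLv3, 0x80 is TLS 1.0; the value is documented, but verify it on a scratch machine first):

    Windows Registry Editor Version 5.00

    ; Enable TLS 1.0 only; drop SSLv2 (0x08) and SSLv3 (0x20) from the bitmask.
    [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
    "SecureProtocols"=dword:00000080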
I'm already using the beta; it is very stable.
In Firefox, you can also set security.tls.version.min to 1 in about:config.
In Chrome, set the command-line flag --ssl-version-min=tls1.
I've seen people post figures like 0.85% of HTTPS connections being SSL 3.0, and was wondering how those figures were compiled.
An old writeup of mine on TLS downgrade, if anyone's interested.
The real question is why it took major site ops this long to realize. Given a trove of handshakes (which Google has been saving for years), user-agent headers, and expected cipher suites, it perhaps should not have been too difficult to detect downgrade attacks in the wild. That doesn't in itself give you POODLE, but it probably offers some clues, especially given other information available to them.
(1) It sped up the implementation of TLS 1.1 and TLS 1.2. Removing the UI meant I didn't have to redo the UI when I added TLS 1.1 and TLS 1.2 support. Keeping that UI would have caused TLS 1.1 and TLS 1.2 to be delayed even further.
(2) Our measurements showed that ~1% of users had disabled TLS 1.0, leaving only SSL 3.0 enabled. I was worried that the UI encouraged people to make that mistake. Also, it is the application's responsibility to do the right thing automatically. People shouldn't have to fiddle with such options to be secure.
I wouldn't dispute that disabling the SSLv3 GUI prefs in Firefox may have had significant positive impacts on many users, or that many users are clueless about crypto in general, as you seem to be implying. But it's worth considering that those decisions also have implications that may be convenient for other parties, like dramatically increasing the activation energy required to configure a browser for security.
Another example is how long it currently takes to configure, via the GUI, distrusting (or at least warning users) when Firefox relies on CAs that users from a given country may almost never encounter. Should your average Firefox user in the United States always silently trust Turkmenistan's CA out of the box?