I'm not entirely sure, because I'm in a car waiting to pick up my daughter from play practice while I consider it, but I think this is worse than BEAST. It's slower, but it's easier.
POODLE seems to be a padding oracle based on SSL 3.0's inability to fully validate padding. The oracle only gives you the last byte of a block; a full extended padding oracle gives you successive bytes, but this vulnerability doesn't. The authors sidestep that problem by using application-layer control of the boundaries of blocks to repeatedly line up the byte they want to learn with the last byte of padding the vulnerability reveals. C l e v e r !
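Roughly, the byte-recovery arithmetic looks like this (a toy Python sketch of my reading of the paper, not the authors' code; all names are mine):

    BLOCK = 16  # AES block size; SSL 3.0's padding-length byte is then 15

    def recovered_byte(accepted, c_prev_target_last, c_prev_final_last):
        """The MITM copies the ciphertext block holding the target byte over
        the final block, which the request was arranged to make pure padding.
        SSL 3.0 only checks that the last decrypted byte equals BLOCK-1, so
        when the server accepts (about 1 time in 256):
            D_k(C_target)[15] ^ C_final_prev[15] == 15
        and CBC then hands over the plaintext byte by XORing in the block
        that precedes the target block."""
        if not accepted:
            return None  # try again; a fresh request re-randomizes the ciphertext
        return (BLOCK - 1) ^ c_prev_final_last ^ c_prev_target_last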
The difference, I think, between POODLE and BEAST is that BEAST needed not just blockwise "chosen-boundary" control over plaintext, but also a continuous channel that would provide the client with fine-grained control over the first N bytes of each request. It didn't work (IIRC) with vanilla Javascript.
> The difference, I think, between POODLE and BEAST is that BEAST needed not just blockwise "chosen-boundary" control over plaintext, but also a continuous channel that would provide the client with fine-grained control over the first N bytes of each request. It didn't work (IIRC) with vanilla Javascript.
Sure you have control over the first N bytes. Look at the request-line: "GET /hello/world/this/is/my/url HTTP/1.1". Sure, you don't control the spaces, but you can assume any practical implementation uses a single space character. Combine that with control over the method (with XHR or statically through <form> or <img>) and the path, you're in business.
AIUI, the JS PoC took a fair bit of trial-and-error to find the right value of N, but all these attacks require a fair number of requests anyway, so it's not really significant.
> Sure you have control over the first N bytes. Look at the request-line: "GET /hello/world/this/is/my/url HTTP/1.1". Sure, you don't control the spaces, but you can assume any practical implementation uses a single space character. Combine that with control over the method (with XHR or statically through <form> or <img>) and the path, you're in business.
tptacek is right. To make BEAST work we had to control _all_ the bytes of the very first block. We tried very hard to make it work with Javascript, but we couldn't. Java applet (and maybe Flash) was the only tool that gave us that kind of control.
The vulnerability only shows the last byte of the cipher block, but the exploit lets you decide which byte ends up as the last byte of a block, thereby allowing you to recover pretty much any byte you want.
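In other words, the attacker only needs some modular arithmetic at the application layer. A hedged Python sketch, with made-up offsets:

    BLOCK = 16

    def path_pad_for(target_offset):
        """Extra URL-path characters so the chosen cookie byte lands in the
        last position (15) of its 16-byte block; each extra path character
        shifts everything after the request-line, cookies included, by one."""
        return (15 - target_offset) % BLOCK

    def body_pad_for(data_plus_mac_len):
        """Extra request-body bytes so the plaintext fills whole blocks and
        SSL 3.0 appends a full block of padding -- the block the attacker
        then overwrites with the target ciphertext block."""
        return (-data_plus_mac_len) % BLOCK

    # e.g. a cookie byte at offset 100 needs (15 - 100) % 16 == 11 extra
    # path characters; repeat with the next offset to walk along the cookie.
    print(path_pad_for(100), path_pad_for(101))   # -> 11 10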
We saw this with Heartbleed too: given sure confidence that there is a vulnerability in a particular diff, skilled security researchers can find it very quickly. It makes me want to find such researchers and firmly tell them that there are vulnerabilities in TLS 1.2.
> We saw this with Heartbleed too: given sure confidence that there is a vulnerability in a particular diff, skilled security researchers can find it very quickly.
That reminds me of the old QA technique / tactic of only telling the developers where a bug was found. Sometimes you don't even have to find an issue, you can just pick a complicated module.
There was a study done decades ago where code was seeded with a number of bugs, and the developers were told how many had been seeded. They found that many, but the overlap was not total. That is to say, they found bugs that the study authors had not seeded.
Phew, I sure am glad that we instituted a company-wide policy of not putting any bugs in the code in the first place. Just makes life easier all around.
Well, I remember reading at the time that if one picked an arbitrary number and told the developers there were that many bugs, they would find that many, even if none had been inserted.
When Pornin disclosed CRIME I was telling myself: "Wait a minute, I saw this name somewhere." Then I remembered that it was Pornin's article [1] that helped me understand how zlib flush modes work.
There was a lot of Twitter traffic about SSL3 being bad. But TLS1 has very few differences from SSL3.
In the case of Heartbleed, we knew that it was introduced with a particular OpenSSL version. That version had 12 changes; 8 were DTLS-related, two were clearly not relevant, and one was heartbeat. Dan Franke took about fifteen minutes to go from that set of inputs to a sketch of Heartbleed.
I now advocate QA by what I call "virtual tests": you don't have to write the tests, just the failure messages. Developers will do the rest of the work on their own.
That announcement implied that the vulnerability was SSL 3.0 only. Knowing that, the obvious place to look would be the differences between SSL 3.0 and its successor, TLS 1.0.
For those who want to verify their change after updating the configuration (since I looked for such information): either use "nmap --script ssl-enum-ciphers -p 443 <host>" or ssllabs (eg: https://www.ssllabs.com/ssltest/analyze.html?d=wisecashhq.co...) and you will see the enabled protocols.
If you use SNI (many domains on one IP), make sure to put this (ssl_protocols TLSv1 TLSv1.1 TLSv1.2;) into the default server { ... } config (or the first one: the one which opens when you go to https://your_server_ip).
The problem is that if you specify ssl_protocols somewhere other than in the default/first server block, it won't work. So yes, serve an error from the default server, but specify ssl_protocols there as well.
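Something along these lines, as a minimal sketch (certificate paths and server names are placeholders):

    server {
        listen 443 ssl default_server;   # the block whose ssl_protocols actually applies
        server_name _;
        ssl_certificate     /etc/ssl/fallback.crt;
        ssl_certificate_key /etc/ssl/fallback.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        return 403;                      # unknown hosts just get an error
    }

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;   # repeat here too, for clarity
        # ... the real site config ...
    }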
With name-based virtual hosts (those that rely on the server selecting the appropriate resource based on the Host header), typical clients depend on the IP address returned by DNS for that host. If they visit that IP, ask for the host, and the server isn't configured to deliver that host's resources, it's good practice to give the client an error. Since the web server has to listen on that IP without knowing which host will be requested before the connection is made, it's convenient to have a fallback and handle errors there. I deny all access to the default host, which generates a 403 Forbidden error (with a custom message), but there are definitely other ways to deal with this situation.
The important thing is that a host's protected resources are served only when SSL/TLS is properly negotiated. Serving one host's content as the default when another host was requested violates this.
In practice, nearly all of these requests come from bots, crawlers and penetration testers. So another advantage is that the log entries can be used to block further requests at the firewall, freeing resources and even possibly protecting the server from undisclosed vulnerabilities (test this approach carefully to make sure it's appropriate for your site and doesn't subject you to a DoS).
Disabling SSLv3 will indeed affect a significant number of clients in the real world.
I've seen a few commenters here on HN that point out that pretty much everything since Windows XP (ignoring IE6) supports at least 1.0 of the TLS protocols. While that may be correct in theory, in practice it's not.
At a 1MM+ visitors/week site we still see a few percent of our users that regularly connect using SSLv3 across different versions of Windows, including more modern ones such as Windows Vista, 7 and 8(!)
Though I'm not sure why this is the case, antivirus software suites such as McAfee[1] have in the past been known to disable TLS 1.0 system wide in Windows.
"[...] Also, handshake errors due to network glitches could similarly be misinterpreted as interaction with a legacy server and result in a protocol downgrade."
"For clients, a quick look at https://www.ssllabs.com/ssltest/clients.html shows that even older clients (Android 2.3, Java 6, the oldest supported version of IE, etc) support TLS 1.0, so there should be no issues disabling SSLv3 on servers too."
I think it's probably safe to say that anyone who's still using IE6 is either someone who cannot change the defaults (by policy or by skill) or someone whose machine is already malware-infested.
You don't have to wait for Firefox 34. Go to about:config and set security.tls.version.min to 1. That will force the browser to use only TLSv1 or newer.
I've done that since before security.tls.version.min existed (the pref had another name before, and allowed nonsensical combinations like allowing TLS 1.0 and SSL 2.0 but not SSL 3.0).
But Firefox 34 disabling SSL 3.0 by default will disable it even for nontechnical users. That should force the remaining non-TLS-compatible servers and middleboxes to be fixed.
On the server side, hopefully the knee-jerk reaction of disabling SSL 3.0 will have the same effect, forcing the remaining non-TLS-compatible clients and middleboxes to be fixed.
That could finally allow the SSL 3.0 code to be removed from both clients and servers, as happened with the older SSL 2.0. And with it, the fallback code for extension-intolerant servers, so clients can count on being able to use extensions.
Check me on this: TLS_FALLBACK_SCSV will prevent downgrade attacks, yes. However, any "real" SSLv3 connections will still be SSLv3 and still fully vulnerable to the described attack. Downgrading is not a necessary component of the attack; it just increases the number of vulnerable client/server combinations to include those that would normally not be vulnerable due to negotiating TLS 1.0+. Therefore, if you are in a position where you truly care about security and the fact that SSLv3 has such an enormous hole in it is unacceptable, you should still be looking at simply turning off SSLv3 as the only acceptable mitigation, even if that does cut some clients off.
Yes?
(Except please note for the purposes of this question I'm assuming as a given that cutting off SSLv3 is considered preferable by the entity in question to a very weak SSL negotiation. Whether or not any given entity should have that opinion is a different question; I politely ask that you get into that question elsewhere.)
Thanks. I wanted to check because there's a common pattern in these sorts of security incidents where some suggestion for possible mitigation gets made (note how I weakened that phrase) and by the time the game of security telephone is played out over a few panicky hours it turns into "just do X and the problem is solved!". For instance, "just switch to RC4!" was not exactly the most solid of advice for BEAST, if one actually thought about it, but it was frequently presented as The Solution (TM). (Not necessarily by knowledgeable people, of course, but by enough people.)
TLS_FALLBACK_SCSV has the danger of becoming that, but in the short term it doesn't seem like it actually does much in a world where, AFAIK, not very many SSL connections support it.
> Disabling SSL 3.0 support, or CBC-mode ciphers with SSL 3.0, is sufficient to mitigate this issue, but presents significant compatibility problems, even today.
I'm starting to wonder how many other important vulnerabilities I'm missing because their discoverer didn't come up with a catchy name like Heartbleed, Shellshock, or POODLE.
I made a "non-technical" Poodle scan reporter at: https://www.expeditedssl.com/poodle if anybody needs to convince someone in their organization that a problem needs addressing and to take action.
It seems that this is an attack on the CBC-mode ciphers but doesn't change anything about the RC4 ciphers.
RC4 is mentioned in passing as having weaknesses, but is it actually broken? If we can't disable SSL3 completely would using only RC4 ciphers be an option?
"Our second attack applies to TLS and can be carried out in a single connection or session (but tolerates multiple connections/sessions). It exploits certain double-byte biases in RC4 keystreams (the Fluhrer-McGrew biases). It targets plaintext bytes located at any position in the TLS plaintext stream. The number of encryptions needed to reliably recover a set of 16 consecutive targeted plaintext bytes is around 10 times 2^30, but already with only 6 times 2^30 sessions, these target bytes can be recovered with 50% reliability. Since this double-byte bias attack does not require the TLS Handshake Protocol to be rerun, it can in practice be more efficient than our single-byte bias attack."
"Usually, the server will reject this record, and the attacker will simply try again with a new request. Occasionally (on average, once in 256 requests), the server will accept the modified record."
This suggests to me that a possible workaround could be to detect this attack because it will generate the characteristic pattern of a successful record amongst many invalid ones, and then expire the relevant cookies; by the time the attacker has figured out a byte or two, the cookie has already become useless. It could potentially turn into a denial-of-service, but that's something anyone with MITM capability can do trivially anyway.
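A very rough sketch of that idea in Python (the hook into the TLS stack and the session store are hypothetical, and the threshold is a guess):

    from collections import defaultdict

    FAILURE_THRESHOLD = 64   # well below the ~256 tries POODLE needs per byte

    class PaddingOracleWatch:
        """Count bad_record_mac-style failures per client and kill the session
        before the attacker can finish recovering even one byte."""
        def __init__(self):
            self.failures = defaultdict(int)

        def on_record_failure(self, client_ip, session_id, expire_session):
            self.failures[client_ip] += 1
            if self.failures[client_ip] >= FAILURE_THRESHOLD:
                expire_session(session_id)          # cookie becomes worthless
                self.failures.pop(client_ip, None)  # reset for that client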
"If you are encountering trouble with inbound Twilio requests while mitigating the SSLv3 vuln, contact help@twilio.com for direct help."
(That is, they have to manually enable TLS on your account.)
Also, if you're using GET requests with ExactTarget, you'll run into the same thing, but I haven't heard back from them if / when they'll have that fixed.
Practically speaking, how broken is SSLv3.0 now? Are we hours, days, weeks, months, or years from someone actually getting out there and exploiting this?
This is exploitable now. Padding oracles have been found before, and they are even homework assignments in the Coursera cryptography course. The specifics of this bug make it a little bit harder, but the paper explains how to work around that. Expect this bug to be exploited in the wild within minutes of (or perhaps even before) the release.
You need about 256 requests to get one byte of cleartext. A 160-bit session key is likely sent as 40 hex characters, so that's around 10,240 requests to get the whole key; it might actually be less if you can take shortcuts because you know the result bytes can only be 0-9a-f. You also need to be a man in the middle.
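Back-of-the-envelope, assuming a hex-encoded 160-bit token:

    ACCEPT_PROB = 1 / 256          # only one value of the last padding byte is accepted
    per_byte = 1 / ACCEPT_PROB     # expected requests per recovered byte (geometric mean)
    token_chars = 160 // 4         # 160 bits as hex -> 40 characters
    print(int(per_byte * token_chars))   # -> 10240 requests on average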
To business owners and large sites operators out there: before disabling SSLv3, make sure that none of your clients/customers/users are stuck on IE6. We still see significant IE6 traffic coming from China. Some legacy clients are also stuck on SSLv3.
As a general rule, review your logs before disabling things. And ask your users to use modern browsers as soon as possible.
As a web app developer focused on oil- and healthcare-related apps for smaller businesses: we disabled SSL3 early this year, after XP was deprecated. The vast majority of our clients had already upgraded to a newer version of Windows, but the few stuck on XP were able to use Chrome or Firefox to access the programs.
As for legacy clients, it's time to firewall them off from the internet. They make the Internet a more dangerous place for everybody. If your device or program cannot be updated, then it needs to be put on a VLAN'd network segment. Breach after breach has shown that attackers will use any exploitable resource to gain a foothold inside your network.
If you absolutely have to use IE6, go to Internet Options' Advanced tab and check TLS 1.0, and while you are at it, uncheck SSL 2.0. But of course the preferred solution is to upgrade, and while you are at it, please also update to XP SP3 if you haven't already. There is no WGA check in the WinXP service packs in general, despite such misconceptions.
I'd assume a user stuck on IE6 without the necessary technical expertise can follow a detailed guide full of pretty pictures written by someone else to change security parameters.
There's also the old trick of giving the user a .reg (or if that's too crude, a simple executable) to poke the relevant settings directly in the registry.
(Poking directly in the registry can break if Microsoft changes the way that setting is stored, but they won't change anything on XP anymore, so it's as safe as poking in the registry usually is.)
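For example, the same poke done from Python instead of a .reg file (this assumes IE keeps the checkbox state in the commonly documented SecureProtocols DWORD; verify the value name and bitmask on a test machine before handing it to anyone):

    import winreg   # "_winreg" on the Python 2 builds you'd realistically have on XP

    INTERNET_SETTINGS = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings"
    TLS10_ONLY = 0x80   # assumed bitmask: 0x08 SSL 2.0, 0x20 SSL 3.0, 0x80 TLS 1.0

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, INTERNET_SETTINGS, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "SecureProtocols", 0, winreg.REG_DWORD, TLS10_ONLY)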
Sorry if I've missed this or it's been posted elsewhere, but is there a way of having Apache log which SSL protocol version is being used for SSL/TLS connections?
I've seen people post figures like 0.85% of HTTPS connections have been SSL 3.0 and was wondering how those figures were compiled.
This has pretty large implications for countries (namely China) that still have a sizable IE6 userbase. IE6 doesn't support TLS by default, making https effectively completely unsupported for that browser.
Isn't it an option? I remember IE5 having TLS 1.0 as an option. Just because it's not enabled by default doesn't mean it's "effectively completely unsupported". Browser protocol support matrix here:
this may help clarify why Firefox dragged its feet for so long to enable TLS 1.2 support by default...from the timeline, we could probably make some guesses as to when certain agencies took notice of this attack.
the real question is why it took major site ops this long to realize. given a trove of handshakes (which Google has been saving for years), user-agent headers, and expected ciphersuites, it perhaps should not have been too difficult to detect downgrade attacks in the wild. that doesn't in itself give you POODLE, but it probably offers some clues...especially given other information available to them.
I made the change to remove the version UI from Firefox. I made that change for two reasons:
(1) It sped up the implementation of TLS 1.1 and TLS 1.2. Removing the UI meant I didn't have to redo the UI when I added TLS 1.1 and TLS 1.2 support. Keeping that UI would have caused TLS 1.1 and TLS 1.2 to be delayed even further.
(2) Our measurements showed that ~1% of users had disabled TLS 1.0, leaving only SSL 3.0 enabled. I was worried that the UI encouraged people to make that mistake. Also, it is the application's responsibility to do the right thing automatically. People shouldn't have to fiddle with such options to be secure.
i was not implying that POODLE affected TLS 1.0/1.1 in my (downvoted) parent comment, but was pointing out that the scope of ciphersuites in Firefox has been effectively constrained for a large number of users by defaults and GUI toggles vs. enabled about:config prefs for quite some time now.
i wouldn't dispute that disabling sslv3 gui prefs in Firefox may have had significant positive impacts on many users or that many users are clueless about crypto in general as it seems you're implying, but it's worth considering that those decisions also have implications that may be convenient for other parties, like dramatically increasing the activation energy to configuring a browser for security.
another example is how long it currently takes to configure via GUI distrusting or at least warning users when Firefox relies on CAs that users from a given country may almost never encounter. should your average Firefox user in the United States always silently trust Turkmenistan's CA out of the box?
> POODLE seems to be a padding oracle based on SSL 3.0's inability to fully validate padding. The oracle only gives you the last byte of a block; a full extended padding oracle gives you successive bytes, but this vulnerability doesn't. The authors sidestep that problem by using application-layer control of the boundaries of blocks to repeatedly line up the byte they want to learn with the last byte of padding the vulnerability reveals. C l e v e r !

> The difference, I think, between POODLE and BEAST is that BEAST needed not just blockwise "chosen-boundary" control over plaintext, but also a continuous channel that would provide the client with fine-grained control over the first N bytes of each request. It didn't work (IIRC) with vanilla Javascript.
This attack, however, seems to.