Hacker News
The state of TLS ciphers (cloudflare.com)
61 points by jgrahamc on July 13, 2013 | hide | past | favorite | 29 comments

While this is an excellent summary of the RC4 attack and a good prioritized list of ciphers to use, it's still important to note that the vast majority of TLS-related vulnerabilities involve improper use of TLS libraries, plain old misconfiguration, and questionable certificate authorities. See "The Most Dangerous Code in the World" (https://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf) for a sobering account.

Having personally implemented certificate validation logic (for tlslite, though it's not mainlined yet), my own sense is that the X.509 validation process has now become so complicated that it violates any reasonable person's definition of a minimal "trusted computing base" (cf. microkernels for more philosophical discussion of minimality as a security goal). For example, see section 7 of RFC 5280: name validation alone now subsumes all the complexity of Unicode normalization.
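To make the Unicode point concrete, here's a minimal Python sketch of why name comparison can't just be a byte-wise memcmp (the strings are illustrative):

```python
import unicodedata

# "café" encoded two different ways: precomposed vs. combining accent.
composed = "caf\u00e9"      # U+00E9 LATIN SMALL LETTER E WITH ACUTE
decomposed = "cafe\u0301"   # "e" followed by U+0301 COMBINING ACUTE ACCENT

# A naive codepoint-by-codepoint comparison says the names differ...
print(composed == decomposed)   # False

# ...so RFC 5280-style name matching has to normalize first.
print(unicodedata.normalize("NFC", composed) ==
      unicodedata.normalize("NFC", decomposed))   # True
```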

For anyone wondering, Wikipedia has a nice table that shows which browsers support TLS 1.1/1.2 and which don't.



- Chrome Stable supports TLS 1.1; Chrome Beta/Canary supports TLS 1.2

- Firefox Stable supports TLS 1.0. You can enable TLS 1.1 by setting "security.tls.version.max" to "2" in about:config. Support is not considered stable, and you might have some problems: https://bugzilla.mozilla.org/show_bug.cgi?id=733647

- Firefox Aurora/Nightly supports TLS 1.0. You can enable TLS 1.1/1.2 by setting "security.tls.version.max" to "3" in about:config. As with Firefox Stable, this is not considered stable and you might have some problems.

- Opera 15 supports TLS 1.1.

- Opera 12 supports TLS 1.0. You can enable TLS 1.1/1.2 by checking "Enable TLS v1.1" and "Enable TLS v1.2" in opera:config.

- IE 10 supports TLS 1.0. You can enable TLS 1.1/1.2 by enabling them in Windows and forcing IE to use them. See http://netsekure.org/2009/10/tls-1-2-in-windiows-7/

You can use sites like https://sni.velox.ch/ and https://cc.dcsec.uni-hannover.de/ to see which version of TLS your browser is using.
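For the non-browser side, a quick Python sketch shows which protocol versions your local TLS library was built with (this checks Python's OpenSSL build, not your browser; the HAS_* flags need Python 3.7+):

```python
import ssl

# Which TLS versions does the local OpenSSL build support?
# (This checks your TLS library, not your browser.)
print(ssl.OPENSSL_VERSION)
for name in ("TLSv1", "TLSv1_1", "TLSv1_2", "TLSv1_3"):
    print(name, getattr(ssl, "HAS_" + name, False))
```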

> That's what the latest RC4 attack shows. Specifically, there are biases in the first 256 bytes of keystream generated by RC4. Those first bytes are likely to be used to encrypt the start of an HTTP request and may include sensitive information such as a cookie used for logging into a web site.

I didn't pick this up from the previous discussions on the vulnerability. Basically, RC4 is vulnerable under an HTTP request/response usage pattern--one where you make lots of little TLS connections that share some initial plaintext worth protecting.
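The bias is easy to reproduce with a toy RC4 implementation (a pure Python sketch, obviously not production crypto): with a fresh random key per "connection", the second keystream byte comes out 0 about twice as often as chance, which is exactly the kind of statistic that many short connections let an attacker accumulate over a repeated plaintext prefix.

```python
import random

def rc4_keystream(key, n):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Each TLS connection uses a fresh key; measure the second keystream
# byte across many "connections". Uniform would be 1/256 ~ 0.0039, but
# the Mantin-Shamir bias makes P[Z_2 = 0] ~ 2/256.
random.seed(1)
trials = 20000
zeros = sum(
    rc4_keystream([random.randrange(256) for _ in range(16)], 2)[1] == 0
    for _ in range(trials)
)
print(zeros / trials)  # close to 2/256, roughly double the uniform rate
```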

In other words, if you've written a webapp which does everything over one long-lived TLS websocket (including doing the client authentication after the connection upgrade using websocket messages, rather than transmitting that in a cookie), then you have much less to worry about, and RC4 can probably be safely preferred to AES-CBC ciphers in your ciphersuite configuration. Though you still might consider padding your websocket connection with an initial 256 bytes of meaningless data that the server knows to ignore (which is not much additional overhead relative to the lifetime of the socket, really.)

I don't think this is true. I think you're assuming that the vulnerability has anything to do with how your app works, when it in fact targets browser behavior. The attacker uses (e.g.) Javascript, loaded from some random site, to generate a bajillion connections to your site, each of which will bear session cookies.

The 256 byte padding doesn't help either, because of the Fluhrer-McGrew biases.

Are the Fluhrer-McGrew biases actually exploitable? After all, they've been known for over ten years, but no one seems to have been particularly concerned about them before.

Yes, the summary of the full Paterson/Bernstein paper says the attacks on the Fluhrer-McGrew biases are easier in practice to exploit than the ones on the first 256 bytes --- which also were known for over ten years but weren't taken seriously until last year.

It's not easily exploitable, but it can be done. Elias Yarrkov's page on this: https://cipherdev.org/rc4_2013-03-13.html

That still assumes you use session cookies at all. As I said, you can keep authentication entirely within the confines of the websocket connection, deliver only non-user-specific static assets over HTTP(S), and persist your authentication token client-side in localStorage/sessionStorage, thus avoiding cookies entirely. Which is actually the best practice for libraries like SockJS, since they rely on (and encourage the use of) cross-domain connections that can't see your cookies.

Of course, using localStorage (which doesn't have the equivalent to cookies' HTTPOnly attribute) makes the Javascript-injection attack even more effective--but that's a bit of an airtight hatchway problem[1]. If you already have control of the user's DOM, you can just make it look like they were logged out and phish their credentials out of them.

[1] http://blogs.msdn.com/b/oldnewthing/archive/2007/09/20/50027...

Your analysis assumes that the client won't immediately reconnect if the attacker disconnects it. That's not a given.

I'm curious why CloudFlare includes !EDH as part of their cipher suite options... Guessing they were just following the recommendations from https://community.qualys.com/blogs/securitylabs/2011/10/17/m..., but having a specific reason would be interesting to know. EDH ciphers are generally slower (so I could see why a company that wants connections to be fast wouldn't want to use them), but at the same time, they also allow for forward secrecy. Seems like CloudFlare might want to prioritize forward secrecy over speed, as per http://blog.cloudflare.com/cloudflare-prism-secure-ciphers. Maybe I'm missing something here, though... Perhaps that !EDH should be !ADH?

From Adam Langley, who works at Google on their SSL stack, in his post "How to botch TLS forward secrecy":

In the case of multiplicative Diffie-Hellman (i.e. DHE), servers are free to choose their own, arbitrary DH groups. <snip> ... it's still the case that some servers use 512-bit DH groups, meaning that the connection can be broken open with relatively little effort.

The full article is at https://www.imperialviolet.org/
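For context, the DHE exchange itself is simple; what Langley is pointing at is the server's freedom in choosing the group (p, g). A toy sketch (the 127-bit Mersenne prime below is for illustration only; real groups should be 2048 bits or more, e.g. the standardized RFC 3526 groups):

```python
import secrets

# Toy finite-field Diffie-Hellman (DHE). In TLS the *server* picks the
# group (p, g), and the exchange is only as strong as p is large.
p = 2**127 - 1   # a Mersenne prime; far too small for real use
g = 3

a = secrets.randbelow(p - 2) + 1   # server's ephemeral secret
b = secrets.randbelow(p - 2) + 1   # client's ephemeral secret
A = pow(g, a, p)                   # server -> client
B = pow(g, b, p)                   # client -> server

# Both sides derive the same shared secret; an eavesdropper sees only
# p, g, A, B and must solve a discrete log in a group of p's size.
assert pow(B, a, p) == pow(A, b, p)
print(p.bit_length())  # 127 -- a real DHE group should be >= 2048 bits
```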

Nice write-up of the current state. I've been more and more interested in the process of moving to ECC. Some CAs are issuing dual certs (ECC and RSA), but I'm not sure how good support in browsers and client libs is.

One key takeaway from a security course I followed at uni was to never use a stream cipher if avoidable. They tend to be much more malleable than block ciphers, and are generally less understood.

Yeah, that security course was wrong. And, for what it's worth, the "safe" ciphersuite in TLS 1.2, AES-GCM, is also a stream cipher.

Not to mention AES-CTR if you want a non-AEAD mode. I think the bias against stream ciphers has something to do with the failure of all stream ciphers submitted to NESSIE, but the eStream project has yielded useful stream ciphers.
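The point that CTR turns a block cipher into a stream cipher, and inherits stream-cipher malleability (which is what GCM's authentication tag fixes), can be sketched in a few lines. SHA-256 stands in for AES here so the example needs no external crypto library; this is an illustration, not a real cipher:

```python
import hashlib

def ctr_keystream(key, nonce, n):
    # CTR mode: keystream block i = F(key, nonce || counter). Any
    # keyed PRF works as F; SHA-256 stands in for AES here.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 16, b"n" * 8
pt = b"transfer $0000100 to account 42"
ct = xor(pt, ctr_keystream(key, nonce, len(pt)))

# Decryption is the same XOR, as with any stream cipher.
assert xor(ct, ctr_keystream(key, nonce, len(ct))) == pt

# Malleability: flipping ciphertext bits flips the same plaintext bits,
# so without authentication an attacker can edit the amount blind.
tampered = bytearray(ct)
tampered[10] ^= ord("0") ^ ord("9")   # pt[10] is the first "0" of the amount
print(xor(bytes(tampered), ctr_keystream(key, nonce, len(ct))))
```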

Well, you don't really offer any argument as to why it's wrong. Granted, my arguments are not really strong either. I have a hunch, however, that if you made stats about attacks against stream ciphers vs. attacks against block ciphers, you'd see proportionally more of the former.

Does anyone know why CloudFlare uses TLS internally? I'm assuming they mean over their intranet of web servers, so if those are behind a firewall with their iptables configured properly, then what's the point of using TLS?

We have 23 locations around the world where we have servers. Those servers need to be administered, monitored, backed up, rebooted, and synchronized. So we make extensive use of SSH and TLS.

Yes, there are all sorts of firewalls and iptables rules to create an 'intranet' across the world, but that doesn't stop the actual packets from passing over the public Internet. There's really no reason not to use TLS.

Is what you noted in the post still what you use for the nginx ssl_ciphers string? When I ran that through `openssl ciphers -v` I got wildly different ordering from what is shown as cipher priority in the post.

Ah. It looks like I've put the wrong configuration in the blog post. My fault. I shall update the blog post with the correct configuration.

Here's what we are currently using:

  ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers   on;
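As a sanity check, Python's stdlib ssl module parses the same OpenSSL cipher-string syntax, so you can expand a string without shelling out to `openssl ciphers -v` (the suite string below is illustrative, not CloudFlare's actual configuration; on OpenSSL 1.1.1+ the TLS 1.3 suites are listed regardless of the string):

```python
import ssl

# Expand an OpenSSL-style cipher string, as `openssl ciphers -v` would.
# The string here is illustrative, not CloudFlare's actual config.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+AES:!aNULL:!eNULL")
for c in ctx.get_ciphers():
    print(c["name"], c["protocol"])
```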

Why do you turn off the CAMELLIA ciphers?


It's long been a best practice for highly secure environments to use transport encryption between all tiers. The idea is that if someone compromises your datacenter, it's often a second-tier box, a forgotten web app, an ssh bastion host, etc. These boxes might not have any sensitive information on them.

However, the attacker might be able to use that box as a jumping off point to attack switches or other network devices where you could configure the network port of a compromised machine to receive a copy of all of the data being transmitted on the ports of more sensitive servers.

You should treat external firewalls and iptables like seatbelts in cars - yes, they're great, yes, you're a lot more likely to survive something terrible with them, but that doesn't mean you can drive recklessly and drunk.

If some of them are in different DCs or on non-dedicated hardware, it's probably to stop their comms from being intercepted.

Is it useful to stuff the first 257 bytes of an HTTP response with a set-cookie with a 1-second expiry containing random nonsense?


Why not? Sounds like a reasonable band-aid to me.

Because there is a viable attack on the whole keystream, not just the first 256 bytes.
