

RC4 is kind of broken in TLS - B-Con
http://blog.cryptographyengineering.com/2013/03/attack-of-week-rc4-is-kind-of-broken-in.html

======
tptacek
Attackers can use Javascript and/or browser plugins to coerce browsers into
making millions of requests to (say) GMail, specifying the URL to make the
session cookie line up at a specific point in the plaintext. Similar
techniques animated BEAST, CRIME, and Lucky 13.

Now, for sites that use RC4 (to mitigate BEAST and Lucky 13), attackers can
take advantage of a flaw in RC4: RC4's keystream output has biases throughout
the first 256 bytes. Over millions of trials, these biases make it possible to
use basic statistics to predict cookies, from the vantage point of a passive
attacker.
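
A rough sketch of the statistics involved (illustrative Python, not the
researchers' code; `bias[i][k]` is an assumed table of measured keystream
probabilities, and `ciphertext_bytes` holds the i-th ciphertext byte
collected from millions of sessions that all encrypt the same plaintext
byte at that position):

    from math import log

    def recover_byte(i, ciphertext_bytes, bias):
        # If the plaintext byte at position i is p, then ciphertext byte c
        # implies keystream byte c ^ p. Score each candidate p by how well
        # those implied keystream bytes fit the measured bias distribution.
        best_guess, best_score = None, float("-inf")
        for p in range(256):
            score = sum(log(bias[i][c ^ p]) for c in ciphertext_bytes)
            if score > best_score:
                best_guess, best_score = p, score
        return best_guess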

This is one of the easier-to-understand TLS flaws of the last few years. RC4
is simple: you key it and it spools out a hard-to-predict stream of random
bytes, which are XOR'd with the plaintext. If those bytes are somehow
predictable, the attack is obvious. Now that (a) we know how to get browsers
to generate millions of sessions (hello, BEAST) and (b) half the Internet is
now using RC4 (thanks, BEAST), these RC4 attacks have become germane.
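
For the curious, RC4 in its entirety is just this (a minimal sketch for
illustration, emphatically not for real use):

    def rc4(key, plaintext):
        # Key-scheduling: permute S under the key.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Keystream generation, XOR'd byte-by-byte with the plaintext.
        out, i, j = bytearray(), 0, 0
        for byte in plaintext:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

The biases in question mean the early values of `S[(S[i] + S[j]) % 256]` are
not uniformly distributed.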

My guess is that these attacks are (1) noisy, (2) slow, (3) unreliable, and
(4) expensive --- not in a "need a grid of PS3s to exploit" kind of way, but
in a "nobody will cast a broad net across the Internet with this attack" sort
of way.

But the vulnerability is also plainly unacceptable, in a way that I don't
think is true of "Lucky 13" (which isn't intractable to fix in application
code, just very annoying to fix).

It looks like we're finally reaching a place where TLS compat is going to stop
dictating what attacks we mitigate and how, because unless we opt to accept
the risk of poor "Lucky 13" fixes, there's no compatible way out. We need a
widely deployed authenticated ciphersuite.

~~~
stcredzero
_> unless we opt to accept the risk of poor "Lucky 13" fixes, there's no
compatible way out._

What particular class of fixes are you referring to by 'poor "Lucky 13"
fixes'? Do you mean that getting Google, Yahoo, Facebook, Amazon, Microsoft,
and Mozilla Foundation all to stop using a particular cipher is unworkable, or
that TLS is just too broken and needs to be replaced, or both?

~~~
tptacek
Adam Langley does a much better job explaining this than I can:
<http://www.imperialviolet.org/2013/02/04/luckythirteen.html>

As I understand it, the AES-CBC ciphersuites in TLS are not fundamentally
unworkable.

~~~
stcredzero
When one is in a game of whack-a-mole, something is probably broken somewhere.
It seems like your position comes down to this:

 _We programmers mostly know that ciphers are hard to design and should only
be designed by experts.

We programmers mostly know that protocols are also hard to design and should
only be designed by experts. In fact, it turns out they're even harder than
ciphers to get right.

What our community still isn't getting is that implementing ciphers,
protocols, and security software with current techniques is hard and should
only be done by experts._

It all smacks of the kind of problem newb programmers have if they haven't
studied concurrency and they decide to build their first concurrent system.
This makes me wonder if better tools could be made on the language level, like
a functional language and formal system designed to enable provably secure
systems, also accounting for timing attacks. Such languages would probably
have to leave out embedded systems, but might stand a reasonable chance of
covering desktop, mobile, and browser applications.

------
agwa
I had assumed that TLS didn't use plain RC4, but rather a variant of RC4 that
discards the first several hundred bytes of the keystream to avoid this
problem. But I just checked and apparently TLS uses straight-up RC4. Sigh...
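
Something like this, for example (a sketch of the "RC4-drop[n]" construction;
RFC 4345's arcfour256 for SSH discards the first 1536 keystream bytes, but
plain TLS discards nothing):

    def rc4_drop(key, plaintext, n=768):
        # Reuses an rc4(key, data) function like the sketch earlier in the
        # thread: encrypting n dummy zero bytes first consumes (and then
        # throws away) the biased start of the keystream.
        return rc4(key, b"\x00" * n + plaintext)[n:]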

I'm not sure why people so fervently recommend disabling AES-CBC (and SSLLabs
knocks you a whole letter grade if you haven't) when modern browsers and TLS
clients work around BEAST with "1/n-1" record splitting [1]. I figure out-of-
date browsers are probably vulnerable to _something_ that lets you hijack
sessions without needing BEAST anyways.

[1] <http://www.imperialviolet.org/2012/01/15/beastfollowup.html>
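
Roughly, the client-side fix looks like this (a sketch; `tls_encrypt_record`
stands in for whatever emits one CBC-encrypted TLS record):

    def write_application_data(data):
        # BEAST needs to predict the IV of the record carrying its chosen
        # plaintext. Splitting off a 1-byte record first means that record's
        # ciphertext (unpredictable to the attacker) becomes the effective
        # IV for the remaining n-1 bytes.
        if len(data) <= 1:
            return [tls_encrypt_record(data)]
        return [tls_encrypt_record(data[:1]), tls_encrypt_record(data[1:])]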

~~~
tptacek
First, the record splitting fix caused compat issues.

Second, even with the record splitting fix, you still have Lucky 13, which is
really annoying to fix completely, and people are wary of deploying a crypto
fix to their SSL library that in 5 years we'll just find out was window
dressing.

~~~
agl
> First, the record split problems caused compat issues.

It did, but I think we're through it now. IE, Firefox and Chrome have all had
it on for a while.

~~~
tptacek
What scares you more: AES-CBC with best-effort fixes for Lucky 13, or RC4?

~~~
agl
CBC at the moment, I think. There's still the possibility of clients without
record splitting, there are timing issues in AES itself, and there's lots of
room for bad server padding implementations.

I have TLS 1.2 and AES-GCM working in NSS on my desktop, but we don't
currently have the NSS reviewer time to get it landed. We also have to deal
with the issue that an active attacker can trigger a version downgrade
somehow. (I don't love AES-GCM either, but it's the nearest port in this
storm.)

In the meantime, I'll probably tweak RC4 in NSS to send one-byte records at
the beginning of the connection and thus burn off the first 256 bytes of
keystream by encrypting MACs. That still leaves a handful of bytes vulnerable,
but half of them will be "GET ".
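
(Back-of-envelope, assuming an RC4 + HMAC-SHA1 suite: each one-byte record
consumes one plaintext byte plus a 20-byte MAC from the keystream, so about
thirteen such records cover the first 256 bytes.)

    MAC_LEN = 20                            # HMAC-SHA1 tag in RC4 suites
    per_record = 1 + MAC_LEN                # keystream bytes per 1-byte record
    records_needed = -(-256 // per_record)  # ceil(256 / 21) == 13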

But I doubt that's fully sufficient so AES-GCM is the medium term goal.

~~~
harshreality
Is this what web security is reduced to? On one hand, openssl not releasing
TLS 1.2 support for over 3 years until a semi-panic over BEAST generated
enough interest to get it done? On the other, major browser vendors not
implementing TLS 1.2 (okay, MS did, but disabled by default) for even longer
because a lack of server support, compatibility issues, or fear of protocol
downgrade attacks made it an uninteresting or risky proposition? Nobody was
interested enough to say screw chicken-and-egg and simply get it done, on
either side?

OS maintainers aren't helping either. RHEL/CentOS 6.4, just released, is still
stuck with openssl 1.0.0, which doesn't support TLS 1.1 or 1.2, so there's
another few years of webservers running an "enterprise" OS not having TLS 1.2
support. But hey, OpenSSL 1.0.0 is "stable"!

I guess there's not much hope of getting Salsa20 or ChaCha, and better curves
like curve25519 (I see a draft proposing some Brainpool curves), into TLS 1.2?
(with the intent of discouraging many of the less desirable ciphersuites after
a few years... not adding more ciphersuites just to add more ciphersuites.)

~~~
tptacek
What is the advantage of getting Salsa20 or curve25519 into TLS 1.2?

~~~
aidenn0
djb says they're awesome!

But more seriously, Salsa20 is very efficient, so if RC4 is still being used
for performance reasons rather than security reasons, it would seem to me to
be a decent replacement.

~~~
tptacek
RC4 is fast, but it's being used because Google was forced by circumstance to
use it.

------
pedrocr
_> We live in a world where NIST is happy to give us a new hash function every
few years. Maybe it's time we put this level of effort and funding into the
protocols that use these primitives? They certainly seem to need it._

This is a great point. Are there any modern reasonable alternatives to TLS to
use in applications? On the one hand developers are told to not implement
crypto directly and use something like TLS. Yet on the other hand it seems
most TLS implementations suck (don't check the keys for example) and the
standard itself has a bunch of holes.

~~~
tptacek
No. Developers should continue to use TLS.

If you look at the last few years of TLS --- which have been rocky, to be sure
--- you have flaws that are really difficult to exploit and (usually)
straightforward to mitigate. If you look at a representative sample of non-TLS
transport protocols, you get clownish flaws:

* Block ciphers deployed in the default mode (ECB), which allows straightforward byte-at-a-time decryption

* Error-based CBC padding oracles for which off-the-shelf tools will do decryption

* Unauthenticated ciphertext --- not "used a MAC in the wrong order", like Lucky 13 exploits, but "literally no integrity checks at all", so attackers can trivially rewrite packets

* RSA implemented "in the raw" with no formalized padding or PKCS1.5 padding

* Key exchanges with basic number theoretic flaws

* Repeated IVs and nonces that allow whole message decryption by analyzing captures of just a few hundred messages
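
(The last item is worth seeing concretely. A toy sketch: with any stream
cipher, a repeated nonce means a repeated keystream, and XORing two such
ciphertexts cancels the keystream entirely, leaving the XOR of the two
plaintexts for crib-dragging.)

    import os

    keystream = os.urandom(14)   # stands in for a reused nonce's keystream
    c1 = bytes(p ^ k for p, k in zip(b"attack at dawn", keystream))
    c2 = bytes(p ^ k for p, k in zip(b"attack at dusk", keystream))
    # XOR of the ciphertexts equals XOR of the plaintexts; no key needed:
    assert bytes(a ^ b for a, b in zip(c1, c2)) == \
           bytes(a ^ b for a, b in zip(b"attack at dawn", b"attack at dusk"))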

The list goes on and on. Not only that: two of the four recent TLS problems
(BEAST's chained CBC IVs and CRIME's compression side channel) are equally
likely to affect custom cryptography --- they aren't the product of any weird
SSL/TLS requirement. Chained CBC IVs also happened in IPSEC; compressing
before encryption was IIRC an _Applied Cryptography_ recommendation. The only
reason the RC4 bug is unlikely to apply is that nobody outside of TLS server
operators would choose RC4.

To be sure: your best options (PGP and TLS) are creaky and scary-looking. But
they are nowhere near as scary as the "new" cryptosystems people deploy.
What's especially annoying about the new stuff is that it follows a release
cycle that conceals how terrible it is:

* Initial release with great fanfare about the new kinds of applications they'll enable, press coverage

* Security researchers flag unbelievably blatant flaws in crypto constructions

* Blatant flaws are fixed, cryptosystem is rereleased, now with promotional text about the external security testing it has

For a cryptosystem published by someone without a citation record in
cryptography, a basic crypto flaw should be considered disqualifying; it's a
sign that the system was designed without an understanding of how to build
sound crypto. But that's not how things actually work, because everyone wants
to believe that cryptographic protection is the Internet's birthright and that
we're all just a few library calls away from "host-proof" or "anonymous"
communications.

If you're really worried about TLS security but have the flexibility of
specifying arbitrary crypto, why not use a library that does TLS with an AEAD
cipher, like AES-GCM?

~~~
ctz

      why not use a library that does TLS with an AEAD cipher, like AES-GCM?

Some possible reasons:

* lack of confidence in TLS's design and designers (for example, TLS1.2 still allows compression and fails to counsel against its use).

* TLS has far too many options. I want a secure channel. I don't want a secure channel toolkit.

* TLS tends to be paired with a broken and discredited root-of-trust infrastructure (which often gives the misleading impression that TLS itself was broken).

(nb. I don't have any evidence that the AEAD TLS1.2 ciphersuites are broken,
I'm playing devil's advocate here.)

Regarding your 'new cryptosystems' point: I agree, and it's completely and
frustratingly hopeless. But that's why the world needs a decent secure channel
standard with good security bounds, no knobs on the side which break
confidentiality or integrity, and no way to fall back to insecure older
versions.

~~~
wiredfool
Is there a problem with the body content of an http response being compressed,
or is it mainly a header thing?

~~~
Daniel_Newby
There can be if a secret is on the same page as text under the attacker's
control. The attacker can run a hidden JavaScript reload attack on the page,
the fiddle with the text under their control until compression is maximized.
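
A toy version of the effect (illustrative Python; real attacks like CRIME
must average away Huffman-coding noise, but the principle is this):

    import zlib

    secret = b"sessionid=7f3a9c"

    def page_len(attacker_bytes):
        # Attacker-controlled bytes and the secret share one compressed body.
        return len(zlib.compress(attacker_bytes + b"<html>...</html>" + secret))

    print(page_len(b"sessionid=7f3a"))  # right prefix: long LZ77 back-reference,
    print(page_len(b"sessionid=zqxw"))  # typically a few bytes shorter than this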

------
UnoriginalGuy
As a short-term work-around, the client/server could randomly change the
order of the request/response headers, or move the cookie to near the end of
the request/response (where it is harder to recover).

They could also add "invalid" headers of random length to push the cookie
around, making it difficult to find consistently and increasing the number of
requests/responses the attacker would need to sniff in order to break it.
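
Something like this, say (a sketch; the header name and length range are
made up):

    import secrets

    def pad_headers(headers):
        # 1..64 random bytes, hex-encoded, so both the content and the
        # length of the filler vary per request, jittering the cookie's
        # byte offset in the plaintext.
        n = secrets.randbelow(64) + 1
        headers["X-Nonsense-Padding"] = secrets.token_hex(n)
        return headers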

The nice thing about this solution is that it could be done in the browser
(e.g. Chrome) when it is connected to an RC4 site without any involvement of
the server administrator.

It is also backwards compatible.

PS - Yes, I know, STOP USING IT - but in the real world, if you tell people
to stop today, they'll still be using it ten years from now...

~~~
gingerlime
It's a nice idea, but if I understand this correctly[1], the predictable bytes
are very early in the request, as early as the second byte, which is 'E' (for
a request which starts with something like `GET / HTTP/1.1`). So moving the
cookie headers might not make much of a difference.

[1]<http://security.stackexchange.com/a/31873/7306>

EDIT: I think I might be wrong. Trying to read a little more, it seems like
the first few bytes are the easiest to predict and then it gets harder... but
with an HTTP request, the first few bytes are kinda known anyway (`GET /
...`), so this doesn't give much advantage to the attacker. Perhaps
randomizing the position of the cookie header, or perhaps adding more NO-OP
headers could help against this kind of attack after all?

~~~
UnoriginalGuy
I thought that post was saying "right now we can only get the first line, but
as we learn more we expect to get more and more data out of the request
header, including potentially cookies!"

------
rb2k_
> However, recent academic work has uncovered many small but significant
> additional biases running throughout the first 256 bytes of RC4 output.

Didn't we know about the RC4 weaknesses of the first few bytes since WEP?

~~~
caf
That was the first handful of bytes. This is the first 256 bytes.

------
DDub
Does this require the authentication cookie to be constant? If, for example, I
issue a new cookie to the client on every connection, is this mitigated?

~~~
tptacek
That probably does mitigate the attack, with the proviso that a MITM can keep
cookies from rotating by preventing requests from hitting the target.

~~~
jessaustin
If the MITM can do that, it doesn't need to attack cookies, does it? It can
just impersonate the remote site and steal user-entered credentials. Sharp-
eyed users or up-to-date browsers might notice the lack of https on popular
sites, and 2FA would also help, but in general a malicious WAP, for example,
has many options.

Or I could be very wrong about this. Please advise.

~~~
tptacek
No, the MITM can be choosy about what traffic it relays and allow the attack
to run without causing any of the connections to complete. Think network-layer
MITM instead of transport-layer MITM.

