As a short-term work-around, the client/server could randomly change the order of the request/response headers or move the cookie to near the end of the request/response (where it is harder to recover).
They could also add "invalid" headers of random length to push the cookie around, making its position inconsistent and difficult to find. This increases the number of requests/responses that the attacker would need to sniff in order to break it.
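As a rough sketch of the idea, the client could inject a no-op header of random length ahead of the cookie, so the cookie's byte offset differs between otherwise identical requests. The header name `X-Padding` and the length range here are hypothetical choices, not anything a real browser does:

```python
import secrets
import string

def padding_header(min_len=16, max_len=256):
    """Build a no-op header of random length. "X-Padding" is a
    hypothetical header name; any unrecognized header would do."""
    n = min_len + secrets.randbelow(max_len - min_len + 1)
    filler = "".join(secrets.choice(string.ascii_letters) for _ in range(n))
    return f"X-Padding: {filler}"

def build_request(cookie):
    # Insert the padding header before the cookie, so the cookie's
    # position in the encrypted byte stream varies per request.
    return "\r\n".join([
        "GET / HTTP/1.1",
        "Host: example.com",
        padding_header(),
        f"Cookie: {cookie}",
        "",
        "",
    ])
```

Because the per-position keystream biases the attack relies on only line up with a fixed plaintext offset, smearing the cookie across many offsets forces the attacker to collect correspondingly more ciphertexts for each candidate position.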
The nice thing about this solution is that it could be done in the browser (e.g. Chrome) when it is connected to an RC4 site without any involvement of the server administrator.
It is also backwards compatible.
PS - Yes, I know, STOP USING IT - but in the real world, even if you told people today, they'd still be using it ten years from now...
edit: I've just noticed that this is something someone with no experience in crypto would say. Sometimes things actually get worse with randomization: for example, suppose there is a flaw that always allows bytes 160 and 161 to be revealed. If the position of the cookie is randomized, the whole cookie could eventually be revealed instead of just two bytes of it. Before actually implementing this, someone with a few crypto publications should take a look at it ;)
It's a nice idea, but if I understand this correctly, the predictable bytes are very early in the request, as early as the second byte, which is 'E' (for a request which starts with something like `GET / HTTP/1.1`). So moving the cookie headers might not make much of a difference.
EDIT: I think I might be wrong. Trying to read a little more, it seems like the first few bytes are the easiest to predict and then it gets harder... but with an HTTP request, the first few bytes are kinda known anyway (`GET / ...`), so this doesn't give much advantage to the attacker. Perhaps randomizing the position of the cookie header, or perhaps adding more NO-OP headers could help against this kind of attack after all?
If you can modify the browser, modify it to use a ciphersuite that doesn't have these problems!
As a shorthand: workarounds are only fair game if they don't require software updates by Microsoft or Mozilla. So, for instance, having Rails treat session tokens as one-time-use does mitigate this flaw (somewhat) and is fair game. But having Firefox randomize client headers is not useful, compared to getting Firefox to reliably do AES-GCM (which I think Firefox may be close to doing already).
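The "one-time-use session tokens" mitigation can be sketched like this. This is an illustration of the idea, not Rails' actual implementation; the class and method names are made up:

```python
import secrets

class OneTimeSessionStore:
    """Sketch of one-time-use session tokens: each accepted token is
    invalidated and replaced on use, so a token recovered from sniffed
    traffic is only good until the victim's next request."""

    def __init__(self):
        self._sessions = {}  # token -> user

    def issue(self, user):
        token = secrets.token_hex(16)
        self._sessions[token] = user
        return token

    def authenticate(self, token):
        user = self._sessions.pop(token, None)  # single use: consume it
        if user is None:
            return None, None
        return user, self.issue(user)  # rotate: hand back a fresh token
```

The mitigation is partial because the attacker who recovers a token can race the victim to use it first; it narrows the window rather than closing it.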
I've seen you recommend GCM in a couple of places in this topic. I'm not a crypto guy, so I rely on people like you for this stuff.
Other experts I've read (Colin Percival, Thomas Pornin) have mentioned that GCM (and other encrypt-and-MAC) implementations are more likely to have chosen-ciphertext vulnerabilities than CTR mode followed by a MAC.
Can you cite either Colin or Pornin on that? It's easy to find Pornin saying positive things about the AES-GCM TLS ciphersuite.
I don't know what you mean by "chosen-ciphertext vulnerabilities". Authenticated encryption is inherently less vulnerable to chosen-ciphertext, because the ciphertext is integrity-checked. You can't choose an arbitrary new ciphertext, because it won't pass the MAC. In fact, it's the opposite construction --- MAC-then-encrypt --- that causes chosen-ciphertext flaws; a MAC-then-encrypt construct is what got us Lucky 13.
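To make the ordering point concrete, here is a minimal encrypt-then-MAC sketch. The "cipher" is a toy SHA-256-based keystream standing in for a real stream cipher (it is NOT secure; it just keeps the example stdlib-only) - the point is that the receiver verifies the MAC over the ciphertext before any decryption happens, so a tampered ciphertext is rejected outright:

```python
import hmac
import hashlib

def toy_encrypt(key, data):
    # Placeholder keystream cipher (NOT secure; stands in for AES-CTR).
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

def seal(enc_key, mac_key, plaintext):
    ct = toy_encrypt(enc_key, plaintext)
    # Encrypt-then-MAC: the tag covers the ciphertext, not the plaintext.
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag

def open_sealed(enc_key, mac_key, sealed):
    ct, tag = sealed[:-32], sealed[-32:]
    # Integrity check happens BEFORE decryption: a modified ciphertext
    # never reaches the cipher, which is what shuts down CCA tampering.
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")
    return toy_encrypt(enc_key, ct)  # XOR keystream: decrypt == encrypt
```

Under MAC-then-encrypt the receiver has to decrypt (and, in CBC modes, unpad) before it can check anything, and that decrypt-before-verify step is exactly where the padding-oracle and Lucky 13 style leaks live.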
I was wrong on Pornin. I was remembering a crypto.stackexchange question in which Pornin participated, but he did not say anything specifically about GCM or encrypt-and-mac modes. It does look like he's referencing another answer that is no longer there. The quote I remembered was from Vennard (below) and this answer was marked as accepted by Pornin.
Under the encrypt-and-MAC method:
> No integrity on the ciphertext again, since the MAC is taken against the plaintext. This opens the door to some chosen-ciphertext attacks on the cipher, as shown in section 4 of Breaking and provably repairing the SSH authenticated encryption scheme: A case study of the Encode-then-Encrypt-and-MAC paradigm.
This may not apply specifically to GCM; I'm not sure if the MAC validates plaintext or ciphertext.
GCM authenticates the ciphertext, not the plaintext.
Colin doesn't marshal a specific argument against GCM here, but rather a philosophical one. And his argument is wrong: if you look at the histories of SSL/TLS, SSH, and Tor, you find that the stuff that goes wrong is in code that tries to do simple stuff like combine a block cipher with a hash MAC (which is exactly what he's arguing for here).
GCM, on the other hand, is a NIST standard; you don't have the degrees of freedom with how you e.g. handle padding, or nonce generation, or when you apply a MAC that you do with bespoke crypto.
Obviously, I agree with Colin that generalist developers shouldn't be writing their own AES-GCM libraries. Where Colin and I differ is that I think generalist developers shouldn't be writing crypto code at all. Leave that stuff to the Adam Langleys of the world.
You should use a separate key for the encrypt phase and MAC phase. You must MAC the ciphertext, the IV, the authenticated data, and possibly a specifier for the encryption algorithm. You must also construct the MAC'd string in a way that prevents tampering with field boundaries.
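A sketch of that last rule, unambiguous field encoding: length-prefix every field before MACing, so an attacker can't shift bytes across a field boundary and get the same MAC input. The field set and 4-byte length prefix here are one reasonable encoding, not a standard:

```python
import hmac
import hashlib
import struct

def mac_input(alg_id, iv, aad, ciphertext):
    # Length-prefix each field (4-byte big-endian) so no two distinct
    # (alg_id, iv, aad, ciphertext) tuples serialize to the same bytes.
    # Naive concatenation would let b"ab" + b"c" collide with b"a" + b"bc".
    out = b""
    for field in (alg_id, iv, aad, ciphertext):
        out += struct.pack(">I", len(field)) + field
    return out

def tag(mac_key, alg_id, iv, aad, ciphertext):
    # mac_key must be a separate key, independent of the encryption key.
    return hmac.new(mac_key, mac_input(alg_id, iv, aad, ciphertext),
                    hashlib.sha256).digest()
```

Including the algorithm specifier in the MAC input blocks downgrade-style tampering where the attacker replays a valid tag against a weaker cipher.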