
As a short-term workaround, the client or server could randomly change the order of the request/response headers, or move the cookie near the end of the request/response (where it is harder to recover).

They could also add "invalid" headers of random length to push the cookie around, making it difficult to find consistently and increasing the number of requests/responses the attacker would need to sniff in order to break it.
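A minimal sketch of that idea, assuming the browser builds the header block itself (the `pad_headers`/`serialize` helpers and the `X-Padding-*` names are illustrative, not anything a real browser does):

```python
import random
import string

def pad_headers(headers, min_pad=1, max_pad=4, max_len=64):
    """Return a copy of `headers` (a list of (name, value) tuples) with
    random-length dummy headers mixed in and the order shuffled, so the
    byte offset of any one header (e.g. Cookie) varies per request."""
    padded = list(headers)
    for i in range(random.randint(min_pad, max_pad)):
        value = ''.join(random.choices(string.ascii_letters,
                                       k=random.randint(1, max_len)))
        padded.append(("X-Padding-%d" % i, value))
    random.shuffle(padded)
    return padded

def serialize(request_line, headers):
    """Flatten a request line plus headers into wire format."""
    lines = [request_line] + ["%s: %s" % (n, v) for n, v in headers]
    return "\r\n".join(lines) + "\r\n\r\n"
```

Serializing the same logical request repeatedly then yields a different byte offset for the cookie each time, which is the property the workaround relies on.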

The nice thing about this solution is that it could be done in the browser (e.g. Chrome) when it is connected to an RC4 site without any involvement of the server administrator.

It is also backwards compatible.

PS - Yes, I know, STOP USING IT - but in the real world if you told people today then they'll still be using it ten years from now...

This seems like an excellent solution.

edit: I've just noticed that this is something someone with no experience in crypto would say. Sometimes things actually get worse with randomization, for example when there is a flaw that always allows bytes 160 and 161 to be revealed. If the position of the cookie is randomized, over many requests every byte of the cookie will eventually land in those positions and the whole cookie gets revealed, instead of possibly just two bytes. Before actually implementing this, someone with a few crypto publications should take a look at it ;)

It's a nice idea, but if I understand this correctly[1], the predictable bytes are very early in the request, as early as the second byte, which is 'E' (for a request which starts with something like `GET / HTTP/1.1`). So moving the cookie headers might not make much of a difference.


EDIT: I think I might be wrong. Trying to read a little more, it seems like the first few bytes are the easiest to predict and then it gets harder... but with an HTTP request, the first few bytes are kinda known anyway (`GET / ...`), so this doesn't give much advantage to the attacker. Perhaps randomizing the position of the cookie header, or perhaps adding more NO-OP headers could help against this kind of attack after all?

I thought that post was saying "right now we can only get the first line, but as we learn more we expect to get more and more data out of the request header, including potentially cookies!"

The problem is the client's headers, not the server's.

I addressed the client's headers. Modifying the browser alters the client's headers.

If you can modify the browser, modify it to use a ciphersuite that doesn't have these problems!

As a shorthand: workarounds are only fair game if they don't require software updates by Microsoft or Mozilla. So, for instance, having Rails treat session tokens as one-time-use does mitigate this flaw (somewhat) and is fair game. But having Firefox randomize client headers is not useful, compared to getting Firefox to reliably do AES-GCM (which I think Firefox may be close to doing already).
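The one-time-token idea mentioned above can be sketched like this (a toy in-memory store for illustration; Rails' actual session handling works differently):

```python
import secrets

class OneTimeSessions:
    """Toy store where each token is valid for exactly one request and a
    fresh token is issued on every use, limiting how many ciphertexts of
    the same secret an eavesdropper can collect."""
    def __init__(self):
        self._sessions = {}          # token -> user id

    def issue(self, user):
        token = secrets.token_hex(16)
        self._sessions[token] = user
        return token

    def consume(self, token):
        # Invalidate on first use; hand back a rotated replacement token.
        user = self._sessions.pop(token, None)
        if user is None:
            return None, None        # unknown or replayed token
        return user, self.issue(user)
```

The mitigation is only partial: the attacker still sees each token encrypted once, but never the same token across the millions of requests the RC4 biases require.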

I've seen you recommend GCM in a couple of places in this topic. I'm not a crypto guy, so I rely on people like you for this stuff.

Other experts I've read (Colin Percival, Thomas Pornin) have mentioned that GCM (and other encrypt-and-MAC) implementations are more likely to have chosen-ciphertext vulnerabilities than CTR-mode-then-MAC.

Can you cite either Colin or Pornin on that? It's easy to find Pornin saying positive things about the AES-GCM TLS ciphersuite.

I don't know what you mean by "chosen-ciphertext vulnerabilities". Authenticated encryption is inherently less vulnerable to chosen-ciphertext, because the ciphertext is integrity-checked. You can't choose an arbitrary new ciphertext, because it won't pass the MAC. In fact, it's the opposite construction --- MAC-then-encrypt --- that causes chosen-ciphertext flaws; a MAC-then-encrypt construct is what got us Lucky 13.


I was wrong on Pornin. I was remembering a crypto.stackexchange question in which Pornin participated, but he did not say anything specifically about GCM or encrypt-and-mac modes. It does look like he's referencing another answer that is no longer there. The quote I remembered was from Vennard (below) and this answer was marked as accepted by Pornin.


Under the encrypt-and-MAC method:

> No integrity on the ciphertext again, since the MAC is taken against the plaintext. This opens the door to some chosen-ciphertext attacks on the cipher, as shown in section 4 of Breaking and provably repairing the SSH authenticated encryption scheme: A case study of the Encode-then-Encrypt-and-MAC paradigm. This may not apply specifically to GCM; I'm not sure if the MAC validates plaintext or ciphertext.



> Why use a composition of encryption and MAC instead of a single primitive which achieves both? Because people are very good at writing bad code.


GCM authenticates the ciphertext, not the plaintext.

Colin doesn't marshal a specific argument against GCM here, but rather a philosophical one. And his argument is wrong: if you look at the histories of SSL/TLS, SSH, and Tor, you find that the stuff that goes wrong is in code that tries to do simple stuff like combine a block cipher with a hash MAC (which is exactly what he's arguing for here).

GCM, on the other hand, is a NIST standard; you don't have the degrees of freedom with how you e.g. handle padding, or nonce generation, or when you apply a MAC that you do with bespoke crypto.

Obviously, I agree with Colin that generalist developers shouldn't be writing their own AES-GCM libraries. Where Colin and I differ is that I think generalist developers shouldn't be writing crypto code at all. Leave that stuff to the Adam Langleys of the world.

I think Thomas Pornin's [reply](http://security.stackexchange.com/questions/20464/when-authe...) (and our subsequent back-and-forth) to a question of mine on the security StackExchange highlights exactly how hard it is to Encrypt-Then-MAC yourself properly.

You should use a separate key for the encrypt phase and MAC phase. You must MAC the ciphertext, the IV, the authenticated data, and possibly a specifier for the encryption algorithm. You must also construct the MAC'd string in a way that prevents tampering with field boundaries.
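Those requirements can be made concrete in a short sketch: the MAC phase of an Encrypt-then-MAC construction using HMAC-SHA256 over length-prefixed fields (the field layout and function names here are illustrative, not a standardized encoding):

```python
import hmac
import hashlib
import struct

def mac_input(alg_id, iv, aad, ciphertext):
    """Encode every field with an explicit length prefix, so moving
    bytes across a field boundary changes the encoded string (this is
    what prevents field-boundary tampering)."""
    out = b""
    for field in (alg_id, iv, aad, ciphertext):
        out += struct.pack(">Q", len(field)) + field
    return out

def tag(mac_key, alg_id, iv, aad, ciphertext):
    # mac_key must be independent of the encryption key.
    return hmac.new(mac_key, mac_input(alg_id, iv, aad, ciphertext),
                    hashlib.sha256).digest()

def verify(mac_key, alg_id, iv, aad, ciphertext, tag_):
    # Constant-time comparison, checked before any decryption happens.
    expected = tag(mac_key, alg_id, iv, aad, ciphertext)
    return hmac.compare_digest(expected, tag_)
```

Note that shifting even one byte between adjacent fields (say, from the associated data into the ciphertext) produces a different MAC input and therefore a different tag.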

Just to be clear, these are all details GCM takes care of.

Yes, absolutely. Use GCM (or XSalsa20Poly1305) whenever possible.

And if it breaks connections to servers set up to use RC4 specifically?

Sure, the browser should stop "suggesting" RC4; that is the browser's right. But if the server decides to use it anyway, then it gets used.

Also, you kind of break your own rule. If we cannot suggest things that Microsoft or Mozilla have to do, then we cannot suggest they alter their ciphersuites either...

The point of my rule is that if you're going to push new client code, you push a real fix, not a workaround.

Why not push both?

Why can't browsers "suggest" not using RC4 any more, and, when RC4 gets used anyway (as it almost certainly will), apply the workaround?

Then it seems like the right solution is to push TLS 1.2 + AES-GCM along with fixes for Lucky 13, and use CBC for everything before 1.2 and GCM for everything after it.
