A Roster of TLS Cipher Suites Weaknesses (googleonlinesecurity.blogspot.com)
43 points by dpifke on Nov 14, 2013 | 19 comments



> The second nit with AES-GCM is that, as integrated in TLS, implementations are free to use a random nonce value. However, the size of this nonce (8 bytes) is too small to safely support using this mode. Implementations that do so are at risk of a catastrophic nonce reuse after sending on the order of a terabyte of data on a single connection. This issue can be resolved by using a counter for the nonce, but using random nonces is the most common practice at this time.

I don't know how AES-GCM is integrated into TLS, but I have to say:

1. Secure AES-GCM uses a 96-bit nonce. That's 12 bytes, not the 8 bytes mentioned in the article.

2. A nonce is a nonce. It shouldn't be chosen at random (the way random IVs are). As long as nonces are never reused, GCM should be secure.

3. I don't believe calling a secure random number generator is any more efficient than maintaining an incrementing counter; a sketch of the counter approach is below.

Edited for typos
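
A minimal sketch of the counter approach from point 3, using the pyca/cryptography library (the SendContext class and its layout are my own illustration, not from any standard):

    # Counter-based (deterministic) nonces for AES-GCM: no RNG involved.
    import struct
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class SendContext:
        def __init__(self, key: bytes):
            self.aead = AESGCM(key)   # key must be 16, 24, or 32 bytes
            self.counter = 0          # strictly increasing, never reused

        def seal(self, plaintext: bytes, aad: bytes = b"") -> bytes:
            # 96-bit nonce: 4 zero bytes + 64-bit big-endian counter.
            nonce = b"\x00" * 4 + struct.pack(">Q", self.counter)
            self.counter += 1
            return nonce + self.aead.encrypt(nonce, plaintext, aad)

    key = AESGCM.generate_key(bit_length=128)
    sealed = SendContext(key).seal(b"hello")

Uniqueness then holds for up to 2^64 messages under one key, with no RNG in the hot path.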


> The secure AES-GCM supports 96-bit nonces

That's correct. However, TLS takes four bytes from the handshake key material and uses them as the first four bytes of the nonce. The remaining 8 bytes are all that vary over the lifetime of the connection.
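
Concretely, the RFC 5288 nonce layout looks like this (a sketch; the 4-byte salt comes from the key block, and the 8-byte explicit part is sent in the clear with each record):

    import struct

    def tls12_gcm_nonce(salt: bytes, explicit: int) -> bytes:
        # salt: 4 bytes fixed per connection (from the TLS key block)
        # explicit: 8 bytes that vary per record (may be the sequence number)
        assert len(salt) == 4
        return salt + struct.pack(">Q", explicit)  # 12 bytes total

Implementations that instead draw those 8 explicit bytes at random are the ones the article is warning about.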


Do you know why TLS does that?

But even without those 4 bytes, a 64-bit nonce seems like enough to me, as long as it isn't chosen at random.

For comparison, if the nonce is chosen randomly, the security level is only about 2^32 messages: by the birthday bound, a collision among random 64-bit values becomes likely after roughly 2^32 samples (supposing the 4 bytes derived from the key material remain unchanged).
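
A rough back-of-the-envelope for that, using the standard birthday approximation:

    import math

    def collision_probability(n: int, space_bits: int = 64) -> float:
        # P(at least one collision) ~ 1 - exp(-n(n-1) / 2^(bits+1))
        return 1.0 - math.exp(-n * (n - 1) / (2.0 * 2**space_bits))

    print(collision_probability(2**28))  # ~0.002 after ~268M records
    print(collision_probability(2**32))  # ~0.39  after ~4.3B records

And a single colliding nonce under GCM is already catastrophic (keystream reuse, plus loss of authentication).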


I believe it was done so that AES-GCM could be implemented in a FIPS module and would not need to depend on the uniqueness of provided nonces. Either that or some standard said that nonces must be unique. (I wasn't around for the discussion.)

I agree that a counter is perfectly safe.


There's a table on Wikipedia (though it obviously lacks the detail in the post):

https://en.wikipedia.org/wiki/Transport_Layer_Security#Ciphe...


For even more detail, I like OWASP's cheat sheet: https://www.owasp.org/index.php/Transport_Layer_Protection_C...


It's worth noting that Jacob Appelbaum, who has worked on the Snowden documents alongside Laura Poitras, claims that TLS has been broken in real time by the NSA: https://twitter.com/ioerror/status/398059565947699200

In which case AES-CBC is almost certainly preferable to RC4, even with its flaws.


Never say never, but this seems unlikely.

The nature of the RC4 flaw isn't such that attackers grind on a single ciphertext with fast compute. The problem is rather a series of statistical biases that recur at intervals in the keystream. The time you spend attacking RC4 isn't due to compute, but rather due to the number of samples you need to collect to leverage the biases to recover plaintext. You can imagine improvements on the attack that would require fewer samples, but probably not improvements that would get you down to double-digit samples.
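
For a feel of what those biases look like, here's a quick empirical sketch of the best-known one (Mantin-Shamir: the second keystream byte is 0x00 with probability about 1/128, double what it should be):

    import os

    def rc4_keystream(key: bytes, n: int) -> list:
        S = list(range(256))
        j = 0
        for i in range(256):  # key scheduling (KSA)
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        i = j = 0
        out = []
        for _ in range(n):    # keystream generation (PRGA)
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return out

    trials = 100_000
    zeros = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
    print(zeros / trials)  # ~0.0078 (~1/128) vs. the unbiased 1/256 ~ 0.0039

Attacks like AlFardan et al.'s work by collecting millions of ciphertexts of the same plaintext and letting biases like this one vote; hence "samples", not "compute".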

There may indeed be a lot of room for attacks on RC4 to improve, and improve in ways that outpace (nonexistent) countermeasures. I think RC4 is scarier than CBC padding timing. But a real-time attack on RC4 would seem to imply a radically different attack on RC4 than any the literature has hinted at.


How related are these encryption vulnerabilities to encryption we may do in our own software (stuff that doesn't necessarily depend on SSL connections)? As in, is AES-CBC still okay to use if I'm using it correctly in a self-contained program (i.e., it isn't going over the net and using AES-CBC to encrypt communication between a server and the program)?


The RC4 vulnerability is in the algorithm, not in TLS. Do not use RC4 in your own software. (If you absolutely must, you can avoid this vulnerability by using a variant of RC4 that discards the first several thousand bytes of keystream, but please just use something else.)

The AES-CBC vulnerabilities are specific to TLS, so as long as you don't repeat the same mistakes TLS made, you won't have the same vulnerabilities. Specifically, encrypt-then-MAC (instead of MAC-then-encrypt) to avoid padding oracle attacks like Lucky 13, and actually choose new IVs for each message instead of using the previous message's last ciphertext block (to avoid BEAST). Or better yet, use a high-level crypto library that doesn't make you worry about this stuff.
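
A minimal sketch of that advice (encrypt-then-MAC over AES-CBC with a fresh IV per message, using the pyca/cryptography library; the key split and framing are illustrative):

    import os, hmac, hashlib
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
        iv = os.urandom(16)  # fresh random IV per message (anti-BEAST)
        padder = padding.PKCS7(128).padder()
        padded = padder.update(plaintext) + padder.finalize()
        enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
        ct = enc.update(padded) + enc.finalize()
        tag = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
        return iv + ct + tag  # MAC covers IV and ciphertext

    def open_(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
        iv, ct, tag = blob[:16], blob[16:-32], blob[-32:]
        expect = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("bad MAC")  # reject before touching the padding
        dec = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).decryptor()
        padded = dec.update(ct) + dec.finalize()
        unpadder = padding.PKCS7(128).unpadder()
        return unpadder.update(padded) + unpadder.finalize()

Because the MAC is checked before anything is decrypted, bad padding is never observable to an attacker, which is the property MAC-then-encrypt designs like TLS's struggle to provide.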


Yeah, the one I'm using does encrypt-then-MAC for AES-CBC, so I guess I'm good on that front.


They are highly related. For instance, if you are using AES-CBC in your own program, but are not composing it with a MAC properly, you too are likely vulnerable to a padding oracle attack, and that attack is likely to be much easier to execute than Lucky 13.

In reality, you can safely look at TLS as a "best case" scenario; the vulnerabilities that commonly arise in homegrown crypto ("using AES in a self-contained program") are a large superset of the ones that have arisen in TLS.
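
To make the padding-oracle point concrete, here's a toy sketch: a MAC-less CBC "server" that leaks padding validity, and an attacker recovering the last plaintext byte of a block from that single bit (the byte-at-a-time extension is left out for brevity):

    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = os.urandom(16)

    def oracle(iv_and_ct: bytes) -> bool:
        # Decrypts and reveals only whether PKCS7 padding was valid.
        iv, ct = iv_and_ct[:16], iv_and_ct[16:]
        dec = Cipher(algorithms.AES(KEY), modes.CBC(iv)).decryptor()
        padded = dec.update(ct) + dec.finalize()
        try:
            unp = padding.PKCS7(128).unpadder()
            unp.update(padded) + unp.finalize()
            return True
        except ValueError:
            return False

    def recover_last_byte(prev: bytes, block: bytes) -> int:
        for guess in range(256):
            # force the decrypted last byte to 0x01 when the guess is right
            forged = prev[:-1] + bytes([prev[-1] ^ guess ^ 0x01])
            if oracle(forged + block):  # (rare longer-padding false hits ignored)
                return guess
        raise RuntimeError("no candidate found")

    iv = os.urandom(16)
    pad = padding.PKCS7(128).padder()
    enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
    ct = enc.update(pad.update(b"attack at dawn!!") + pad.finalize()) + enc.finalize()
    print(chr(recover_last_byte(iv, ct[:16])))  # '!'

Lucky 13 is the same idea, except the "oracle" is a tiny timing difference instead of an explicit error.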


When I was looking through a list of browser-supported PFS TLS suites, I came across this one, which is apparently supported across versions of IE that otherwise don't support PFS:

DHE-DSS-AES256-SHA

I know DHE is slow, but what really sticks out like a sore thumb is DSS. Is it known to be broken?


That ciphersuite has the following problems:

- DHE: the way ephemeral DL-DH works in TLS is unfortunately misdesigned. The client first offers DHE-* ciphersuites, then the server sends a DL group and public key in that group. The problem arises because now the client cannot:

* reject that group as having too small a modulus to possibly meet the client's security requirements,

* reject that group because it doesn't support one that big (the Java SSL stack does this -- it doesn't support DH moduli over 1024 bits -- and is therefore broken against more aggressive servers when they select DHE ciphersuites),

* check that the subgroup is of a suitable size to meet the client's security requirements (I'm not sure if SSL predated the Lim-Lee paper here, but certainly later SSL standards didn't bother to fix it).

- DSS: fine, but the security-performance profile is similar to RSA's, with the exception that DSA verification is much slower than RSA verification. That's the reason it's mostly overlooked in favour of RSA.

- AES: as mentioned in the article.

So, yes. It's about as good as other SSL-era ciphersuites, which is to say: slow, badly designed, and mostly broken :)


> DSS: fine

With the caveat that DSA fails exceptionally catastrophically if the random number generator is bad: if you reuse the same random number for two different signatures, an attacker can recover the private key. Given how subtle RNG bugs can be, it's probably imprudent to choose it over RSA.
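
The recovery really is just algebra: two signatures with the same k give two linear equations in the two unknowns k and x. A self-checking sketch (toy modulus; real DSA uses a 160/224/256-bit prime q):

    import secrets

    q = 2**127 - 1  # a convenient Mersenne prime standing in for DSA's q

    def recover_private_key(r, s1, h1, s2, h2):
        # s = k^-1 * (h + x*r) mod q, so with a shared k:
        k = (h1 - h2) * pow(s1 - s2, -1, q) % q
        return (s1 * k - h1) * pow(r, -1, q) % q

    # simulate the failure: one k, one r, two message hashes
    x, k, r = (secrets.randbelow(q) for _ in range(3))
    h1, h2 = secrets.randbelow(q), secrets.randbelow(q)
    s1 = pow(k, -1, q) * (h1 + x * r) % q
    s2 = pow(k, -1, q) * (h2 + x * r) % q
    assert recover_private_key(r, s1, h1, s2, h2) == x

The same algebra is how the PS3's ECDSA signing key was extracted.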


That's true of ECDSA as well, and, in a larger sense, true of RSA -- the "Minding Your P's and Q's" research seemed to boil down to an RNG weakness. It is also, for what it's worth, largely true of all the modern AEAD modes.


More importantly, I think DSS requires DSA certificates. The AES-GCM cipher suites in current versions of Windows have the same problem.


Nit: "Paterson", not "Peterson".


Doh! Thanks. I'll ask PR to fix that.



