I don't know how you integrate AES-GCM with TLS, but I have to say:
1. Secure AES-GCM uses 96-bit nonces. That's 12 bytes, not the 8 bytes mentioned in the article.
2. A nonce is a nonce. It needn't be chosen at random (unlike random IVs). As long as nonces are never reused, GCM remains secure.
3. I don't believe implementing a secure random number generator is more efficient than maintaining an incremental counter.
Edited for typos
That's correct. However, TLS takes four bytes from the handshake key material and uses them as the first four bytes of the nonce. The remaining 8 bytes are all that vary over the lifetime of the connection.
But even without those 4 bytes, a 64-bit nonce seems sufficient to me, as long as it's not chosen at random.
For comparison, if the nonce is chosen at random, the security level drops to roughly 2^32: by the birthday bound, a collision among random 64-bit nonces becomes likely after about 2^32 messages (supposing the 4 bytes derived from the key material remain unchanged).
I agree that a counter is perfectly safe.
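To make the counter-vs-random point concrete, here is a minimal sketch of the TLS 1.2 AES-GCM nonce layout described above: a 4-byte salt from the handshake key material plus an 8-byte per-record counter. The function name and variable names are illustrative, not from any real TLS stack.

```python
import os

# Illustrative sketch (not real TLS code): 4-byte salt fixed for the
# connection, 8-byte counter varying per record.
salt = os.urandom(4)

def gcm_nonce(seq_num: int) -> bytes:
    """Build the 12-byte nonce for record number seq_num."""
    if seq_num >= 2**64:
        # A counter makes exhaustion explicit; you rekey instead of colliding.
        raise OverflowError("64-bit counter exhausted; rekey required")
    return salt + seq_num.to_bytes(8, "big")

n0, n1 = gcm_nonce(0), gcm_nonce(1)
assert len(n0) == 12 and n0 != n1
```

A counter guarantees uniqueness for 2^64 records, whereas random 8-byte values are expected to collide after about 2^32 records.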
In which case AES-CBC is almost certainly preferable to RC4, even with its flaws.
The nature of the RC4 flaw isn't such that attackers grind on a single ciphertext with fast compute. The problem is rather a series of statistical biases that recur at intervals in the keystream. The time you spend attacking RC4 isn't due to compute, but rather due to the number of samples you need to collect to leverage the biases to recover plaintext. You can imagine improvements on the attack that would require fewer samples, but probably not improvements that would get you down to double-digit samples.
There may indeed be a lot of room for attacks on RC4 to improve, and improve in ways that outpace (nonexistent) countermeasures. I think RC4 is scarier than CBC padding timing. But a real-time attack on RC4 would seem to imply a radically different attack on RC4 than any the literature has hinted at.
The AES-CBC vulnerabilities are specific to TLS, so as long as you don't repeat the same mistakes TLS made, you won't have the same vulnerabilities. Specifically, use encrypt-then-MAC (instead of MAC-then-encrypt) to avoid padding-oracle attacks like Lucky 13, and choose a fresh IV for each message instead of using the previous message's last ciphertext block (to avoid BEAST). Or better yet, use a high-level crypto library that doesn't make you worry about this stuff.
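A minimal sketch of that encrypt-then-MAC composition with a fresh random IV per message. Python's standard library has no AES, so a toy HMAC-based keystream stands in for AES-CBC here; the point is the composition order (encrypt, then MAC over IV and ciphertext, and verify the MAC before decrypting), not the cipher itself. All names are illustrative.

```python
import hmac
import os
from hashlib import sha256

def _keystream(key: bytes, iv: bytes, n: int) -> bytes:
    # Toy keystream standing in for a real cipher -- NOT secure AES.
    out, counter = b"", 0
    while len(out) < n:
        out += hmac.new(key, iv + counter.to_bytes(4, "big"), sha256).digest()
        counter += 1
    return out[:n]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    iv = os.urandom(16)  # fresh random IV, never the previous ciphertext block
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(enc_key, iv, len(plaintext))))
    tag = hmac.new(mac_key, iv + ct, sha256).digest()  # MAC covers IV + ciphertext
    return iv + ct + tag

def open_sealed(enc_key: bytes, mac_key: bytes, sealed: bytes) -> bytes:
    iv, ct, tag = sealed[:16], sealed[16:-32], sealed[-32:]
    expect = hmac.new(mac_key, iv + ct, sha256).digest()
    if not hmac.compare_digest(tag, expect):
        # Reject before touching the ciphertext: no padding oracle to leak.
        raise ValueError("bad tag")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, iv, len(ct))))
```

Because the MAC is checked first and in constant time, a tampered message is rejected before any decryption happens, which is what closes the Lucky 13 class of oracles.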
In reality, you can safely look at TLS as a "best case" scenario; the number of vulnerabilities that commonly arise from homegrown crypto ("using AES in a self-contained program") are a large superset of the ones that have arisen in TLS.
I know DHE is slow, but what really sticks out like a sore thumb is DSS. Is it known to be broken?
- DHE: the way ephemeral DL-DH works in TLS is unfortunately misdesigned. The client first offers DHE-* ciphersuites, then the server sends a DL group and a public key in that group. The problem is that at this point the client cannot:
* reject that group as having too small a modulus to possibly meet the client's security requirements,
* reject that group because it doesn't support one that big (Java SSL stack does this -- doesn't support >1024-bit modulus DH -- and is therefore broken with some more aggressive servers when they select DHE ciphersuites),
* check that the subgroup is of a suitable size to meet the client's security requirements (I'm not sure if SSL predated the Lim–Lee paper here, but certainly later SSL standards didn't bother to fix it)
- DSS: fine, but the security-performance profile is similar to RSA's, with the exception that verification in DSA is much slower than in RSA. That's why it's mostly overlooked in favour of RSA.
- AES: as mentioned in the article.
So, yes. It's about as good as other SSL-era ciphersuites; which is to say: slow, badly designed and mostly broken :)
With the caveat that DSA fails catastrophically if the random number generator is bad: if you reuse the same per-signature nonce (the random k value) for two different signatures, an attacker can recover the private key. Given how subtle RNG bugs can be, it's probably imprudent to choose DSA over RSA.
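The nonce-reuse break is simple enough to show end to end. This is a toy DSA over a tiny group (q = 101, p = 607) purely to illustrate the algebra; all the concrete numbers here are made up for the example, and a real attack works the same way over full-size parameters.

```python
# Toy DSA parameters: q divides p - 1, g generates the order-q subgroup.
p, q = 607, 101
g = pow(2, (p - 1) // q, p)

x = 57   # private key (made-up value)
k = 23   # per-signature nonce -- reused across signatures, which is the bug

def sign(h: int) -> tuple[int, int]:
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + x * r) % q
    return r, s

# Hashes of two different messages, reduced mod q (illustrative values).
h1, h2 = 10, 77
(r, s1), (_, s2) = sign(h1), sign(h2)

# The attacker sees both signatures and both hashes. Same k means same r,
# so k and then x fall out of two linear equations mod q:
k_rec = (h1 - h2) * pow(s1 - s2, -1, q) % q
x_rec = (s1 * k_rec - h1) * pow(r, -1, q) % q
assert (k_rec, x_rec) == (23, 57)   # nonce and private key both recovered
```

This is exactly the failure mode behind the 2010 PS3 signing-key extraction: a repeated k turns two public signatures into the private key.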