I don't know about the mod-p analysis, but that would be an... idiosyncratic... conclusion to draw about modern elliptic curve cryptography. The trend seems very much in the opposite direction, towards standardization and reliance on a few very carefully designed curves (optimized for performance on modern CPUs, and to avoid implementation pitfalls that give rise to vulnerabilities with older curves).
In fact, drawing any kind of conclusion about the diversity of fields in use seems a little strange. Yes, if you're going to use a 1024-bit prime group, you definitely want to be using one very few others use. But just don't use 1024-bit prime groups. It doesn't much matter how many people use your (properly selected) 2048-bit group; no amount of precomputation puts it in reach of attackers.
That was the subject of djb's recent talk. He brings up the very good point that we don't know that a given curve wasn't designed to have some weakness that its creator has kept hidden. This includes the case where a "random", "not biased" generation method is used, which only moves the attack point back one step.
Best practices change over time, but old literature, and especially old online resources, have lots of inertia. We can call it the stagnating force of cargo culting, or the dark side of PageRank. The literature I remember reading clearly stated that in DH the security lies in the data used for key exchange, not in the parameters. The use of well-known parameters is in fact encouraged. You state the same:
> Yes, if you're going to use a 1024-bit prime group, you definitely want to be using one very few others use. [...] It doesn't much matter how many people use your (properly selected) 2048-bit group; no amount of precomputation puts it in reach of attackers.
Using standard, known-secure parameters is clearly a winner here. So: the best practice is to use provided DH parameters. What do people deploying applied cryptography do? They use the parameters available from their chosen crypto library. Parameters that were set as defaults years earlier.
Around the year 2000, it was computationally (relatively) expensive to generate your own DH group. And, because using your own group makes it harder to bootstrap trust, it would have added complexity to an already complex mechanism. Hence, the best practice of using a known DH group made even more sense.
The fact that the Logjam mitigation strategy includes the recommendation to generate your own DH group flies in the face of established DH best practices. (Can we just assume that modern crypto libraries will catch a faulty RNG and refuse to expose clearly vulnerable parameters? For the sake of brevity, at least?) Those deploying applied crypto in the field are now faced with conflicting information. The established and easily discovered (PageRank[tm]) literature states that the best practice is to use known DH groups. The same literature also states that for compatibility reasons, using >1024-bit groups is not recommended. Now contrast this with the Logjam paper: those who require compatibility suddenly need their own privately generated DH parameters. So suddenly the best practice is to ignore established best practice?
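For what it's worth, generating a private group is a one-time, offline operation these days. Here's a minimal sketch with the pyca/cryptography package; the 2048-bit size, generator 2, and file name are my choices, not anything the Logjam authors mandate:

```python
# Sketch: generating a private DH group offline, once, at deploy time.
# 2048 bits and generator 2 are assumptions of mine, not mandated values.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import dh

# Safe-prime generation is slow (seconds to minutes); do it once,
# not per connection.
parameters = dh.generate_parameters(generator=2, key_size=2048)

# Serialize to PEM so the server can load the same group at startup.
pem = parameters.parameter_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.ParameterFormat.PKCS3,
)
with open("dhparams.pem", "wb") as f:
    f.write(pem)
```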
That's a nasty conflict.
Of course the sane and secure route was to simply make sure all servers were using ECDHE. But if you need to serve clients that are using ancient browsers, you probably still have to accept the known 1024-bit DH groups too.
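As a rough sketch of what "make sure servers use ECDHE" means in practice, using Python's standard ssl module (the cipher string and file paths here are assumptions of mine):

```python
# Sketch: restricting a server to ECDHE key exchange with Python's
# standard ssl module. The cipher string and paths are my assumptions.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")  # hypothetical paths

# Only accept ECDHE suites, so every handshake gets forward secrecy
# without touching mod-p DH groups at all.
ctx.set_ciphers("ECDHE+AESGCM")
```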
The client does not care whether the 1024-bit DH prime is common or not, so even if you have to use 1024-bit DH params, make sure they're new, so that the NSA probably can't crack them.
It's a good idea not to use 1024-bit params. That's the change that's needed.
Attempting to generate params (or RSA keys or whatever) on the fly just exposes you to another class of bugs.
I feel like I'm misinterpreting what he meant, but can't see what other point he could be making.
He's correct that if people generated their own ECC curves instead of using standardized curves, then standardizing maliciously chosen curves would cease to be an attack vector.
That doesn't of itself imply that the pros of standardizing curves do not outweigh the cons, but it is a con of standardizing curves.
Do SChannel and other SSL implementations use static DH parameters by default, like OpenSSL does, or do they generate them dynamically? And do they default to 1024 bits, or are they smarter about it and default to match the RSA key size?
Many other nations have the financial and technical ability to perform the same attacks, and the inclination to do so. And soon, other entities will, too.
My math fu is not up to this at the moment.
But maybe the right lesson to draw is that mod-p groups and elliptic-curve groups both seem to be pretty good for cryptography, but the mod-p groups are way less good if everyone is using the same few prime numbers p, and the elliptic-curve groups are way less good if everyone is using the same few elliptic curves. (A lot of these things do seem pretty predictable with hindsight, but how many did you predict?)
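To make the "same few primes p" point concrete: the number field sieve for discrete logs splits into an enormous precomputation that depends only on the prime p, followed by a comparatively cheap per-target "descent". In L-notation (constants as I remember them from the Logjam paper, so treat them as approximate):

```latex
% NFS-DL cost split. The precomputation depends only on p, so it is
% amortized across every connection that uses the same prime.
L_p\left[\tfrac{1}{3}, c\right]
  = \exp\!\left((c + o(1))\,(\ln p)^{1/3}(\ln\ln p)^{2/3}\right)
% Precomputation (once per prime):          c \approx 1.923
% Individual log (per target, "descent"):   c \approx 1.232
```

That asymmetry is exactly why a handful of universally shared 1024-bit primes is so much worse than the same-size primes used in isolation.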
And that's why we need browsers to support curves other than NIST's P-256 and such. I know Chrome intends to, and I imagine Firefox isn't far behind. What's Microsoft's plan for the Edge browser regarding this? I haven't seen them say anything about it in all of their recent Edge-related posts.
Imagine that you've got to load code from 8 different domains for a website. Then your CPU is working flat out for 40ms. Probably more like 50ms on ARM.
That's a lot of time.
Don't try to discredit those with perf concerns when the operation they're complaining about is incredibly expensive; they have genuine concerns.
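If anyone wants to check the order of magnitude rather than argue about it, here's a quick microbenchmark sketch (X25519 via the pyca/cryptography package; the curve and iteration count are my choices, and the numbers vary a lot by CPU):

```python
# Sketch: timing the asymmetric part of an ECDHE handshake. X25519 and
# the iteration count are my assumptions; results vary widely by CPU.
import time
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# A fixed peer key, standing in for the server's ephemeral public key.
peer_public = X25519PrivateKey.generate().public_key()

N = 100
start = time.perf_counter()
for _ in range(N):
    # Fresh ephemeral key plus shared-secret derivation: roughly one
    # connection's worth of client-side ECDHE work.
    X25519PrivateKey.generate().exchange(peer_public)
elapsed = time.perf_counter() - start
print(f"{1000 * elapsed / N:.3f} ms per ephemeral ECDH")
```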
The link in my earlier post to djb's talk also discusses this issue, if you're interested.
As someone who has written an embedded webserver... including the underlying TCP/IP layer and the driver for the "supposedly NE2000 compatible" Realtek chip... on a Z80 clone using only about 4k of flash and ~1.5k-2k of RAM, I'm sympathetic to real performance limitations. (That device handled one minimum-size packet on only one socket at a time.)
That said, if you have a 1.2GHz chip, you have enough CPU for crypto. 40ms is a trivial cost for crypto, especially as you only use DH and pubkey to negotiate a symmetric key that isn't going to cause the same kind of CPU load.
There isn't anywhere close to a real performance limitation on that kind of platform, and I would regard any complaint about the performance on a >1GHz CPU as highly suspicious. When you have 1/100 or even 1/1000 the CPU cycles, that's something else entirely.
What's the cost of increasing, let's say, the key size for a webpage serving SSL content? Merely adding SSL has a non-negligible cost for sites with some traffic.
Then you're forgetting all the dedicated hardware that needs to deal with that encryption. Sometimes it's a smartcard, or a security token, sometimes it's a mobile phone.
I am not arguing that there isn't a cost associated with crypto. For almost all uses, the price of crypto is part of the cost of making something that connects to the internet. If you leave it out, you're creating an attractive nuisance and a potential liability for someone. If you use the bad defaults of 512-bit crypto, I suggest that any claim of a product being "secure" or "using SSL" is a lie.
A smartcard isn't plugging into the internet on its own. Whatever it reads the card can wrap everything in proper crypto.
> mobile phone
You have far more CPU than you need.
The exceptions are limited, such as a device that literally cannot do the crypto (I'm thinking of an old 1MHz micro). Note that these devices shouldn't be directly on the internet, either.
Is that a reasonable argument for the inertia over key sizes?