It annoys the heck out of me when news sites bury the link to the source two-thirds of the way through the article (nearly as much as when they don't link to it at all!).
Personally, I understand that NSA has made recommendations to NIST for decades. Those recommendations typically make it into standards. Those standards get implemented in software. I also understand there's lots of ~paranoia~ concern about anything NSA recommends and that oftentimes, software authors don't take their advice. (Sure, we don't know if the NSA recommendations are honestly excellent, or designed to facilitate backdooring; but not knowing whether the recommendations are trustworthy is part of the problem...)
So really, unless you understand the advice, you can't judge their suggestions.
That is a factually incorrect statement. Currently the ZSK protecting the root zone is a 2,048-bit RSA key. There are also many TLDs that are protected by 2,048-bit keys. There is absolutely nothing in DNSSEC that limits key length to 1,024 bits.
Plus, as far as big number arithmetic goes, it's relatively easy to implement.
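To the point about big-number arithmetic being relatively easy: the core operation behind RSA (and DH) is modular exponentiation, which a square-and-multiply loop implements in a few lines. A minimal sketch in Python (the function name `modexp` is my own; real implementations also need constant-time behavior, which this toy version does not have):

```python
def modexp(base, exp, mod):
    """Right-to-left square-and-multiply modular exponentiation."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                      # current bit set: multiply it in
            result = (result * base) % mod
        base = (base * base) % mod       # square for the next bit
        exp >>= 1
    return result

# Matches Python's built-in three-argument pow():
assert modexp(7, 560, 561) == pow(7, 560, 561)
```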
I believe someone here said you said one shouldn't rely on DNSSEC anyway?
They are last-generation legacy elliptic curve operations. Cryptographers have moved on to significantly better, safer designs.
It's 2016 and the vanguard of cryptographic modernity in DNSSEC is P-curve ECDSA --- if you use it, you're taking a risk that some deployed resolvers won't understand your zones. The breaking-change new crypto in DNSSEC is already obsolete.
I do agree that DANE is a horrible idea for exactly the reasons he lays out though. I would never trade in private CAs for a PKI owned by either the UN or the U.S. Government (although USG would have some obligation not to suppress my zone without a court order, as I am a citizen). The UN is enough of a joke that they appointed Saudi Arabia to serve on the Human Rights Council (granted that expires next year); This year the Philippines joined.
Changed October 1, 2016. So yes very recent.
Why can't those primes be calculated on the fly? Checking a number for being within a certain distance of a power of two is well within the programming capability of a freshman CS student.
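That particular check really is freshman-level; a sketch (the helper name `near_power_of_two` is mine), with the caveat from the replies below that closeness to a power of two is only one of several criteria that make a prime weak:

```python
def near_power_of_two(n, max_distance):
    """True if n is within max_distance of some power of two.

    Note: this is only ONE weak-prime criterion; it says nothing
    about the other structural properties a bad prime can have.
    """
    p = 1 << (n.bit_length() - 1)     # largest power of two <= n
    return min(n - p, 2 * p - n) <= max_distance

assert near_power_of_two(2**512 - 569, 1024)       # just below 2^512
assert not near_power_of_two(10**100 + 7, 2**32)   # nowhere near a power of two
```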
> I don't know enough about DH to know if that's really the case here or if it's merely done to avoid having to compute fresh large primes, but I'm going to guess there's a good reason for it.
Given the incentives involved, I see strong reason to believe this is not the case.
You're assuming that that's the only criterion for a bad prime, but as this article points out, it's not.
> Given the incentives involved, I see strong reason to believe this is not the case.
Incentives where? The incentive for nearly everybody involved in making crypto is to be secure. The NSA wants to be able to break crypto, but the NSA didn't write the software that uses these primes (e.g. Apache).
From the article...
> Unlike prime numbers in RSA keys, which are always supposed to be unique, certain Diffie-Hellman primes are extremely common.
As to the technical reason why the DH parameters were reused: speed and efficiency. A DH key exchange would take longer if you had to generate those parameters each time.
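The speed argument follows from the shape of the exchange: once a group (p, g) is fixed, the per-handshake work is just a few modular exponentiations, so reusing one standardized group (e.g. a 2048-bit RFC 3526 prime) amortizes the expensive safe-prime generation across every handshake. A toy sketch of the exchange itself (parameters are deliberately tiny and insecure, for illustration only):

```python
import secrets

# Toy parameters for illustration only. Real deployments reuse a
# standardized group (e.g. a 2048-bit RFC 3526 prime) so that the
# slow safe-prime generation happens once, not per handshake.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

A = pow(g, a, p)                    # Alice sends A to Bob
B = pow(g, b, p)                    # Bob sends B to Alice

# Both sides derive the same shared secret without transmitting it.
assert pow(B, a, p) == pow(A, b, p)
```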
What this really is, is one less reason to use conventional Diffie-Hellman over Elliptic Curve Diffie-Hellman.
Are you aware of work on this area?
( https://en.wikipedia.org/wiki/Bruce_Schneier )
All tease, no delivery.
If you go into the Linux kernel (or other software) and look at the randomness testing and primality testing code, you can see that it is not extremely complex. Basic checks include things like making sure that a stream of data from /dev/random is not just a repetition of the pattern 1010101010101010... Prime candidates are typically checked with trial division routines. After making it through the basic checks, software will typically do something more advanced and computationally expensive. Usually, a Miller-Rabin test is run on the candidate number. That is usually all there is to it. Extensive verification of cryptographic primitives ("random" large primes, etc) is typically not feasible.
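The pipeline described above, cheap trial division followed by a Miller-Rabin loop, fits in a short function. A minimal sketch (the round count and the small-prime list are arbitrary choices, and production code would use deterministic witness sets or more rounds):

```python
import random

def is_probable_prime(n, rounds=40):
    """Trial division by small primes, then probabilistic Miller-Rabin."""
    if n < 2:
        return False
    # Cheap check: trial division by a handful of small primes.
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    # Expensive check: Miller-Rabin with random witnesses.
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # 'a' is a witness that n is composite
    return True            # probably prime

assert is_probable_prime(2**127 - 1)          # a Mersenne prime
assert not is_probable_prime((2**61 - 1)**2)  # composite, no small factors
```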