Dual_EC is a PKRNG. PKRNGs are a kind of cryptographically secure random number generator (CSPRNG). All the crypto keys in modern cryptosystems come from CSPRNGs.
PKRNGs are special because they embed a public key in the generator. Anyone who holds the corresponding private key can "decrypt" the output of the RNG and recover the generator's "state"; once they have that, they can fast-forward and rewind through it to find all the other numbers (read: crypto keys) it can generate.
Juniper is here saying that they recognize the problem of Dual_EC --- it's a PKRNG, and the USG may hold its private key.
So instead, they generated their own private keys and embedded them in the CSPRNGs of the VPNs they sold to customers.
But see also this thread:
The NIST document specifying Dual EC offers default values for each curve. P is the usual base point for the curve; an arbitrary point Q is provided without justification or details of its generation.
Because the NIST curves have cofactor 1, all points other than the identity generate the same subgroup. This means any two points P and Q are related by some scalar d such that d * P = Q. Knowledge of d is the back door in the generator.
This also implies a simple means for choosing Q given P: pick a random integer d and calculate Q = d * P. Publish P and Q and then write down d someplace safe. This is exactly how NSA is speculated to have chosen the Dual EC parameters.
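A minimal sketch of that parameter-generation step, in pure Python on P-256 (the curve used for the standard's default parameters). The point arithmetic here is a bare-bones illustration, not production code:

```python
import secrets

# P-256 domain parameters (FIPS 186-4); the curve coefficient a = -3 is
# folded into the doubling formula in ec_add below
p = 2**256 - 2**224 + 2**192 + 2**96 - 1
b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551
Gx = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
Gy = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5

def ec_add(A, B):
    # affine point addition; None plays the role of the point at infinity
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0: return None
    lam = ((3 * x1 * x1 - 3) * pow(2 * y1, -1, p) if A == B
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, A):
    # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1: R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

d = secrets.randbelow(n - 1) + 1   # the back door: write this down someplace safe
P = (Gx, Gy)                       # use the standard base point as P
Q = ec_mul(d, P)                   # publish P and Q; keep d secret
```

Anyone can verify that Q is a valid curve point; nothing about the published pair reveals that d exists, let alone its value.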
However, the NIST document also specifies a method for generating alternative points. It boils down to hashing a random seed and mapping the result to a curve point. If you generate the base points P and Q like this, the relationship between them is unknown. The scalar d still exists, but now no one knows what it is. Without that knowledge, there is no back door.
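The hash-based alternative can be sketched like this; it's a simplified try-and-increment stand-in for the actual SP 800-90A routine, which differs in detail, and the seed strings are made up for the example:

```python
import hashlib

# P-256 parameters (FIPS 186-4)
p = 2**256 - 2**224 + 2**192 + 2**96 - 1
b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B

def point_from_seed(seed: bytes):
    # Hash the seed, treat the digest as a candidate x-coordinate, and
    # keep incrementing a counter until the candidate lands on the curve.
    counter = 0
    while True:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        x = int.from_bytes(digest, "big") % p
        rhs = (x * x * x - 3 * x + b) % p      # y^2 = x^3 - 3x + b
        if pow(rhs, (p - 1) // 2, p) == 1:     # rhs is a quadratic residue
            y = pow(rhs, (p + 1) // 4, p)      # square root works since p % 4 == 3
            return (x, y)
        counter += 1

# Both points derive from public seeds, so nobody knows the d with Q = d*P
P = point_from_seed(b"public seed for P")
Q = point_from_seed(b"public seed for Q")
```

Publishing the seeds alongside the points lets anyone re-derive them and confirm no scalar was chosen by hand.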
It's not clear from that page how Juniper chose the parameters. Maybe they did choose a random scalar and multiply P, or maybe they followed the standard. The information on that page isn't enough to say one way or the other.
EDIT: Just to be clear, I'm not saying this isn't something to worry about. You should distrust and avoid anything that relies on Dual EC. I'm only saying there is not enough information to say definitively that Juniper put a back door in their own product, intentionally or otherwise.
The real danger of including Dual EC in any system is the risk it adds to the code. Instead of having to sneak in a large, easy-to-detect passive-decryption back door, an attacker only has to change a few bytes --- sometimes just a pointer value. Once that's done, Dual EC is often ridiculously easy to attack:
Basically, one of the problems you are trying to solve with a CSPRNG is how to avoid disclosing the generator's state through the random numbers it emits. Obviously, once the state is known it is easy to replay the sequence.
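A toy illustration of that replay problem, using a plain LCG (with glibc-style constants) as a stand-in for a generator that leaks its state --- the names here are invented for the example:

```python
class LCG:
    # A linear congruential generator whose output IS its internal state,
    # so observing one output hands an attacker everything that follows.
    def __init__(self, state):
        self.state = state

    def next(self):
        self.state = (1103515245 * self.state + 12345) % 2**31
        return self.state

victim = LCG(state=123456789)
outputs = [victim.next() for _ in range(5)]   # "keys" the victim generates

# An attacker who learns the state at any point replays everything after it:
attacker = LCG(state=outputs[0])
assert [attacker.next() for _ in range(4)] == outputs[1:]
```

A real CSPRNG has to make that seeding trick impossible, which is exactly the property the PKRNG construction below tries to get from public-key encryption.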
Since we don't like to reinvent the wheel every time, a solution is to leverage known properties of hashing and encryption algorithms. The idea behind a PKRNG is to use a fairly simple state-evolution function but then encrypt the output with a known public key. Since a property of public-key encryption is that you can't recover the message without the private key, the state is safe from everybody except the owner of that key. If you then truncate the encrypted message, and you choose the public key without ever computing the private key, you get a very strong CSPRNG. To be clear:
- There are procedures to generate a public key without generating the corresponding private key (but you have to trust whoever generated the numbers).
- The encrypted message is truncated, so even someone holding the private key would have to guess the missing bits of the message to recover the state.
The problem with Dual EC is that the resulting encrypted message is not truncated enough (it is enough to protect against a casual attacker, but not against an organization with massive computing power like an intelligence agency). Plus, doubts were cast on the procedure used to generate the public key, given that you were forced to use the one in the standard, not your own, if you wanted to get certified.
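To make the truncation point concrete, here's a toy end-to-end sketch of the state-recovery attack. It simplifies the real generator (the state-update and output details differ from SP 800-90A), and it drops only 4 bits per output instead of the real 16, so the brute-force loop has 16 candidates rather than ~65,000 --- the structure of the attack is the same either way:

```python
import secrets

# P-256 parameters (FIPS 186-4)
p = 2**256 - 2**224 + 2**192 + 2**96 - 1
b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551
G = (0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296,
     0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5)

def ec_add(A, B):
    # affine point addition; None is the point at infinity
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0: return None
    lam = ((3 * x1 * x1 - 3) * pow(2 * y1, -1, p) if A == B
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, A):
    R = None
    while k:
        if k & 1: R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

TRUNC = 4                          # bits dropped per output; the real generator drops 16
MASK = (1 << (256 - TRUNC)) - 1

P = G
d = secrets.randbelow(n - 1) + 1   # the back-door scalar
Q = ec_mul(d, P)                   # the published "arbitrary" point

def step(state):
    # one simplified Dual EC round: update the state, emit a truncated output
    s = ec_mul(state, P)[0]        # next state = x-coordinate of state*P
    r = ec_mul(s, Q)[0]            # output point = s*Q
    return s, r & MASK             # publish all but the top TRUNC bits

state = secrets.randbelow(n - 1) + 1
state, out1 = step(state)
state, out2 = step(state)          # the attacker observes only out1 and out2

# Attack: lift out1 back to a point R = s*Q, then e*R = s*P (because
# e = d^-1 and Q = d*P), whose x-coordinate is the generator's state.
e = pow(d, -1, n)
recovered = None
for hi in range(1 << TRUNC):       # brute-force the truncated top bits
    x = (hi << (256 - TRUNC)) | out1
    rhs = (x * x * x - 3 * x + b) % p
    if x >= p or pow(rhs, (p - 1) // 2, p) != 1:
        continue                   # this x is not on the curve
    y = pow(rhs, (p + 1) // 4, p)  # square root (p % 4 == 3); either sign works
    s_cand = ec_mul(e, (x, y))[0]  # candidate for the current state
    if ec_mul(s_cand, Q)[0] & MASK == out2:   # does it predict the next output?
        recovered = s_cand
        break

assert recovered == state          # the attacker now predicts every future output
```

With the full 16-bit truncation the loop just runs longer; it stays trivially parallelizable and well within reach of anyone with a modest amount of compute.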
If you can point to a "good" PKRNG that sees any use, that would be an interesting way to rebut my argument.
It's a lot easier to sneak a build or configuration flag in somewhere than it is to sneak in an entire CSPRNG subsystem.
They were way, way ahead of their time.
Isn't it possible to still get this PRNG right by discarding the private part?
Not that mere use of such an infamous algorithm isn't crazy anyway, especially since even if they did the right thing they can't prove it.
If so, it's important to get a quality layman's explanation out fast, and this is the framework of a great one.
I don't think legislating away single-user encryption is a good idea, but not because backdoors are inherently bad.
I'm sure they could work out some sort of subkey scheme and key splitting (via e.g. Shamir's scheme) to lower the possibility of compromise and reduce the damage if a subkey leaked, but the possibility won't be zero. And with that big a target, the number of people trying to acquire the keys means the chance of an eventual leak is pretty good.
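For reference, a minimal Shamir k-of-n split-and-reconstruct over a prime field; the field size, share counts, and names are arbitrary choices for the sketch, not any particular escrow design:

```python
import secrets

P = 2**127 - 1  # a Mersenne prime larger than any secret we will split

def split(secret, k, n):
    # Random degree-(k-1) polynomial with constant term = secret;
    # each share is a point (x, f(x)). Any k points determine f.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

escrow_key = secrets.randbelow(P)
shares = split(escrow_key, k=3, n=5)
assert reconstruct(shares[:3]) == escrow_key    # any 3 of 5 shares suffice
assert reconstruct(shares[1:4]) == escrow_key
```

Fewer than k shares reveal nothing about the secret information-theoretically; the weak point is operational, exactly as the parent says --- the shares still have to live somewhere.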
How did they bypass the review process? Was the process socially engineered, or was the repo hacked directly?
What will the new process be to ensure this doesn't happen again?
This has implications for the process at most companies.
How did they bypass the review process?
For example, changing the repository from Mercurial to Git. Splitting or combining two repositories. Moving lots of files between directories. Running an autoformatter over the entire codebase. Something like that.
I'd wager there aren't many reviewers with the patience and attentiveness to spot 1 evil change among 1000 trivial changes.
Sometimes I wonder how difficult it would be for $TLA to send their employees off into the world (on their payroll) as sleeper coders, to be activated whenever a small piece of code needs to be inserted in the right place. You wouldn't even need many; just a handful would be sufficient.
Disclosure: I also worked for Juniper, by way of a startup acquisition. I don't remember any audits of our existing code.
Does anyone have a pointer to a proof-of-concept "evil" (or "escrow-enabled") system based around such an RNG?
There's been quite a bit of research on the exploitability of this construction:
Once the backdoor administration password is posted publicly, we can try to use it against older versions of ScreenOS code to do a process of elimination to find out how long ago it was added.
ASCII is slow but everyone interprets it the same way.