Juniper: Recording Some Twitter Conversations (imperialviolet.org)
335 points by tptacek on Dec 19, 2015 | hide | past | favorite | 38 comments

Stop and consider for a second how crazy this page is:


Dual_EC is a PKRNG. PKRNGs are a kind of cryptographically secure pseudorandom number generator (CSPRNG). All the crypto keys in modern cryptosystems come from CSPRNGs.

PKRNGs are special because they embed a public key in the generator. Anyone who holds the corresponding private key can "decrypt" the output of the RNG and recover the generator's "state"; once they have that, they can fast-forward and rewind through it to find all the other numbers (read: crypto keys) it can generate.

Juniper is here saying that they recognize the problem of Dual_EC --- it's a PKRNG, and the USG may hold its private key.

So instead, they generated their own private keys and embedded them in the CSPRNGs of the VPNs they sold to customers.


But see also this thread:


It depends on how P and Q are generated.

The NIST document specifying Dual EC offers default values for each curve. P is the usual base point for the curve; an arbitrary point Q is provided without justification or details of its generation.

Because the NIST curves have cofactor 1, all points other than the identity generate the same subgroup. This means any two points P and Q are related by some scalar d such that d * P = Q. Knowledge of d is the back door in the generator.

This also implies a simple means for choosing Q given P: pick a random integer d and calculate Q = d * P. Publish P and Q and then write down d someplace safe. This is exactly how NSA is speculated to have chosen the Dual EC parameters.

However, the NIST document also specifies a method for generating alternative points. It boils down to hashing a random seed and mapping the result to a curve point. If you generate the base points P and Q like this, the relationship between them is unknown. The scalar d still exists, but now no one knows what it is. Without that knowledge, there is no back door.
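To make the two generation methods concrete, here is a toy sketch in Python on a deliberately tiny curve (y^2 = x^3 + x + 1 over F_23, a textbook example, not a real cryptographic curve). The hash-to-point routine is a simplified try-and-increment; the actual NIST procedure differs in detail, and the seed string here is made up:

```python
import hashlib

# Tiny textbook curve y^2 = x^3 + x + 1 over F_23 (illustration only;
# real Dual EC uses NIST curves like P-256).
p, a, b = 23, 1, 1
G = (0, 1)  # base point P; None represents the point at infinity

def add(P1, P2):
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P1):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1: R = add(R, P1)
        P1 = add(P1, P1)
        k >>= 1
    return R

# Method 1 (backdoor-friendly): pick a random scalar d, publish Q = d*P,
# write d down somewhere safe.
d = 5
Q_evil = mul(d, G)

# Method 2 ("nothing up my sleeve"): hash a public seed to an x-coordinate
# and lift it to a curve point. The scalar relating G and Q_honest still
# exists, but nobody knows it.
def hash_to_point(seed):
    x = int.from_bytes(hashlib.sha256(seed).digest(), "big") % p
    while True:
        rhs = (x**3 + a * x + b) % p
        y = pow(rhs, (p + 1) // 4, p)  # modular sqrt, valid since p % 4 == 3
        if y * y % p == rhs:
            return (x, y)
        x = (x + 1) % p

Q_honest = hash_to_point(b"public seed")
```

Both methods yield a perfectly ordinary-looking point; nothing about Q_evil reveals that a trapdoor scalar exists, which is exactly why the provenance of the published parameters matters.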

It's not clear from that page how Juniper chose the parameters. Maybe they picked a random scalar and multiplied P by it, or maybe they followed the standard's seed-based procedure. The information on that page isn't enough to say one way or the other.

EDIT: Just to be clear, I'm not saying this isn't something to worry about. You should distrust and avoid anything that relies on Dual EC. I'm only saying there is not enough information to say definitively that Juniper put a back door in their own product, intentionally or otherwise.

Certicom did the same thing. The argument given was that these companies have less access to data interception, therefore the impact of a maliciously-generated set of parameters is lower. That's absolutely crazy.

The real danger of including Dual EC in any system is the risk it adds to the code. Instead of having to insert a large, easy-to-detect passive-decryption backdoor, an attacker only has to change a few bytes, sometimes just a pointer value. Once that's done, Dual EC is often ridiculously easy to attack:


Just out of interest, is there any legitimate use of a PKRNG or is it just a backdoor enabler?

Not totally an expert, but the explanation of what a PKRNG is seems overly simplified. The problem with Dual EC is not that it is a PKRNG, but that certain design choices allegedly allowed agencies with enough computing power to plant a backdoor.

Basically, one of the problems you are trying to solve with a CSPRNG is how to avoid disclosing the generator's state through the random numbers it outputs. Obviously, once the state is known it is easy to replay the sequence.

Since we don't like to reinvent the wheel every time, a solution is to leverage known properties of hashing and encryption algorithms. The idea behind a PKRNG is to use a fairly simple state-evolution function but then encrypt the output with a known public key. Since a property of public-key encryption is that you can't recover the message without the private key, the state is safe from everybody except the owner of that key. If you then truncate the encrypted output and choose the public key without ever computing the private key, you get a very strong CSPRNG. To be clear:

- There are procedures to generate a public key without generating the corresponding private key (but you have to trust the person who generated the numbers).

- The encrypted output is truncated, so even if you hold the private key, you have to guess the missing bits of the message to recover the state.

The problem with Dual EC is that the resulting output is not truncated enough (it is enough to protect against a casual attacker, but not against an organization with massive computing power like an intelligence agency). Plus, doubts were cast on the procedure used to generate the public key, given that you were forced to use the points from the standard, not your own, if you wanted to get certified.
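The state-recovery attack described above can be sketched end to end in Python on a tiny toy curve (the same textbook curve y^2 = x^3 + x + 1 over F_23; all parameters here are made up for illustration). For clarity the output is not truncated at all; the real Dual EC drops only the top 16 bits, which an attacker can brute-force:

```python
# Toy Dual-EC-style generator with a known trapdoor (illustration only).
p, a, b = 23, 1, 1
G = (0, 1)   # the base point P; None is the point at infinity
n = 28       # order of G on this tiny curve
d = 5        # the secret scalar: Q = d*P

def add(P1, P2):
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P1):
    R = None
    while k:
        if k & 1: R = add(R, P1)
        P1 = add(P1, P1)
        k >>= 1
    return R

Q = mul(d, G)
e = pow(d, -1, n)  # attacker's trapdoor: e*Q = G

def dualec_step(s):
    # Next state is x(s*P); the published output is x(s*Q).
    # (Untruncated here; real Dual EC drops 16 bits per block.)
    return mul(s, G)[0], mul(s, Q)[0]

s1 = 7
s2, t1 = dualec_step(s1)
s3, t2 = dualec_step(s2)

# Attacker sees only t1. Lift it back to a point R with x(R) = t1;
# then x(e*R) = x(s1*e*Q) = x(s1*P) = s2, the next internal state.
# (Either lift of t1 gives the same x-coordinate after multiplying.)
rhs = (t1**3 + a * t1 + b) % p
y = pow(rhs, (p + 1) // 4, p)  # modular sqrt, valid since p % 4 == 3
R = (t1, y)
s2_recovered = mul(e, R)[0]
t2_predicted = mul(s2_recovered, Q)[0]
```

With the recovered state, the attacker can replay every future output of the generator, which is exactly the key-escrow property being discussed.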

There's no reason to use a public-key transform to generate random bits other than to leverage the fact that the transform is trapdoored with a private key. They are otherwise cost-prohibitive.

If you can point to a "good" PKRNG that sees any use, that would be an interesting way to rebut my argument.

The consensus opinion among cryptographers is converging on "no"; that the only reason to use a PKRNG is if you want key escrow.

And I don't think Dual_EC is even a particularly good RNG either.

This is a perfect example of why having a suspect feature present at all is problematic. It's usually just one configuration or compile-time switch away from being used in real situations. The only reasonable response to the Dual_EC backdoor is to delete the code entirely; keeping it around is asking for situations like this.

It's a lot easier to sneak a build or configuration flag in somewhere than it is to sneak in an entire CSPRNG subsystem.

Anybody who (as I do) finds this kind of thing fascinating should go back and read Young and Yung's work on kleptography from the late 90's: http://www.cryptovirology.com/cryptovfiles/research.html

They were way, way ahead of their time.

> So instead, they generated their own private keys and embedded them in the CSPRNGs of the VPNs they sold to customers.

Isn't it possible to still get this PRNG right by discarding the private part?

Not that the mere use of such an infamous algorithm isn't crazy anyway, especially since even if they did the right thing, they can't prove it.

Depends on how hard it is to break. Even if it takes a while and a lot of money, for something as widespread as their firewalls, it might have been worth the effort to someone. Of course, only if it is in the category of "a while and expensive", not "impossible".

Surely that would be the response of someone who believes his systems have been compromised and looks to invalidate all the keys an attacker could have gained access to? Or were the previous points part of the standard?

Is this going to become the primary case study on why back doors are a bad idea?

If so, it's important to get a quality layman's explanation out fast, and this is the framework of a great one.

No. It will not. Backdoors are fine as long as they are NOBUS and the master key remains safe. Notice, for example, how every major corporation has a "backdoor" in all their employees' hard drive encryption schemes, except they call them "data recovery options".

The assumption of the master key remaining safe is the problem though. If e.g. the government forced a known back door into encryption systems, that master key would become the single juiciest, most delicious target of every cyber criminal and foreign intelligence agency in the world. It's hard to keep something like that secure while making it accessible enough to actually use for its intended purpose.

We have ways of keeping keys like that secure. For example, the master recovery keys used for HD encryption on NSA / Google / Lockheed laptops. Those are all pretty valuable, and yet they've been kept secure.

I don't think legislating away single-user encryption is a good idea, but not because backdoors are inherently bad.

It's possible under ideal circumstances, but it's an uphill battle in the real world. The examples you gave are small potatoes compared to how frequently a master key for criminal cases would be needed, and they carry dramatically lower risk. If the NSA leaked their hard drive encryption keys, hundreds of people would probably die. Bad, yes, but if the master key to (virtually) all encryption schemes in the country leaked, many people would end up getting killed and it would be an economic disaster. Not to mention the potential for limitless privacy abuse, whether in the name of a righteous cause or not.

I'm sure they could work out some sort of subkey scheme and key splitting (via e.g. Shamir's scheme) to lower the possibility of compromise and reduce damage if a subkey was leaked, but the possibility won't be zero. And with that big of a target the number of people trying to acquire the keys means the chance of the keys eventually being leaked is pretty good.
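The key-splitting idea mentioned above can be sketched with a minimal 2-of-3 Shamir scheme over a prime field. This is only an illustration of the concept (no secure randomness hygiene, no share authentication), not a production implementation:

```python
import random

# Minimal 2-of-3 Shamir secret sharing over a prime field.
PRIME = 2**127 - 1  # a Mersenne prime, comfortably larger than the secret

def split(secret, n=3, k=2):
    # Random polynomial of degree k-1 with f(0) = secret; shares are (i, f(i)).
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, j, PRIME) for j, c in enumerate(coeffs)) % PRIME
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0), the secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any two of the three shares reconstruct the key; any single share reveals nothing, which is what limits the damage of one custodian being compromised.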

Nah, they'll just say that an attack has to be carefully crafted, and that will lull everyone to sleep.

I would love to know what the commits in the source code look like for these. Author, message, date.

How did they bypass the review process? Was the process socially engineered, or was the repo hacked directly?

What will the new process be to ensure this doesn't happen again?

This has implications for the process at most companies.

  How did they bypass the review process?
You wouldn't have to - you could wait for an opportunity to slip it into a huge, mundane change.

For example, changing the repository from Mercurial to Git. Splitting or combining two repositories. Moving lots of files between directories. Running an autoformatter over the entire codebase. Something like that.

I'd wager there aren't many reviewers with the patience and attentiveness to spot 1 evil change among 1000 trivial changes.

We would all love to see how it went down. But I'd bet good money that Juniper will not reveal what happened.

Sometimes I wonder how difficult is it for $TLA to send their employees off into the world (on their payroll), as sleeper coders, to be activated whenever a small piece of code needs to be inserted in the right place. You don't even need too many; just a handful would be sufficient.

But are sleeper agents reliable enough? Considering they would be living in peace in a modern and comfortable country, wouldn't they hesitate to act when activated?

Sleeper agents should be reliable enough, considering that they (recruited long in the past) want to keep their family and loved ones safe -- it would really be a pity if some stupid accident happened to any of them on their way home... Sounds morbid, yes, but rest assured, with enough financial "motivation" and "moral flexibility" you can always get others to do as demanded.

Ah, (you're) right. My game on these matters isn't top-notch.

What if the sleeper "agent" is employed by the NSA?

Has there been any disclosure of when these changes were made? NetScreen was a Juniper acquisition, so they could date back quite a while to a frenetic startup environment and misguided/malicious employees in those days.

Disclosure: I also worked for Juniper by being in a startup acquisition. I don't remember any audits of our existing code.

I think all evidence shows that it has probably been there since at least 2012.

It's present in ScreenOS 6.2, released in 2008 - and probably in even earlier versions too.

Juniper acquired NetScreen in 2004, with NetScreen itself having only just acquired Neoteris. I'd love to know just how far back this issue goes!

So, if we assume this is indeed a backdoored Dual_EC PKRNG - how are those typically initialized? Are we looking at something equivalent to the Debian ssh/ssl bug, where we have some millions of "known bad keys", or is it more likely each case is different (i.e., some knowledge of the state is needed for a useful attack)?

Does anyone have a pointer to a proof-of-concept "evil" (or "escrow-enabled") system based around such an RNG?

It's not like the Debian bug, because that would be something everyone can detect and exploit. The interesting aspect of this backdoor is that it can only be abused by someone who holds the secret scalar used to generate the parameters.

There's been quite some research about the exploitability of this construction: http://dualec.org/ https://projectbullrun.org/dual-ec/

If you recall, Juniper was a target of the China-backed Aurora attacks (http://www.marketwatch.com/story/juniper-networks-investigat...), which were in 2010.

Once the backdoor administration password is posted publicly, we can try it against older versions of the ScreenOS code and, by process of elimination, find out how long ago it was added.

Not the most serious issue but, how come they're encoding these constants as ASCII strings?

Usually it's because you only have to read them once, and you're uncertain about the most compatible way to encode bignums. You're often generating parameters on one system or in one language, but then using them in another.

ASCII is slow but everyone interprets it the same way.
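The portability argument is easy to see in miniature: a bignum emitted as an ASCII hex string by one tool can be parsed unambiguously by any other, with no endianness or word-size concerns. (The constant below is made up for illustration, not Juniper's actual parameter.)

```python
# A hypothetical curve parameter, serialized as an ASCII hex string.
qx_ascii = "0123456789ABCDEF"

# Any language parses this the same way, regardless of platform.
qx = int(qx_ascii, 16)

# Round-trip back to the identical ASCII form (zero-padded, uppercase).
qx_roundtrip = format(qx, "016X")
```

Binary encodings would need agreement on byte order and word size between the tool that generated the parameters and the firmware that consumes them; the string avoids all of that.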

Meh, sounds lazy to me!

It's lazy, but it's the good kind of lazy.

No Such Agency...
