NSA could put undetectable “trapdoors” in crypto keys (arstechnica.com)
155 points by BerislavLopac on Oct 11, 2016 | 41 comments



Original research paper: https://eprint.iacr.org/2016/961.pdf

It annoys the heck out of me when news sites bury the link to the source two-thirds of the way through the article (nearly as much as when they don't link to it at all!).


Interestingly, the idea of these trapdoor primes goes back to 1992 (Gordon; see section 4, "Heidi hides her polynomials", of the paper above).


The way this headline reads, it sounds like NSA could, at any moment, flip a switch and _poof_ there's a backdoor in your crypto. I'd prefer to see some deeper explanation that gets us from "NSA" to "backdoor in your crypto."

Personally, I understand that NSA has made recommendations to NIST for decades. Those recommendations typically make it into standards. Those standards get implemented in software. I also understand there's lots of ~paranoia~ concern about anything NSA recommends and that oftentimes, software authors don't take their advice. (Sure, we don't know if the NSA recommendations are honestly excellent, or designed to facilitate backdooring; but not knowing whether the recommendations are trustworthy is part of the problem...)


They may put a backdoor into future crypto systems, and they may have already put a backdoor into existing crypto systems. Just as possibly, they may know a crypto system is weak and not tell anyone, or suggest an improvement to a proposed crypto system.

So really, unless you understand the advice, you can't judge their suggestions.


I find the headline fair, after reading the article. The implication of the article is that they "could have put" those trapdoors in, since there are primes that have actually been widely adopted following their recommendations. But if the headline actually read "could have put", then it would sound like an accusation, or speculation. By putting it in the present tense, it simply shows that it is possible for this to be performed. It's a very neutral phrasing.


"And, to this day, the DNSSEC specification for securing the Internet's domain name system limits keys to a maximum of 1,024 bits."

That is a factually incorrect statement. Currently the ZSK protecting the root zone is using a 2,048-bit RSA key. There are also many TLDs that are protected by 2,048-bit keys. There is absolutely nothing in DNSSEC that limits key length to 1,024 bits.



No, it's true, you get the choice between RSA and terrifying NIST ECDSA on the terrifying NIST P- curves.


There's still the 25519 curve, whose constants are highly constrained by reasonable criteria. DJB most probably hasn't put any backdoor in those.

Plus, as far as big number arithmetic goes, it's relatively easy to implement.


I just want to point out that the P-curves need not have unique features that allow for more efficient computation during code breaking in order to be "weak" or "backdoored". The mere fact that the curves are standardized may allow expedited code breaking. We saw this with POISON NUT/CORAL REEF, where specialized hardware (the PIQ Blade), pre-computation work, and an otherwise fine, small, standardized set of primitives allowed encryption to be broken rapidly.


25519 is great. But you can't use it in DNSSEC.



Oh, yeah, standards.

I believe someone here said you said one shouldn't rely on DNSSEC anyway?


We're on a subthread about DNSSEC, not about curves.


Oops, sorry.


As someone who is an amateur at cryptography, are you being earnest or facetious about the terrifying nature of ECDSA?


Both ECDSA and the P- curves are terrifying. Not because NIST backdoored either of them, but because they're both comically easy to fuck up.

They are last-generation legacy elliptic curve operations. Cryptographers have moved on to significantly better, safer designs.

It's 2016 and the vanguard of cryptographic modernity in DNSSEC is P-curve ECDSA --- if you use it, you're taking a risk that some deployed resolvers won't understand your zones. The breaking-change new crypto in DNSSEC is already obsolete.


But DNSSEC is pretty much a false sense of security. See the copious and lengthy critiques of it from tptacek here on HN. This blog post (also from him) is a good start:

https://sockpuppet.org/blog/2015/01/15/against-dnssec/


Given that the body which holds the root zone keys is the same body which delegated the zones in the first place, it seems like this is about the best you can get with DNS itself. If we didn't trust the root, then DNS would be pointless anyway, since end users would need to be mailed a public key before they could get any records.

I do agree that DANE is a horrible idea for exactly the reasons he lays out, though. I would never trade in private CAs for a PKI owned by either the UN or the U.S. Government (although the USG would have some obligation not to suppress my zone without a court order, as I am a citizen). The UN is enough of a joke that they appointed Saudi Arabia to serve on the Human Rights Council (granted, that expires next year); this year the Philippines joined.


Did this change? As recently as a few weeks ago the root ZSK was 1024 bits.


http://www.circleid.com/posts/20160928_increasing_strength_o...

Changed October 1, 2016. So yes, very recent.


This underscores the need for flexible security that doesn't rely on magic numbers. If an algorithm needs a number that can't be one of the obvious ones (0, 1, 2, 3, 2^x - 1, the largest prime < 2^x), it should be generated.
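
For what it's worth, here's a minimal sketch of doing that with the pyca/cryptography package (assuming a recent version where the backend argument is optional; generating a fresh 2048-bit group can take a noticeable amount of time):

    # Minimal sketch: generate fresh DH parameters at deployment time
    # instead of shipping a hardcoded, well-known prime.
    # Assumes a recent pyca/cryptography release.
    from cryptography.hazmat.primitives.asymmetric import dh

    params = dh.generate_parameters(generator=2, key_size=2048)
    nums = params.parameter_numbers()
    print("p =", hex(nums.p))
    print("g =", nums.g)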


I would assume that DH typically uses pre-baked primes instead of calculating them for a reason. One possible reason is that pre-baked primes can be designed to avoid certain pitfalls (e.g. primes close to powers of 2, which the article implies can be attacked with a much faster algorithm) or otherwise to be resistant to certain cryptographic attacks. I don't know enough about DH to know if that's really the case here or if it's merely done to avoid having to compute fresh large primes, but I'm going to guess there's a good reason for it.


> One possible reason is that pre-baked primes can be designed to avoid certain pitfalls (e.g. primes close to powers of 2, which the article implies can be solved using the much faster algorithm) or otherwise to be resistant against certain cryptographic attacks.

Why can't those primes be calculated on the fly? Checking a number for being within a certain distance of a power of two is well within the programming capability of a freshman CS student.
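
A rough sketch of that check (the margin threshold here is an arbitrary illustration, not a vetted criterion):

    # Sketch: flag prime candidates that sit suspiciously close to a power
    # of two. The margin_bits threshold is arbitrary and only illustrative.
    def too_close_to_power_of_two(p, margin_bits=64):
        k = p.bit_length()
        for n in (k - 1, k):                      # powers of two bracketing p
            if abs(p - (1 << n)) < (1 << margin_bits):
                return True
        return False

    assert too_close_to_power_of_two(2**1024 - 189)                  # just below 2^1024
    assert not too_close_to_power_of_two(2**1023 + 2**512 + 12345)   # far from both neighbours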

> I don't know enough about DH to know if that's really the case here or if it's merely done to avoid having to compute fresh large primes, but I'm going to guess there's a good reason for it.

Given the incentives involved, I see strong reason to believe this is not the case.


> Why can't those primes be calculated on the fly? Checking a number for being within a certain distance of a power of two is well within the programming capability of a freshman CS student.

You're assuming that that's the only criterion for a bad prime, but as this article points out, it's not.

> Given the incentives involved, I see strong reason to believe this is not the case.

Incentives where? The incentive for nearly everybody involved in making crypto is to be secure. The NSA wants to be able to break crypto, but the NSA didn't write the software that uses these primes (e.g. Apache).


I also assume there is a good reason. I'm curious: what is that reason?

From the article...

    > Unlike prime numbers in RSA keys, which are always
    > supposed to be unique, certain Diffie-Hellman primes
    > are extremely common.
Does anyone know why?


Basically, because people thought it was safe. It didn't seem risky to use the same prime, since the key exchange process involves both Alice and Bob each generating a secret. Solving the discrete logarithm of g^a mod p was still thought to be intractable given a large enough (1024-bit) p.

As to the technical reason why the DH parameters were reused: speed and efficiency. A DH key exchange would take longer if you had to generate those parameters each time.
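
To make that concrete, here's a toy sketch with deliberately tiny textbook numbers (no validation, nowhere near real group sizes), just to show that p and g are public and shared while a and b are fresh per exchange:

    # Toy Diffie-Hellman sketch: p and g are the shared "pre-baked" group
    # parameters; a and b are generated fresh for each exchange.
    import secrets

    p, g = 23, 5                        # toy-sized shared parameters

    a = secrets.randbelow(p - 2) + 1    # Alice's ephemeral secret
    b = secrets.randbelow(p - 2) + 1    # Bob's ephemeral secret

    A = pow(g, a, p)                    # Alice sends A to Bob
    B = pow(g, b, p)                    # Bob sends B to Alice

    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob   # both sides derive the same shared secret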


I agree with your comment, but I'd like to add that if you speedily/efficiently do the wrong thing, it's no good. The first step of optimization is correctness.


So much discussion about what crypto to use, when all of this relies on a broken CA system that everyone in power can abuse... This should be priority #1.


The bottom line on this isn't that NSA can hide backdoors in crypto (we already knew that). It's that ensuring clean parameters for DH is comparably difficult to ensuring clean parameters for elliptic curve: you need "nothing up my sleeves" prime numbers for DH the same way you need "nothing up my sleeves" coefficients for curves.

What this really is, is one less reason to use conventional Diffie-Hellman over Elliptic Curve Diffie-Hellman.
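
One common way to get "nothing up my sleeve" numbers is to derive each candidate deterministically from a public seed and keep only candidates that pass a primality test, so anyone can re-run the derivation. A rough sketch of the idea (the seed string, the SHA-256 expansion, and the use of sympy.isprime are illustrative assumptions, not any standardized procedure):

    # Rough sketch of deriving a DH prime candidate from a public, auditable
    # seed so anyone can reproduce the derivation. Seed string, expansion
    # scheme, and sympy.isprime are illustrative choices only. The counter
    # search can take a while at 2048 bits.
    import hashlib
    from sympy import isprime   # assumed available; any strong primality test will do

    def candidate_from_seed(seed: bytes, bits: int, counter: int) -> int:
        out = b""
        block = 0
        while len(out) * 8 < bits:
            out += hashlib.sha256(seed + counter.to_bytes(4, "big")
                                  + block.to_bytes(4, "big")).digest()
            block += 1
        n = int.from_bytes(out[:bits // 8], "big")
        return n | (1 << (bits - 1)) | 1    # force full bit length and oddness

    seed, bits, counter = b"public auditable seed", 2048, 0
    while not isprime(candidate_from_seed(seed, bits, counter)):
        counter += 1
    print("counter =", counter, "p =", hex(candidate_from_seed(seed, bits, counter)))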


Hijacking: Schoen and I got sidetracked a day or so back on the concept of honeytrap IDS indicators. Things like fictitious entries in maps and databases, or Cliff Stoll's fake SDI project at LBL. Anything which, if tickled live, lets you know there's been a leak.

Are you aware of work in this area?


It is one step worse than that: the evil prime numbers can be undetectable. That's what's new in the paper the article is based on.


That has long been the concern about elliptic curve parameters as well. (Presumably, we agree: ECDH > DH).


Ahem, no. The NSA should spend their time protecting systems and strengthening encryption instead of this nonsense. Yes, yes, code breaking is their mandate. I have a new mission.


That's correct in that we get what we pay for. Right now we are paying tens of billions for pervasive surveillance and tens of millions to enhance security.


This is why there have long been questions about NIST ECC constants:

https://www.schneier.com/blog/archives/2013/09/the_nsa_is_br...

( https://en.wikipedia.org/wiki/Bruce_Schneier )


Remember how differential cryptanalysis was discovered by the academic community and subsequently the DES S-boxes pre-dating that discovery were found to specifically make DES resistant to that attack?


It would have been nice if the article spent a paragraph or two on what constitutes a trapdoor prime. The only thing I saw was that trapdoor primes are close to a power of two.

All tease, no delivery.


If you are extremely paranoid about security, you generate your own primes and do a lot of (supercomputer-style) computation to verify their strength. I have not read this whole paper yet, but it seems like there is growing concern about widely used primes published by authorities without any proof of how they came up with the numbers.


The problem is that since the qualities of these special semi-primes that allow SNFS to work on them are not very well understood, there is no easy way to test whether an attacker could use SNFS rather than GNFS. Additionally, running SNFS (or GNFS) on a semi-prime is not guaranteed to result in a factorization. Unlucky polynomial choice, for example, is one of the reasons such an attempt may fail. So you could expend massive resources attempting an SNFS factorization and still not have a clear answer.

If you go into the Linux kernel (or other software) and look at the randomness testing and primality testing code, you can see that it is not extremely complex. Basic checks include things like making sure that a stream of data from /dev/random is not just a repetition of the pattern 1010101010101010... Prime candidates are typically checked with trial division routines. After making it through the basic checks, software will typically do something more advanced and computationally expensive. Usually, a Miller-Rabin test is run on the candidate number. That is usually all there is to it. Extensive verification of cryptographic primitives ("random" large primes, etc.) is typically not feasible.
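
As a rough illustration of that pipeline (cheap trial division by small primes first, then probabilistic Miller-Rabin rounds; the round count and the small-prime list are arbitrary choices for the sketch):

    # Sketch of the usual candidate-checking pipeline: trial division by
    # small primes, then probabilistic Miller-Rabin rounds. The round count
    # and small-prime list are arbitrary illustrative choices.
    import secrets

    SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

    def is_probable_prime(n: int, rounds: int = 40) -> bool:
        if n < 2:
            return False
        for q in SMALL_PRIMES:              # cheap trial division stage
            if n == q:
                return True
            if n % q == 0:
                return False
        d, r = n - 1, 0                     # write n - 1 as d * 2^r with d odd
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):             # Miller-Rabin stage
            a = secrets.randbelow(n - 3) + 2    # random base in [2, n-2]
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False                # witness found: definitely composite
        return True                         # probably prime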


This is why you must use Honey Encryption https://en.wikipedia.org/wiki/Honey_Encryption



