
NSA could put undetectable “trapdoors” in crypto keys - BerislavLopac
http://arstechnica.com/security/2016/10/how-the-nsa-could-put-undetectable-trapdoors-in-millions-of-crypto-keys/
======
jackgavigan
Original research paper:
[https://eprint.iacr.org/2016/961.pdf](https://eprint.iacr.org/2016/961.pdf)

It annoys the heck out of me when news sites bury the link to the source
two-thirds of the way through the article (_nearly_ as much as when they don't
link to it at all!).

~~~
kurlberg
Interestingly, the idea of these trapdoor primes goes back to 1992 (due to
Gordon; see Section 4, "Heidi hides her polynomials", of the paper above).

------
delinka
The way this headline reads, it sounds like NSA could, at any moment, flip a
switch and __poof__ there's a backdoor in your crypto. I'd prefer to see some
deeper explanation that gets us from "NSA" to "backdoor in your crypto."

Personally, I understand that NSA has made recommendations to NIST for
decades. Those recommendations typically make it into standards. Those
standards get implemented in software. I also understand there's lots of
~paranoia~ concern about anything NSA recommends and that oftentimes, software
authors don't take their advice. (Sure, we don't know if the NSA
recommendations are honestly excellent, or designed to facilitate backdooring;
but not knowing whether the recommendations are trustworthy is part of the
problem...)

~~~
Retric
They may put backdoors into future crypto systems, and they may have already
put a backdoor into existing crypto systems. Just as possibly, they may know a
crypto system is weak and not tell anyone, or they may suggest an improvement
to a proposed crypto system.

So really, unless you understand the advice yourself, you can't judge their
suggestions.

------
AndyMcConachie
"And, to this day, the DNSSEC specification for securing the Internet's domain
name system limits keys to a maximum of 1,024 bits."

That is a factually incorrect statement. Currently the ZSK protecting the root
zone is a 2,048-bit RSA key, and many TLDs are also protected by 2,048-bit
keys. There is absolutely nothing in DNSSEC that limits key length to 1,024
bits.

~~~
jgrahamc
Also DNSSEC doesn't have to use RSA: [https://blog.cloudflare.com/a-deep-dive-
into-dns-packet-size...](https://blog.cloudflare.com/a-deep-dive-into-dns-
packet-sizes-why-smaller-packet-sizes-keep-the-internet-safe/)

~~~
tptacek
No, it's true, you get the choice between RSA and terrifying NIST ECDSA on the
terrifying NIST P-curves.

~~~
loup-vaillant
There's still Curve25519, whose constants are tightly constrained by
reasonable criteria. DJB most probably hasn't put any backdoor in _those_.

Plus, as far as big number arithmetic goes, it's relatively easy to implement.
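To back up the "relatively easy to implement" claim: here is a minimal,
unoptimized X25519 scalar multiplication in pure Python, following the
Montgomery-ladder description in RFC 7748. It is a sketch for illustration
only, not a constant-time production implementation.

```python
P = 2**255 - 19      # the curve25519 field prime
A24 = 121665         # (486662 - 2) / 4, curve constant used by the ladder

def _clamp(k: int) -> int:
    k &= (1 << 255) - 8   # clear the low 3 bits and everything above bit 254
    k |= 1 << 254         # set bit 254
    return k

def x25519(k: int, u: int) -> int:
    """Scalar multiplication on curve25519 via the Montgomery ladder."""
    k = _clamp(k)
    x1 = u % P
    x2, z2 = 1, 0         # projective "point at infinity"
    x3, z3 = x1, 1
    swap = 0
    for t in range(254, -1, -1):
        bit = (k >> t) & 1
        swap ^= bit
        if swap:          # conditional swap driven by the scalar bits
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = bit
        # one combined double-and-add ladder step (RFC 7748, section 5)
        a = (x2 + z2) % P
        aa = a * a % P
        b = (x2 - z2) % P
        bb = b * b % P
        e = (aa - bb) % P
        c = (x3 + z3) % P
        d = (x3 - z3) % P
        da = d * a % P
        cb = c * b % P
        x3 = (da + cb) % P
        x3 = x3 * x3 % P
        z3 = (da - cb) % P
        z3 = z3 * z3 % P * x1 % P
        x2 = aa * bb % P
        z2 = e * ((aa + A24 * e) % P) % P
    if swap:
        x2, z2 = x3, z3
    return x2 * pow(z2, P - 2, P) % P   # x2/z2 via Fermat inversion

# Diffie-Hellman sanity check: both sides derive the same shared secret.
alice_secret, bob_secret = 123456789, 987654321
alice_pub = x25519(alice_secret, 9)   # 9 is the standard base point
bob_pub = x25519(bob_secret, 9)
assert x25519(alice_secret, bob_pub) == x25519(bob_secret, alice_pub)
```

The whole thing is a few dozen lines of integer arithmetic, which is the
point being made: compare that to a full short-Weierstrass ECDSA stack.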

~~~
tptacek
25519 is great. But you can't use it in DNSSEC.

~~~
loup-vaillant
Oh, yeah, _standards_.

I believe someone here said you said one shouldn't rely on DNSSEC anyway?

~~~
tptacek
We're on a subthread about DNSSEC, not about curves.

~~~
loup-vaillant
Oops, sorry.

------
imagist
This underscores the need for flexible security that doesn't rely on magic
numbers. If an algorithm needs a number that can't be one of the obvious ones
(0, 1, 2, 3, 2^x - 1, the largest prime < 2^x), it should be generated.

~~~
eridius
I would assume that DH typically uses pre-baked primes instead of calculating
them fresh for a reason. One possible reason is that pre-baked primes can be
designed to avoid certain pitfalls (e.g. primes close to powers of 2, which
the article implies can be attacked with a much faster algorithm) or otherwise
to resist certain cryptographic attacks. I don't know enough about DH to know
whether that's really the case here or whether it's merely done to avoid
having to compute fresh large primes, but I'm going to guess there's a good
reason for it.

~~~
imagist
> One possible reason is that pre-baked primes can be designed to avoid
> certain pitfalls (e.g. primes close to powers of 2, which the article
> implies can be solved using the much faster algorithm) or otherwise to be
> resistant against certain cryptographic attacks.

Why can't those primes be calculated on the fly? Checking whether a number is
within a certain distance of a power of two is well within the programming
capability of a freshman CS student.
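That freshman-level check might look like the sketch below (the margin is an
illustrative threshold, not a standard value; and note the paper's whole point
is that its trapdoor primes are _not_ detectable by simple structural tests
like this one):

```python
def near_power_of_two(p: int, margin_bits: int = 20) -> bool:
    """Flag numbers whose distance to the nearest power of two fits in
    far fewer bits than the number itself (e.g. Mersenne-like primes)."""
    k = p.bit_length()
    # p lies between 2**(k-1) and 2**k; take the smaller distance.
    dist = min(p - (1 << (k - 1)), (1 << k) - p)
    return dist.bit_length() <= margin_bits

# 2**127 - 1 is a Mersenne prime, trivially close to a power of two.
assert near_power_of_two(2**127 - 1)
# A mid-range 126-bit number is nowhere near either neighboring power.
assert not near_power_of_two((1 << 127) // 3)
```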

> I don't know enough about DH to know if that's really the case here or if
> it's merely done to avoid having to compute fresh large primes, but I'm
> going to guess there's a good reason for it.

Given the incentives involved, I see strong reason to believe this is not the
case.

~~~
eridius
> _Why can 't those primes be calculated on the fly? Checking a number for
> being within a certain distance of a power of two is well within the
> programming capability of a freshman CS student._

You're assuming that that's the only criterion for a bad prime, but as this
article points out, it's not.

> _Given the incentives involved, I see strong reason to believe this is not
> the case._

Incentives where? The incentive of nearly everybody involved in making crypto
is to be secure. The NSA wants to be able to break crypto, but the NSA didn't
write the software that uses these primes (e.g. Apache).

------
RRRA
So much discussion about which crypto to use, when all of it relies on a
broken CA system that anyone in power can abuse... This should be priority
#1.

------
tptacek
The bottom line on this isn't that NSA can hide backdoors in crypto (we
already knew that). It's that ensuring clean parameters for DH is comparably
difficult to ensuring clean parameters for elliptic curve: you need "nothing
up my sleeves" prime numbers for DH the same way you need "nothing up my
sleeves" coefficients for curves.

What this really is, is one less reason to use conventional Diffie-Hellman
over Elliptic Curve Diffie-Hellman.
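One common way to get "nothing up my sleeves" parameters is to derive the raw
material from a public, auditable seed so anyone can re-run the derivation. A
toy sketch of the idea (the function name and seed are illustrative; real
derivations additionally loop, rejecting candidates until the required
mathematical properties, such as primality, hold):

```python
import hashlib

def nums_candidate(label: bytes, counter: int = 0) -> int:
    """Derive parameter material deterministically from a public seed.

    Because the seed and procedure are public, a skeptic can recompute
    the value and confirm no hidden choice was smuggled in. The counter
    supports a retry-until-valid loop.
    """
    data = label + counter.to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# Anyone re-running the derivation gets the identical candidate.
assert nums_candidate(b"example DH prime seed") == \
       nums_candidate(b"example DH prime seed")
# Successive counters give independent-looking candidates to test.
assert nums_candidate(b"example DH prime seed", 0) != \
       nums_candidate(b"example DH prime seed", 1)
```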

~~~
Zigurd
It is one step worse than that: The evil prime numbers can be undetectable.
That's what's new in the paper the article is based on.

~~~
tptacek
That has long been the concern about elliptic curve parameters as well.
(Presumably, we agree: ECDH > DH).

------
lasermike026
Ahem, no. The NSA should spend its time protecting systems and strengthening
encryption instead of this nonsense. Yes, yes, code breaking is their mandate.
Here's a new mission for them.

~~~
Zigurd
That's correct in that we get what we pay for. Right now we are paying tens of
billions for pervasive surveillance and tens of millions to enhance security.

------
nickysielicki
This is why there have long been questions about NIST ECC constants:

[https://www.schneier.com/blog/archives/2013/09/the_nsa_is_br...](https://www.schneier.com/blog/archives/2013/09/the_nsa_is_brea.html#c1675929)

(
[https://en.wikipedia.org/wiki/Bruce_Schneier](https://en.wikipedia.org/wiki/Bruce_Schneier)
)

~~~
vabmit
Remember how differential cryptanalysis was discovered by the academic
community, and the DES S-boxes, which pre-date that discovery, were
subsequently found to have been designed specifically to make DES resistant
to that attack?

------
pm24601
It would have been nice if the article had spent a paragraph or two on what
constitutes a trapdoor prime. The only thing I saw was that trapdoor primes
are close to a power of two.

All tease, no delivery.

------
seibelj
If you are extremely paranoid about security, you generate your own primes and
do a lot of (supercomputer-scale) computation to verify their strength. I have
not read this whole paper yet, but there seems to be growing concern about
widely used primes published by authorities without proof of how they came up
with the numbers.

~~~
vabmit
The problem is that, since the qualities of these special semiprimes that
allow SNFS to work on them are not well understood, there is no easy way to
test whether an attacker could use SNFS rather than GNFS. Additionally,
running SNFS (or GNFS) on a semiprime is not guaranteed to result in a
factorization; an unlucky polynomial choice, for example, is one reason such
an attempt may fail. So you could expend massive resources attempting an SNFS
factorization and still not have a clear answer.

If you go into the Linux kernel (or other software) and look at the randomness
testing and primality testing code, you can see that it is not extremely
complex. Basic checks include things like making sure that a stream of data
from /dev/random is not just a repetition of the pattern 1010101010101010...
Prime candidates are typically checked with trial division routines. After
making it through the basic checks, software will typically do something more
advanced and computationally expensive. Usually, a Miller-Rabin test is run on
the candidate number. That is usually all there is to it. Extensive
verification of cryptographic primitives ("random" large primes, etc) is
typically not feasible.
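The pipeline described here (cheap trial division first, then a probabilistic
Miller-Rabin test) can be sketched roughly as follows; the constants are
illustrative, not the values any particular kernel or library uses:

```python
import random

# Cheap trial-division screen: catches most composites immediately.
SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Trial division followed by Miller-Rabin, as commonly layered."""
    if n < 2:
        return False
    for p in SMALL_PRIMES:
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**r with d odd, for Miller-Rabin.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # 'a' witnessed that n is composite
    return True           # probably prime (error < 4**-rounds)
```

Note what this test does and does not establish: it gives overwhelming
confidence that the candidate is prime, but says nothing about whether the
prime has a hidden special structure of the kind the paper exploits.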

------
zerognowl
This is why you _must_ use Honey Encryption
[https://en.wikipedia.org/wiki/Honey_Encryption](https://en.wikipedia.org/wiki/Honey_Encryption)

