
NSA in P/poly: The Power of Precomputation - evanb
http://www.scottaaronson.com/blog/?p=2293
======
tptacek
_But maybe the right lesson to draw is mod-p groups and elliptic-curve groups
both seem to be pretty good for cryptography, but the mod-p groups are way
less good if everyone is using the same few prime numbers p, and the elliptic-
curve groups are way less good if everyone is using the same few elliptic
curves._

I don't know about the mod-p analysis, but that would be an...
idiosyncratic... conclusion to draw about modern elliptic curve cryptography.
The trend seems very much in the opposite direction, towards standardization
and reliance on a few very carefully designed curves (optimized for
performance on modern CPUs, and to avoid implementation pitfalls that give
rise to vulnerabilities with older curves).

In fact, drawing any kind of conclusion about the diversity of fields in use
seems a little strange. Yes, if you're going to use a 1024-bit prime group, you
definitely want to be using one very few others use. _But just don't use
1024-bit prime groups_. It doesn't much matter how many people use your
(properly selected) 2048-bit group; no amount of precomputation puts it within
reach of attackers.

~~~
pdkl95
> towards standardization and reliance on a few very carefully designed curves

That was the subject of djb's recent talk[1]. He brings up the very good point
that we don't know that a curve wasn't designed to have some weakness that its
creator has kept hidden. This includes the case of a "random", "unbiased"
generation method, which only moves the attack point back one step.

[1]
[https://news.ycombinator.com/item?id=9568659](https://news.ycombinator.com/item?id=9568659)

~~~
jeffreyrogers
djb and Tanja Lange created this site[1] that catalogs the commonly used
curves and lists which ones are thought to be safe.

[1]: [http://safecurves.cr.yp.to/](http://safecurves.cr.yp.to/)

------
mkempe
Whenever the NSA deliberately lets people and companies in the USA continue to
use a broken system for "secure" communications or transactions, they are
putting all of them at risk instead of helping protect them -- in direct
contradiction with the NSA's official charter.

Many other nations have the financial and technical ability to perform the
same attacks, and the inclination to do so. And soon, other entities will,
too.

------
hnolable
OTR anyone?

[https://news.ycombinator.com/item?id=7252159](https://news.ycombinator.com/item?id=7252159)

------
mortehu
Am I correct in assuming that if I have generated a 2048-bit dhparam.pem file
which I pass to my web server, I am not using one of those "same few primes"?

~~~
meowface
Yes. And even if you did happen to be sharing a 2048-bit prime number with
every other server in the world, it would still be essentially impossible for
the NSA to crack in our lifetime (without massive advancements in quantum
computing, at least). 1024-bit is feasible for a powerful nation-state;
2048-bit is not.
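
If you want your own group rather than a shared one, here's a minimal sketch
using Python's third-party `cryptography` package (my choice of tool for
illustration; `openssl dhparam -out dhparam.pem 2048` does the same job from
the command line):

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import dh

    # Generating fresh 2048-bit DH parameters can take a while.
    params = dh.generate_parameters(generator=2, key_size=2048)

    pem = params.parameter_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.ParameterFormat.PKCS3,
    )

    with open("dhparam.pem", "wb") as f:
        f.write(pem)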

~~~
falcolas
Quick question - how many times more infeasible is it? The article mentions
roughly 7.5 million times from 512 to 1024; what is the rough jump required
for 2048?

My math fu is not up to this at the moment.

~~~
pbsd
2048-bit keys/primes are around 1 billion times harder than 1024.
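
As a back-of-the-envelope check, here's the standard sub-exponential L[1/3]
cost estimate for the number field sieve / index calculus (the o(1) term is
dropped, so treat the ratios as order-of-magnitude only):

    import math

    def work_factor(bits):
        """Heuristic NFS cost L_n[1/3, (64/9)^(1/3)], o(1) term dropped."""
        ln_n = bits * math.log(2)
        c = (64 / 9) ** (1 / 3)
        return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

    print(f"512 -> 1024: ~{work_factor(1024) / work_factor(512):.1e}x")    # ~7e6
    print(f"1024 -> 2048: ~{work_factor(2048) / work_factor(1024):.1e}x")  # ~1e9

That reproduces both the article's ~7.5-million-times figure for 512 -> 1024
and the roughly-a-billion figure for 1024 -> 2048.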

------
higherpurpose
> _A third solution is to migrate to elliptic-curve cryptography (ECC), which
> as far as anyone knows today, is much less vulnerable to descent attacks
> than the original Diffie-Hellman scheme. Alas, there’s been a lot of
> understandable distrust of ECC after the Dual_EC_DRBG scandal, in which it
> came out that the NSA backdoored some of NIST’s elliptic-curve-based
> pseudorandom generators by choosing particular elliptic curves that it knew
> how to handle.
>
> But maybe the right lesson to draw is mod-p groups and elliptic-curve groups
> both seem to be pretty good for cryptography, but the mod-p groups are way
> less good if everyone is using the same few prime numbers p, and the elliptic-
> curve groups are way less good if everyone is using the same few elliptic
> curves. (A lot of these things do seem pretty predictable with hindsight, but
> how many did you predict?)_

And that's why we need browsers to support curves other than NIST's P-256 and
such. I know Chrome intends to, and I imagine Firefox isn't far behind. What's
Microsoft's plan for the Edge browser regarding this? I haven't seen them say
anything about it in all of their recent Edge-related posts.

------
netheril96
Can anyone explain to me why some implementations only support 512-bit or
1024-bit parameters? Aren't the algorithms the same for all sizes? Why can't a
given implementation handle arbitrarily large parameters?

~~~
alexbecker
Probably because, for performance reasons, much of this is hard-coded: buffers
of a fixed length, etc.
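
A toy Python illustration of the fixed-length-buffer point (the value below is
just a placeholder, not a real prime):

    # A 2048-bit value will not fit in a buffer sized for 1024-bit (128-byte)
    # operands, so code written around fixed-size buffers simply can't hold it.
    p_2048 = (1 << 2047) | 1  # placeholder 2048-bit value

    try:
        p_2048.to_bytes(128, "big")   # 128 bytes = 1024 bits
    except OverflowError:
        print("2048-bit value does not fit a 1024-bit buffer")

    ok = p_2048.to_bytes(256, "big")  # fine once the buffer matches the size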

~~~
pdkl95
"Performance reasons" is one of the better excuses to use if you want to force
a committee to approve a weakened version of a standard. It has the air of
"meeting everyone's needs" and it is unlikely that complaints about the
potential weakening (or downgrade attack) will be listened to.

~~~
CHY872
Probably a good idea to see if the performance issues are real, though.
Modular arithmetic _is_ really expensive; to do the two exponentiations used
in (non-elliptic) 2048-bit DH, you'll need 6 million cycles on an Intel chip
at an absolute minimum. That's 5 milliseconds of wall time (at a mobile-
standard 1.2GHz).

Imagine that you've got to load code from 8 different domains for a website.
Then your CPU is working flat out for 40ms. Probably more like 50ms on ARM.

That's a lot of time.

Don't try to discredit those with perf concerns when the operation they're
complaining about is incredibly expensive; they have genuine concerns.
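
If you want to sanity-check that estimate, here's a rough Python benchmark of
a single 2048-bit modular exponentiation (CPython's bignum pow, so it will be
slower than a tuned crypto library; the modulus below is a placeholder, not a
real DH prime):

    import secrets
    import timeit

    p = (1 << 2048) - (1 << 512) - 1   # placeholder 2048-bit odd modulus
    g = 2
    x = secrets.randbits(2048)         # random 2048-bit exponent

    n = 50
    per_call = timeit.timeit(lambda: pow(g, x, p), number=n) / n
    print(f"one 2048-bit modular exponentiation: ~{per_call * 1e3:.2f} ms")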

~~~
pdkl95
I'm not trying to imply that _all_ of these concerns are fake, just that
performance is one of the easier places to hide BULLRUN-style sabotage. All
claims should be checked, of course.

The link in my earlier post to djb's talk also discusses this issue, if you're
interested.

As someone who has written an embedded webserver... including the underlying
TCP/IP layer and the driver for the "supposedly NE2000 compatible" Realtek
chip... on a Z80 clone using only about 4k of flash and ~1.5k-2k of RAM, I'm
sympathetic to real performance limitations. (That device could only handle
one minimum-size packet at a time, on only one socket.)

That said, if you have a 1.2GHz chip, you have enough CPU for crypto. 40ms is
a trivial cost for crypto, especially as you only use DH and pubkey to
negotiate a symmetric key that isn't going to cause the same kind of CPU load.

There isn't anywhere close to a real performance limitation on that kind of
platform, and I would regard any complaint about the performance on a >1GHz
CPU as highly suspicious. When you have 1/100 or even 1/1000 the CPU cycles,
that's something else entirely.

------
ColinWright
A random thought: Perhaps excellent implementations explicitly limit
themselves to a specific key size, like 1024 bits, because then the
implementation can ensure constant-time computations. We know that there are
exotic timing attacks, and these would be more difficult to thwart if we have
to use general implementations to cope with arbitrary length keys.

Is that a reasonable argument for the inertia over key sizes?
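
To make the thought concrete, here's a toy sketch (my own illustration, not
taken from any particular library) of the kind of fixed-size exponentiation
loop a hard-coded key size makes easy. Python ints aren't actually
constant-time, so this only illustrates the control-flow argument:

    def modexp_fixed(g, x, p, bits=1024):
        # Left-to-right square-and-multiply-always over exactly `bits` bits:
        # the loop length and the sequence of big-integer operations do not
        # depend on the secret exponent x.
        r = 1
        for i in reversed(range(bits)):
            r = (r * r) % p
            t = (r * g) % p
            r = t if (x >> i) & 1 else r  # real code needs a constant-time select
        return r

A naive variable-length loop, by contrast, runs a number of iterations that
depends on the exponent, which is one more thing a general implementation has
to paper over.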

~~~
xnull2guest
I don't think so. The push for constant-time implementations is fairly new,
whereas the key-size issue has a long history. What I hear all the time in
practice is speed and backwards compatibility.

------
dlitz
Hopefully now OpenSSH (and the SSH RFCs) will drop "diffie-hellman-
group1-sha1" from the default list of allowed KexAlgorithms.

------
natehouk
P=NP!=MAD

