
I agree with pushing elliptic curves, but I don't like the way they get there.

First, some historical corrections: the first L[1/2] algorithm for either factorization or discrete log was CFRAC, presented by Morrison and Brillhart in 1975 [1]. This was the first time an L[1/2] algorithm was formalized; descriptions of the basic method go back to the 1930s and the work of Lehmer and Powers [2] and Kraitchik. I presume 1979 refers to Adleman's subexponential discrete log paper [3].
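
For anyone not used to the notation: L_N[a] is shorthand for exp((c + o(1)) * (ln N)^a * (ln ln N)^(1-a)), so the jump from a = 1/2 to a = 1/3 is the whole point. Here is a rough back-of-the-envelope sketch of my own (not from any of the papers); it drops the o(1) term and uses the textbook constants, so the numbers are only indicative:

    import math

    def log2_L(n_bits, alpha, c):
        # log2 of L_N[alpha, c] = exp(c * (ln N)^alpha * (ln ln N)^(1-alpha)),
        # with the o(1) term dropped, so indicative only.
        ln_n = n_bits * math.log(2)   # ln N for an n_bits-bit N
        return c * ln_n**alpha * math.log(ln_n)**(1 - alpha) / math.log(2)

    bits = 1024
    print(round(log2_L(bits, 1/2, math.sqrt(2))))   # CFRAC-style L[1/2]: ~139
    print(round(log2_L(bits, 1/3, (64/9)**(1/3))))  # NFS-style  L[1/3]:  ~87

The absolute numbers mean little without the o(1) term, but the gap between the two lines is the point.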

1984 saw Coppersmith's L[1/3] algorithm for discrete logarithms in GF(2^n) [4]. The trick was that, in GF(2^n), the freshman's dream is true: A^2 + B^2 = (A + B)^2. This makes the polynomials we need to check for smoothness much smaller (for appropriately chosen polynomials), so much so that the overall asymptotic running time decreases.
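
To see that identity concretely, here is a toy Python check of my own (it has nothing to do with Coppersmith's actual algorithm): polynomials over GF(2) are encoded as integer bitmasks, addition is XOR, and multiplication is carry-less.

    def gf2_mul(a, b):
        # Carry-less multiplication of two GF(2)[x] polynomials (bit i <-> x^i).
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            b >>= 1
        return result

    A = 0b1011001   # x^6 + x^4 + x^3 + 1
    B = 0b0110101   # x^5 + x^4 + x^2 + 1

    # (A + B)^2 == A^2 + B^2: the cross term 2*A*B vanishes in characteristic 2.
    assert gf2_mul(A ^ B, A ^ B) == gf2_mul(A, A) ^ gf2_mul(B, B)

The same holds after reduction modulo an irreducible polynomial, i.e. in GF(2^n) itself, where squaring is the Frobenius map.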

Then in 1990 the number field sieve [5] appeared for factorization, also at L[1/3]. You'd think that this had something to do with Coppersmith's 1984 algorithm, right? Wrong. The number field sieve came from independent efforts, started by ElGamal [6], Coppersmith et al. [7], and Pollard [8].

So you can see that Coppersmith's GF(2^n) trick never factored into the number field sieve. The function field sieve [9] is, however, a continuation of Coppersmith's effort in fields of small characteristic (e.g., GF(2^n), GF(3^n), etc.). There are many improvements over the FFS that I'll gloss over here. More recently, this work has been continued by Joux and friends [10,11,12,13], who have basically annihilated sieving for some specific fields; most of the time is now spent on individual logs. Note that even in small characteristic, where such algorithms apply, well-chosen fields will still be fairly secure: for example, GF(2^2039) still has about 77 bits of security, similar to RSA-1024.

There is no reason to believe that the tricks used by Joux carry over to the integers or GF(p), just as the 1984 trick never carried over. We might see an improvement to integer factorization soon, but I seriously doubt these tricks will be at its origin. My guess is that RSA will get a long, drawn-out death like Rivest's other brainchild, RC4.

There are plenty of reasons to prefer elliptic curves over finite fields. Elliptic curves are smaller, faster, safer, and shinier. But as in the finite field case, there are also classes of curves that are cheaper to attack. Did you know that all elliptic curves over GF(2^n) can be solved in subexponential time [14]? Let's move to NTRU! This kind of scare tactic is unproductive.

[1] http://www.ams.org/journals/mcom/1975-29-129/S0025-5718-1975...

[2] http://www.ams.org/journals/bull/1931-37-10/S0002-9904-1931-...

[3] http://cr.yp.to/bib/1979/adleman.html

[4] http://www.enseignement.polytechnique.fr/profs/informatique/...

[5] http://www.iai.uni-bonn.de/~adrian/nfs/lenstra90number.pdf

[6] http://link.springer.com/chapter/10.1007%2F978-1-4684-4730-9...

[7] http://cr.yp.to/bib/1986/coppersmith.html

[8] http://link.springer.com/content/pdf/10.1007/BFb0091536.pdf

[9] http://cr.yp.to/bib/1994/adleman-ffs.html

[10] http://eprint.iacr.org/2012/720

[11] http://eprint.iacr.org/2013/074

[12] http://eprint.iacr.org/2013/095

[13] http://eprint.iacr.org/2013/400

[14] http://eprint.iacr.org/2012/146




This is a self-evidently excellent comment, and thank you for posting it. I want to make a couple of clarifications:

* From the conversations I've had with cryptographers, there does seem to be a sort of consensus that Joux's DLP work is unlikely to result in a viable attack on RSA any time soon, or, for that matter, on any mainstream DLP scheme.

* But: that isn't a uniformly held opinion. Dan Boneh is an example of someone who seems to believe the line of research Joux is pursuing is going to harm RSA "sooner" rather than "later".

* As you note: RSA isn't looking sturdy today. I think the RC4 example is a good one. The recent attacks on RC4 weren't based on a newly discovered vulnerability, but rather on a more diligent effort to exploit an old one. I'm not convinced there was any better reason those attacks weren't unveiled until 2013 than that nobody previously had a strong reason to investigate them.

* A (say) 10-year time scale is still relatively short! If we can reasonably predict material weaknesses in a cryptosystem 10 years out, we should replace it now. We don't generally choose cryptosystems that offer us a margin of just 10 years.

Finally, and most importantly: please understand that the audience at Black Hat isn't cryptographically literate. There are a couple of crypto people who show up every year, but the attendees at Black Hat generally bring very little background in the subject with them. So I think there's value in distilling subtle messages down to actionable information for that audience. A subset of the people in the audience at Black Hat are at some point going to try to develop their own cryptosystems, and they're much more likely to do that by trying to hack together RSA than ECC. The less attractive RSA looks to them, the better.

Thanks again for the corrective comment, though. I sort of wish Tom and Javed could have had you up on stage debating them. :)


Not cryptographically literate, and hacking together cryptosystems? This can only end badly.



