The Factorization of RSA-240 (nodak.edu)
130 points by hsnewman 2 days ago | 35 comments





Wolfram Alpha won't do the multiplication for me. Google does it but I only get the 7 most significant digits. Any other online ways to check this?

https://www.google.com/search?q=5094359522858399145550510235...

EDIT: This one works: http://www.javascripter.net/math/calculators/100digitbigintc...


Python: it has arbitrary-precision integers out of the box.

>>> 509435952285839914555051023580843714132648382024111473186660296521821206469746700620316443478873837606252372049619334517 * 244624208838318150567813139024002896653802092578931401452041221336558477095178155258218897735030590669041302045908071447
124620366781718784065835044608106590434820374651678805754818788883289666801188210855036039570272508747509864768438458621054865537970253930571891217684318286362846948405301614416430468066875699415246993185704183030512549594371372159029236099


Your browser can do it, too; paste this into the JS console (the trailing `n` makes these BigInt literals):

509435952285839914555051023580843714132648382024111473186660296521821206469746700620316443478873837606252372049619334517n * 244624208838318150567813139024002896653802092578931401452041221336558477095178155258218897735030590669041302045908071447n


I will take the opportunity to promote Maxima, which is a very nice open-source maths tool with a very long history (started in the 1960s, and still being developed). It leverages the bignum support in Common Lisp, and has no problems handling numbers like these.

http://maxima.sourceforge.net/

https://wxmaxima-developers.github.io/wxmaxima/

You can use Maxima's built-in command "factor" on this number as well, although I guess you'll have to be very patient.

I obviously also want to promote my own project, which is a different UI on top of Maxima:

https://flathub.org/apps/details/com.dhsdevelopments.Climaxi...


Bignums are there because of Macsyma (Maxima's ancestor)!

"Bignums—arbitrary precision integer arithmetic—were added [to MacLisp] in 1970 or 1971 to meet the needs of Macsyma users."

[https://www.dreamsongs.com/Files/HOPL2-Uncut.pdf, P. 10 bottom]




Interesting, RSA-230 was factored a little more than 1 year ago.

Does anyone here care to explain what impact this will have on modern crypto security systems?

Zero. If you're using anything with 800-bit security, you're probably a time traveler from the 90s.

The more fun academic contribution to this is they found a faster sieve, here http://cado-nfs.gforge.inria.fr. It's very neat stuff, I think they claim 30% less CPU years using the identical hardware to crack RSA-768, when applying this same software to RSA-768.

As with the original list, it's just unsolved problems that may help reveal more and more efficient ways to find/generate primes.


> I think they claim 30% less CPU years using the identical hardware to crack RSA-768, when applying this same software to RSA-768.

Are you sure? My reading was "in 70% of the time it took to crack RSA-768, using their hardware, we cracked RSA-240" - this would match up relatively well with the fact that the expected difficulty increase was x2.25 and then they say "Taking this into account, and still using identical hardware, our computation was 3 times faster than the expected time that would have been extrapolated from previous records."

Corroborating this, the previous record took ~4000+1500+900+200=6600 core years and the current one 4000. The Xeon Gold 6130 used has a single-thread Passmark score of 1754, while the Xeon E5-2660 used in the RSA-768 factorization scored 1471, from [0]. Normalizing for Passmark, the new RSA-240 factorization took 72.3% as much compute as the RSA-768 factorization. I would not be at all surprised if their software saw smaller gains through the generations than Passmark did, and it wouldn't take much to bring 72.3% to 70%.

0: https://www.cpubenchmark.net/compare/Intel-Xeon-Gold-6130-vs...
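The normalization described above is simple enough to reproduce; a quick sketch, using only the core-year and Passmark figures quoted in the parent comment:

```python
# Figures as quoted in the comment above.
rsa768_core_years = 4000 + 1500 + 900 + 200   # previous record, ~6600 total
rsa240_core_years = 4000                      # current record (incl. discrete log)
passmark_gold_6130 = 1754                     # single-thread, new hardware
passmark_e5_2660 = 1471                       # single-thread, old hardware

# Normalize by per-core speed: a core-year on faster cores is more compute.
relative_compute = (rsa240_core_years / rsa768_core_years) \
    * (passmark_gold_6130 / passmark_e5_2660)
print(f"{relative_compute:.1%}")  # ~72.3%
```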


I agree, this was 3x faster than expected.

Note: factoring RSA-240 only took 900 core-years; the additional 3100 core-years were spent on the discrete log.

RSA-240 is 8 decimal digits longer than RSA-768 (which has 232 digits); it takes ~2x as much work for each 5.5 additional digits, so this should have taken ~2.7x as much compute.


Small nitpick: finding/generating primes is already very efficient/easy. E.g. if you want a 1000-digit prime, about one in every 2300 numbers in that range is prime (by standard heuristics based on the prime number theorem), so if you try a few thousand random numbers and test them for primality (which is fast too), you'll have found a prime with high probability -- and this is even without trying to avoid obviously non-prime numbers like multiples of 2 or 3.

What is hard is factoring, i.e. finding the precise two primes p and q that were multiplied to result in pq. (In fact RSA relies on being able to pick two random primes easily, but which are hard to recover from their product.)
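As a concrete illustration of how cheap the generate-and-test approach described above is, here is a minimal sketch (the helper names are mine, not from the thread) using the standard Miller-Rabin probabilistic primality test:

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True

def random_prime(digits):
    """Try random odd numbers of the given decimal size until one is prime."""
    while True:
        candidate = random.randrange(10 ** (digits - 1), 10 ** digits) | 1
        if is_probable_prime(candidate):
            return candidate
```

Since roughly one candidate in ln(10^d) ≈ 2.3·d is prime, `random_prime` needs only a few thousand attempts even for 1000-digit numbers.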


It's more than a small nitpick since if we depended on factorization to find primes, RSA wouldn't be a practical cryptosystem to begin with.

They have factorized another challenge number [1]. It doesn't have any surprising impact. As computational power grows, it becomes easier and easier to factorize semi-primes (ignoring quantum computers). So either we have to use larger semi-primes in RSA, or use a different kind of algorithm like elliptic curves.

[1] https://en.wikipedia.org/wiki/RSA_numbers


It means we're getting closer to breaking 1024-bit RSA, as they basically broke a 795-bit RSA modulus.

Most systems and applications currently use at least 2048-bit keys and are still considered secure.

If someone still uses RSA under 2048-bit, I'd urge them to upgrade their keys or switch to Elliptic Curve if the application can handle it.


I think there's a sort of general folk consensus among cryptographers that a state-level adversary that wanted to break 1024-bit RSA has already done so. Eran Tromer costed out a hardware-optimized break of RSA-1024 inside the budget of many tech companies, and that was over a decade ago.

It's worth keeping in mind that this isn't a linear progression. The thing that breaks RSA-2048 might very well break all of RSA; I certainly wouldn't flag anyone for using it instead of 4096-bit RSA.


This paper basically says the NSA is doing it[1]. They estimate the price of hardware, and correlate it with the NSA's budget and capabilities listed in slides Edward Snowden leaked.

[1] https://weakdh.org/imperfect-forward-secrecy.pdf


It's largely talking about fixed DH groups rather than RSA.

It supposes that the NSA might plausibly break the handful of fixed groups because this would let them passively read the master keys (and thus all the data) for all traffic using these DH groups, which at the time they wrote that was a sizeable fraction of all IPsec and HTTPS traffic. (For HTTPS the paper shows the fraction affected passively and the larger fraction that could be coerced by an active attacker at the risk of detection; the NSA's policy objectives mean it prefers not to risk being detected.)

In contrast, with RSA each user will have their own key and might rotate it periodically. If you spent just $1M to break my RSA key and I replace it every 60 days, you're now spending $6M per year _just on attacking me_ - and typical estimates are _far_ higher than $1M. You also need to do this fast: if you amortize discovery so that you take 100 days to recover new keys, you can only begin to read things, on average, 70 days after they are sent.

For the web, which this paper is especially concerned with, 1024-bit RSA isn't a thing any more, and even if it was RSA key agreement also isn't a thing any more. So the NSA has much less to gain from an attack on RSA (even if it was still 1024-bit) than DHE.


A 1024-bit break seems like the kind of thing Google would do to show off, like they have with a few other projects. Kind of surprised we haven't seen something along those lines.

Google did such a thing with the SHA-1 collision. However, RSA-1024 is still challenging even with a Google budget. Maybe they could do it, but likely it's still outside the "we'd do it just to prove a point" range.

Also, notably, the SHA-1 thing was done at a time when SHA-1 was still widely used and Google engineers were part of discussions about SHA-1 deprecation. RSA-1024 is basically dead already.


The cost-effective ways to do it might require making custom hardware from which Google couldn't derive any other benefit, and perhaps from which it wouldn't even derive very much insight that was useful to its business.

The hyperscalers all have lots of idle compute. I'm not sure how many cores they have, but I believe the big three cloud services operate on the order of millions of servers, most of which now have tens of cores. If a 1024-bit key is proportionally more complex to solve, it seems they could solve it with idle compute.

I don't think any of them would want the negative publicity that could arise.


Surely it's at least a different power bill on modern hardware depending on what the hardware is computing, right?

Unless I'm dumb, the 2.25x increased hardness from 768 bits to 795 bits would suggest a 2.25^(1/(795 - 768)) hardness increase per bit. That is roughly 1.0305. Taking 1.0305^(1024-795) yields something like a 970-fold increase.

A thousandfold increase is in the right range.

But this factorization only took a thousand core years.

Millions of servers averaging one donated core each can hit a thousand thousand core-years in a matter of months.
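The per-bit extrapolation above is easy to reproduce:

```python
# Reproducing the extrapolation in the comment above:
# hardness grew 2.25x over the 27 bits from RSA-768 to RSA-240 (795 bits).
per_bit = 2.25 ** (1 / (795 - 768))     # growth per modulus bit, ~1.0305
scale_to_1024 = per_bit ** (1024 - 795)  # extrapolated from 795 to 1024 bits
print(per_bit, scale_to_1024)            # ~1.0305 and ~970
```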


I take it as a... hmm, not a lie of omission, but more, like, an omission of "trust us"? Not really sure what words would be best suited to describe this.

It sounds like the kind of thing that would generate only negative publicity.

> It's worth keeping in mind that this isn't a linear progression

An additional small detail that makes it somewhat more confusing than it should be is that some challenge numbers (like this one, RSA-240) have the decimal, rather than binary, size in their name.


a 1024-bit RSA number is ~5000 times harder (each additional 5.5 digits doubles the difficulty[1])

[1] https://www.mersenneforum.org/showpost.php?p=531861&postcoun...


Maybe I am missing something.

2^((1024 - 795)/5.5) = 3.4e12

Isn't that over 3.4 trillion times harder?


I believe the "digits" above refers to decimal digits, not binary digits. RSA-240 has 240 decimal digits. A 1024-bit number should have 309 digits. So:

2^((309 - 240) / 5.5) = 5978
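Since the confusion here is decimal versus binary digits, a quick check of both steps (using the 5.5-digits-per-doubling rule from the thread):

```python
from math import log10

bits = 1024
decimal_digits = int(bits * log10(2)) + 1          # a 1024-bit number: 309 decimal digits
hardness_vs_rsa240 = 2 ** ((decimal_digits - 240) / 5.5)
print(decimal_digits, hardness_vs_rsa240)          # 309 and ~5978
```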


What is going on with the URL? Also, this user submitted this RSA-240 factoring story twice. Seems really phishy that they're using a .exe in the URL.

wa.exe is the web interface for LISTSERV on Windows: https://www.lsoft.com/manuals/16.0/htmlhelp/list%20subscribe...

AFAICT this mailing-list post is the original public announcement, but perhaps the submitter followed up with the Schneier link in anticipation of concerns like yours.


Thanks for the explanation!


