>>>509435952285839914555051023580843714132648382024111473186660296521821206469746700620316443478873837606252372049619334517 * 244624208838318150567813139024002896653802092578931401452041221336558477095178155258218897735030590669041302045908071447
509435952285839914555051023580843714132648382024111473186660296521821206469746700620316443478873837606252372049619334517n * 244624208838318150567813139024002896653802092578931401452041221336558477095178155258218897735030590669041302045908071447n
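For comparison, a Python sketch of the same multiplication (Python ints are arbitrary-precision by default). The two factors below are copied from the comment above; the assertions only check properties the published RSA-240 number is known to have:

```python
# The two published 120-digit prime factors of RSA-240, as quoted above.
p = 509435952285839914555051023580843714132648382024111473186660296521821206469746700620316443478873837606252372049619334517
q = 244624208838318150567813139024002896653802092578931401452041221336558477095178155258218897735030590669041302045908071447

n = p * q
# RSA-240 is named for its 240 decimal digits; it is 795 bits long.
assert len(str(n)) == 240
assert n.bit_length() == 795
print(n)
```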
You can use Maxima's built-in command "factor" on this number as well, although I guess you'll have to be very patient.
I obviously also want to promote my own project, which is a different UI on top of Maxima:
"Bignums—arbitrary precision integer arithmetic—were added [to MacLisp] in 1970 or 1971 to meet the needs of Macsyma users."
[https://www.dreamsongs.com/Files/HOPL2-Uncut.pdf, P. 10 bottom]
The more fun academic contribution here is that they found a faster sieve: http://cado-nfs.gforge.inria.fr. It's very neat stuff; I think they claim ~30% fewer CPU-years when applying this same software to RSA-768 on identical hardware.
As with the original list, it's just unsolved problems that may help reveal more and more efficient ways to find/generate primes.
Are you sure? My reading was "in 70% of the time it took to crack RSA-768, using their hardware, we cracked RSA-240". This would match up relatively well with the fact that the expected difficulty increase was x2.25, and then they say "Taking this into account, and still using identical hardware, our computation was 3 times faster than the expected time that would have been extrapolated from previous records."
Corroborating this, the previous record took ~4000+1500+900+200=6600 core-years and the current one 4000. The Xeon Gold 6130 used has a single-thread Passmark of 1754, while the Xeon E5-2660 used in the RSA-768 factorization scored 1471, from . Normalizing for Passmark, the new RSA-240 factorization took 72.3% as much compute as the RSA-768 factorization. I would not be at all surprised if their software saw smaller gains through the generations than Passmark did, and it wouldn't take much to bring 72.3% to 70%.
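The normalization above works out as follows (all figures as quoted in the comment, not independently verified):

```python
# Rough normalization of the two records by single-thread Passmark score.
rsa768_core_years = 4000 + 1500 + 900 + 200   # = 6600, on Xeon E5-2660 (Passmark 1471)
rsa240_core_years = 4000                      # on Xeon Gold 6130 (Passmark 1754)

relative = (rsa240_core_years * 1754) / (rsa768_core_years * 1471)
print(f"{relative:.1%}")  # ~72.3%
```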
Note: factoring RSA-240 only took 900 core-years; the additional 3100 core-years were spent on the discrete log.
RSA-240 is 8 digits longer; it takes ~2x as much work for each 5.5 additional digits, so this should have taken ~2.7x as much compute.
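Plugging in the numbers (RSA-240 has 240 decimal digits, the previous factoring record RSA-768 has 232):

```python
# Rule of thumb quoted above: GNFS work roughly doubles per ~5.5 extra decimal digits.
extra_digits = 240 - 232
factor = 2 ** (extra_digits / 5.5)
print(f"{factor:.2f}x")  # ~2.7x
```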
What is hard is factoring, i.e. finding the two primes p and q that were multiplied to produce pq. (In fact RSA relies on it being easy to pick two random primes, but hard to recover them from their product.)
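A minimal sketch of that key-generation step, using a Miller-Rabin probable-prime test (toy 64-bit primes for illustration only; real RSA uses primes of 1024+ bits and a vetted crypto library):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    # Write n - 1 as 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def random_prime(bits):
    """Pick random odd candidates with the top bit set until one is (probably) prime."""
    while True:
        c = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(c):
            return c

p, q = random_prime(64), random_prime(64)
print(p * q)  # easy to compute; recovering p and q from it is the hard direction
```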
Most systems and applications currently use at least 2048-bit keys, which are still considered secure.
If someone still uses RSA under 2048-bit, I'd urge them to upgrade their keys or switch to Elliptic Curve if the application can handle it.
It's worth keeping in mind that this isn't a linear progression. The thing that breaks RSA-2048 might very well break all of RSA; I certainly wouldn't flag anyone for using it instead of 4096-bit RSA.
It supposes that the NSA might plausibly break the handful of fixed groups, because this would let them passively read the master keys (and thus all the data) for all traffic using these DH groups, which at the time they wrote that was a sizeable fraction of all IPSec and HTTPS traffic. (For HTTPS, the paper shows both the fraction affected passively and the larger fraction that could be coerced by an active attacker at the risk of detection; the NSA's policy objectives mean it prefers not to risk being detected.)
In contrast, with RSA keys each user has their own key and might rotate it periodically. If you spent just $1M to break my RSA key and I then replace it every 60 days, you're spending $6M per year _just on attacking me_, and typical estimates are _far_ higher than $1M. You also need to do this fast: if you amortize discovery so that recovering a new key takes 100 days, you can only begin to read things, on average, 70 days after they are sent.
For the web, which this paper is especially concerned with, 1024-bit RSA isn't a thing any more, and even if it were, RSA key agreement also isn't a thing any more. So the NSA has much less to gain from an attack on RSA (even at 1024-bit) than from one on DHE.
Also, notably, the SHA-1 break happened at a time when SHA-1 was still widely used, and Google engineers were part of the discussions about SHA-1 deprecation. RSA-1024 is basically dead already.
I don't think any of them would want the negative publicity that could arise.
But this factorization only took a thousand core years.
Millions of servers averaging one donated core each can hit a thousand thousand core years in a few weeks.
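As a back-of-the-envelope check (the pool size and duration here are my own illustrative assumptions, not figures from the thread):

```python
# Assumed figures: 10 million donated cores running for 5 weeks.
cores = 10_000_000
weeks = 5
core_years = cores * weeks * 7 / 365.25
print(core_years)  # roughly a million ("a thousand thousand") core-years
```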
An additional small detail that makes it somewhat more confusing than it should be: some challenge numbers (like this one, RSA-240) have the decimal, rather than binary, size in their name.
2^((1024 - 795)/5.5) = 3.4e12
Isn't that over 3.4 trillion times harder?
2^((309 - 240) / 5.5) = 5978
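The two estimates upthread come from the same formula applied to different units; a quick side-by-side sketch (assuming the doubling-per-5.5 rule of thumb is meant for decimal digits, so the digit-based figure is the applicable one):

```python
# RSA-1024 has 309 decimal digits; RSA-240 has 240 digits and is 795 bits.
ratio_digits = 2 ** ((309 - 240) / 5.5)   # rule applied to decimal digits
ratio_bits = 2 ** ((1024 - 795) / 5.5)    # same formula applied to bit counts instead
print(ratio_digits)  # ~5978
print(ratio_bits)    # ~3.4e12
```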
AFAICT this mailing-list post is the original public announcement, but perhaps the submitter followed up with the Schneier link in anticipation of concerns like yours.