The algorithm used here is GNFS (general number field sieve), which is the same algorithm that's been used for about two decades. In other words, this has no impact on the security of RSA.
More information: http://en.wikipedia.org/wiki/Integer_factorization_records
I don't know the details of the algorithm, but can we assume that Google could factor 20 more bits?
I wrote up some speculation, based on how difficult the papers are to read and understand, before this post is gone.
But are you saying Google could factor all that... if they took all their servers that they ordinarily use for their business, turned their business off, and instead used them to factor an RSA key?
EDIT: Wikipedia article says 696 bits.
> Mon Sep 23 11:09:41 2013 commencing Lanczos iteration (32 threads)
> Mon Sep 23 11:09:41 2013 memory use: 26956.9 MB
> Thu Sep 26 07:17:57 2013 elapsed time 51:56:44
I’m not sure that I’m understanding all the details. Does this mean that they factored a 210-digit number in 52 hours on a single machine?
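For the digit counts being thrown around here: a 210-digit number is at most about 698 bits (digits × log2(10)), which lines up with the 696-bit figure mentioned above. A quick check:

```python
import math

# Decimal digits to bits: each decimal digit carries log2(10) ≈ 3.32 bits.
digits = 210
bits = digits * math.log2(10)
print(bits)  # ≈ 697.6, an upper bound; the actual 210-digit number here is 696 bits
```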
The square root step is where the 52 hours comes from. That time is essentially zero in relation to the whole process.
You can also see that the last 1033963 - 1026631 = 7332 iterations of Lanczos (on a 64Mx64M ~24GB matrix!) took 15 hours. We can then estimate that the matrix step required around 3 months on that particular setup.
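The arithmetic behind that ~3 months estimate can be checked with a quick back-of-the-envelope script (iteration counts taken from the log figures quoted above; assuming the per-iteration rate of the 15-hour tail held for the whole run):

```python
# Back-of-the-envelope check of the "~3 months for the matrix step" estimate.
last_iters = 1033963 - 1026631   # iterations covered by the 15-hour tail
tail_hours = 15.0
total_iters = 1033963            # iteration count at the end of Lanczos

hours_per_iter = tail_hours / last_iters
total_hours = hours_per_iter * total_iters
print(last_iters)               # 7332
print(round(total_hours))       # ~2115 hours
print(round(total_hours / 24))  # ~88 days, i.e. roughly 3 months
```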
We don't know anything about the sieving, but I'd venture it took significantly more computing power than the matrix, as is usually the case.
Knowing when you've got enough relations is an inexact science, since there will be some duplicates among relations generated on separate machines. So it's not uncommon for a final bit of sieving to be required once msieve is pointed at the combined relations file (which is what you see in the log).
It's the block Lanczos and matrix square root steps that cannot be parallelised efficiently.
Interest in the various Number Field Sieve algorithms (SNFS/GNFS) is one of the main reasons I started (and finished!) a part time degree in Mathematics to go along with my Comp Sci degree. Next on the list (after a break from 8 years of part time study) will be the M.Sc. courses.
1. It can be parallelised, but not efficiently: throwing 10 similar machines at the problem does not get the job done in a tenth of the time, only in ~95% of the time. No efficient algorithm for this exists. Yet. If one does appear, then RSA starts to look very, very shaky.
2. The first of which (M820 The Calculus of Variations and Advanced Calculus) can be downloaded from here:
If we extrapolate from a couple of days, each doubling of runtime buys one more bit: 4 days +1 bit, 8 days +2, 16 days +3, 32 +4, 64 +5, 128 +6, 256 +7, 512 days (1.4 years) +8, 2.8 yrs +9, 5.6 yrs +10.
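That doubling argument can be sketched directly (a minimal illustration; the 2-day baseline is the figure assumed above):

```python
import math

# Each doubling of runtime buys one extra bit, so days of computation at a
# fixed rate translate into extra bits as log2(days / baseline_days).
baseline_days = 2.0

for days in [4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]:
    extra_bits = math.log2(days / baseline_days)
    print(f"{days:5d} days (~{days / 365:.1f} yr): +{extra_bits:.0f} bits")
```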
By that time again whatever is sitting there is obsolete.
Still, we're up to 733 bits. If we assume some massive growth and large economies of scale it is quite conceivable that $300B gets you a 10,000,000x increase on the bang per buck based on economies of scale alone (23 bits) working with today's technology; or that by waiting, within 5 years breakthrough technology would cause another 1,000,000 fold increase (call it another 20 bits). We are now up to 776 bits. That is just 248 bits away from 1024 bits:
If we make ALL of the above assumptions, and you throw $300B at the problem for 5 years and get to experience 1 million fold better technology and also a ten million fold better price than the commodity demonstration, you can brute force
1 / 452312848583266388373324160190187140051835877600158453279131187530910662656th of the keyspace.
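That fraction checks out: 776 attainable bits out of 1024 leaves 2^248 of the keyspace untouched, and the long denominator above is exactly 2^248 (a quick sanity check):

```python
# Sanity check: the huge denominator quoted above is 2**(1024 - 776).
denominator = 452312848583266388373324160190187140051835877600158453279131187530910662656
assert denominator == 2 ** 248
print(len(str(denominator)))   # 75 digits
```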
Thus I would say that, on the basis provided, this demonstration is NO threat from "advancing technology".
According to this article:
"For example, the security available with a 1024-bit key using asymmetric RSA is considered approximately equal in security to an 80-bit key in a symmetric algorithm (Source: RSA Security)."
"As of 2003 RSA Security claims that 1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys and 3072-bit RSA keys to 128-bit symmetric keys. RSA claims that 1024-bit keys are likely to become crackable some time between 2006 and 2010 and that 2048-bit keys are sufficient until 2030. An RSA key length of 3072 bits should be used if security is required beyond 2030. NIST key management guidelines further suggest that 15360-bit RSA keys are equivalent in strength to 256-bit symmetric keys."
As such my post contains very grave misinformation and should be disregarded!
The analysis in it applies to symmetric cipher key size.
You might also think in terms of the security level that a 1024-, 2048-, or 4096-bit key gets you. It isn't 1024 bits for a 1024-bit RSA key!
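A rough way to see this is to plug modulus sizes into the heuristic GNFS running-time estimate L_n[1/3, (64/9)^(1/3)] and convert the work to bits. This is a simplification (it drops the o(1) term, so the numbers come out somewhat above the quoted RSA/NIST equivalences), but it shows the shape of the relationship:

```python
import math

def gnfs_security_bits(modulus_bits):
    """Rough symmetric-equivalent security of an RSA modulus, from the
    heuristic GNFS cost exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)).
    The o(1) term is dropped, so this slightly overestimates."""
    ln_n = modulus_bits * math.log(2)
    work = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return work / math.log(2)   # convert natural-log-scale work to bits

for bits in (1024, 2048, 3072):
    print(f"{bits}-bit RSA  ~  {gnfs_security_bits(bits):.0f}-bit symmetric")
```

The results land in the high 80s for 1024 bits, close to the ~80-bit figure RSA Security quotes once the o(1) term and sieving constants are accounted for.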
1024-bit keys are still in common use, and more people are switching to 2048-bit. You're still fine.