
RSA-210 factored - Mithrandir
http://www.mersenneforum.org/showpost.php?p=354259
======
randomwalker
To clarify, this is not a new factoring record for products of two primes.
RSA-768, a 232-digit number (22 digits longer), was factored in 2009, and that
record still stands.
[http://en.wikipedia.org/wiki/RSA_numbers#RSA-768](http://en.wikipedia.org/wiki/RSA_numbers#RSA-768)

The algorithm used here is GNFS (general number field sieve), which is the
same algorithm that's been used for about two decades. In other words, this
has no impact on the security of RSA.

More information:
[http://en.wikipedia.org/wiki/Integer_factorization_records](http://en.wikipedia.org/wiki/Integer_factorization_records)

~~~
pbsd
While it has no impact on the expected complexity of factoring larger
keys, this kind of thing is informative about the maturity of public factoring
software: a single person with (presumably) access to a number of machines can
pull off this kind of job on their own. Given the number of things that can go
wrong in a complex method like the NFS, this is noteworthy.

~~~
wslh
How many servers does Google have? From search results, it seems to be more
than a million.

I don't know the details of the algorithm, but can we assume that Google
could factor a number 20 bits larger?

~~~
jrochkind1
The answer to "I don't know what I'm talking about, but can I assume..." is
always 'no'.

~~~
wslh
You are wrong: my intuition was correct, the algorithm is quite
parallelizable, and I have worked in the computer security field.

I wrote a speculation, based on the difficulty of reading and understanding
the papers, before this post is gone.

~~~
jrochkind1
See, now you haven't assumed it, you've researched it.

But are you saying Google could factor all that... if they took all their
servers that they ordinarily use for their business, turned their business
off, and instead used them to factor an RSA key?

------
simonster
"RSA-210" is the official name for this challenge, but it is a bit of a
misnomer. The modulus has 210 _decimal_ digits, which is about
210 * log2(10) ≈ 697 bits.

EDIT: Wikipedia article says 696 bits.
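The conversion is easy to sanity-check; a quick check in Python (generic arithmetic, not the actual RSA-210 modulus):

```python
import math

# A d-digit decimal number has roughly d * log2(10) bits.
approx_bits = 210 * math.log2(10)
print(approx_bits)  # ≈ 697.6

# The exact bit count depends on the leading digits: 2**696 itself
# already has 210 decimal digits, so a 210-digit modulus of 696 bits
# (as Wikipedia reports) is consistent.
assert len(str(2 ** 696)) == 210
```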

~~~
trebor
Ah, thank you. I hadn't heard of the challenge and didn't know the size of the
number being factored. So this will mainly affect 1024 bit keys, rather than
2048+ bit keys, I presume.

~~~
jd007
It doesn't really even affect 1024-bit keys, as even the largest RSA keys
factored (this one is close, but isn't the largest) are still quite far from
1024 bits (remember that factoring difficulty grows much faster than linearly
with the number of bits). But 2048-bit keys are recommended, I believe.

~~~
kingkilr
For new keys, 4096 is preferred.

~~~
tptacek
I don't think it's that simple. A 4096 bit key is extremely unlikely to get
you _fewer_ bits of security than a 2048 bit key. But the thing that makes
2048 bit keys a realistic threat is likely to make the "preference" be for
something other than RSA entirely.

------
Mithrandir
In case you've never heard of the challenge:
[https://en.wikipedia.org/wiki/RSA_Factoring_Challenge](https://en.wikipedia.org/wiki/RSA_Factoring_Challenge)

------
gus_massa
From the logs:

 _> Mon Sep 23 11:09:41 2013 commencing Lanczos iteration (32 threads)_

 _> Mon Sep 23 11:09:41 2013 memory use: 26956.9 MB_

 _> [...]_

 _> Thu Sep 26 07:17:57 2013 elapsed time 51:56:44_

I’m not sure I’m understanding all the details. Does this mean that they
factored a 210-digit number in 52 hours on a single machine?

~~~
pbsd
No. What you're seeing in that log is only the last two steps of the number
field sieve: the linear algebra and the square root step. The rest of the
details are missing.

The 52 hours you're seeing is the time of the square root step. That time is
essentially zero relative to the whole process.

You can also see that the last 1033963 - 1026631 = 7332 iterations of Lanczos
(on a 64Mx64M ~24GB matrix!) took 15 hours. We can then estimate that the
matrix step required around 3 months on that particular setup.
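The "~3 months" figure follows by simple proportion from the numbers in the log, assuming the iteration rate stayed roughly constant over the whole run:

```python
# Rough check of the "~3 months" estimate from the log numbers above.
total_iters = 1033963              # final Lanczos iteration count
timed_iters = 1033963 - 1026631    # = 7332 iterations covered by the log
hours_for_timed = 15.0             # wall time for those iterations

total_hours = total_iters / timed_iters * hours_for_timed
print(total_hours / 24)  # ≈ 88 days, i.e. roughly 3 months
```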

We don't know anything about the sieving, but I'd venture it took
significantly more computing power than the matrix, as is usually the case.

~~~
alexkus
The sieving stage can be massively parallel and doesn't require large amounts
of memory; with enough machines you could perform the sieve stage in under a
day (and with more machines, within an hour). A 100,000-node botnet could be
very useful in this regard.

Knowing when you've got enough relations is an inexact science, given that
there will be some duplicates amongst the relations generated on separate
machines, so it's not uncommon for a final bit of sieving to be required once
msieve is pointed at the combined relations file (which is what you see in
the log).

It's the block Lanczos and matrix square root steps that cannot be
parallelised efficiently[1].
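A sketch of why the sieving parallelizes so well: each machine can be handed a disjoint slice of the sieve region (in lattice sieving, a range of special-q values) and the resulting relations merged and deduplicated afterwards. The ranges and worker count below are made up for illustration:

```python
# Minimal sketch of embarrassingly parallel NFS sieving: each worker
# sieves a disjoint range of special-q values and emits relations
# independently; results are merged and deduplicated at the end.
# The numbers here are illustrative, not taken from the RSA-210 job.

def split_q_range(q_min, q_max, n_workers):
    """Partition [q_min, q_max) into n_workers contiguous slices."""
    step = (q_max - q_min) // n_workers
    slices = [(q_min + i * step, q_min + (i + 1) * step)
              for i in range(n_workers)]
    slices[-1] = (slices[-1][0], q_max)  # last worker absorbs the remainder
    return slices

def merge_relations(per_worker_relations):
    """Union the relation sets; duplicates across workers are dropped."""
    merged = set()
    for rels in per_worker_relations:
        merged.update(rels)
    return merged

slices = split_q_range(10_000_000, 50_000_000, 1000)
print(slices[0], slices[-1])  # (10000000, 10040000) ... (49960000, 50000000)
```

The merge step is where the duplicate relations mentioned above disappear, which is why the final relation count is only known after combining the files.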

Interest in the various Number Field Sieve algorithms (SNFS/GNFS) is one of
the main reasons I started (and finished!) a part time degree in Mathematics
to go along with my Comp Sci degree. Next on the list (after a break from 8
years of part time study) will be the M.Sc. courses[2].

1\. It can be parallelised, but not efficiently, i.e. throwing 10 similar
machines at the problem does not get the job done in a tenth of the time, but
only in ~95% of it. No efficient algorithm for this exists. Yet. If one does
appear, then RSA starts to look very, very shaky.

2\. The first of which (M820 The Calculus of Variations and Advanced Calculus)
can be downloaded from here:

    http://chilli.open.ac.uk/dr9/

Yes, the entire course can be downloaded (although it's more of a guided read
of Apostol than a series of lectures).

~~~
vmilner
I'm not aware that M823 (the first of the two number theory courses using
Apostol) can be downloaded.

~~~
alexkus
Yes, sorry, got the courses mixed up.

------
tgb
I was curious as to how the cash prizes [1] for this challenge compare to
playing the lottery. Consider the next smallest one left: RSA-768. By my very
rough estimates [2], $1 worth of computing time on a typical desktop gives you
a probability of 10^-58 of factoring it by picking random numbers smaller
than its square root.

[1]
[https://en.wikipedia.org/wiki/RSA_Factoring_Challenge](https://en.wikipedia.org/wiki/RSA_Factoring_Challenge)
[2]
[https://www.google.com/#q=%241+%2F+(%240.09+%2F+kwh)+%2F+100...](https://www.google.com/#q=%241+%2F+\(%240.09+%2F+kwh\)+%2F+100+watt+*+100e9%2Fsecond+%2Fsqrt\(4120234369866595438555313653325759481798116998443279828454556264338764455652484261980988704231618418792614202471888694925609317763750334211309823974\))

------
DanBC
What hardware is used for the first round of sieving? Is it "just" distributed
computing on normal machines, or are they using special FPGA farms?

------
arange
For those of us who don't understand, what does this mean? Is RSA less secure
now?

~~~
Sanddancer
It's as secure as it ever was; this just shows how advancing technology lets
you factor ever larger numbers. RSA depends on math problems that take a very
long time to solve unless you know one of the factors. What this is saying is
that on a ~$5000 computer, it takes a bit over a couple of days to factor a
697-bit RSA number. This is more a demonstration of why you need to
continually increase RSA key sizes -- at this point, a 1024-bit number is
probably within reach of a three-letter agency.

~~~
logicallee
The implication of the parent is that with a ~ $10000 computer you can take a
couple of days to factor a 698 bit RSA number. Or $20,000 can factor 699 in a
couple of days - $40,000 gets you 700, $80K for 701, $160K 702, $320K 703,
$640K 704, $1.28M 705, $2.56M 706, call it $5M 707, $10M 708, $20M 709, $40M
710, $80M 711, $160M 712, $320M 713, $640M 714, call it $1.2B 715, $2.4B 716,
$4.8B 717, $9.6B 718, $19.2B 719, $38.4B 720, $76.8B 721, $153.6B 722, call it
$300B for 723. We'll stop here because long before reaching this amount you
would have realized massive economies of scale such as running entire plants
making custom chips. Then again, we're talking about what can be done in a
"couple days".

If we extrapolate the "couple of days": at 4 days we can add +1 bit; 8 days,
+2 bits; 16 days, +3; 32, +4; 64, +5; 128, +6; 256, +7; 512 days (1.4 years),
+8; 2.8 yrs, +9; 5.6 yrs, +10.

By that time again whatever is sitting there is obsolete.

Still, we're up to 733 bits. If we assume some massive growth and large
economies of scale it is quite conceivable that $300B gets you a 10,000,000x
increase on the bang per buck based on economies of scale alone (23 bits)
working with today's technology; or that by waiting, within 5 years
breakthrough technology would cause another 1,000,000 fold increase (call it
another 20 bits). We are now up to 776 bits. That is just 248 bits away from
1024 bits:

If we make ALL of the above assumptions, and you throw $300B at the problem
for 5 years and get to experience 1 million fold better technology and also a
ten million fold better price than the commodity demonstration, you can brute
force

1 /
452312848583266388373324160190187140051835877600158453279131187530910662656th
of the keyspace.
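That huge denominator is exactly 2^248, i.e. the 248 bits still missing from 1024 under the comment's own assumptions:

```python
# 776 reachable bits out of 1024 leaves 2**(1024 - 776) = 2**248
# of the keyspace uncovered; the long number above is that power of two.
fraction_denominator = 2 ** (1024 - 776)
print(fraction_denominator)
# 452312848583266388373324160190187140051835877600158453279131187530910662656
```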

Thus I would say that the demonstration shows NO threat from "advancing
technology", on the basis provided.

~~~
logicallee
I stand corrected by the two replies!

According to this article:
[http://en.wikipedia.org/wiki/Key_size](http://en.wikipedia.org/wiki/Key_size)

"For example, the security available with a 1024-bit key using asymmetric RSA
is considered approximately equal in security to an 80-bit key in a symmetric
algorithm (Source: RSA Security)."

"As of 2003 RSA Security claims that 1024-bit RSA keys are equivalent in
strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys
and 3072-bit RSA keys to 128-bit symmetric keys. RSA claims that 1024-bit keys
are likely to become crackable some time between 2006 and 2010 and that
2048-bit keys are sufficient until 2030. An RSA key length of 3072 bits should
be used if security is required beyond 2030. NIST key management guidelines
further suggest that 15360-bit RSA keys are equivalent in strength to 256-bit
symmetric keys."

As such my post contains very grave misinformation and should be disregarded!

The analysis in it applies to symmetric cipher key size.
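The reason RSA bits are worth far fewer "security bits" than symmetric bits is the sub-exponential running time of GNFS. A back-of-the-envelope version of that conversion, ignoring the o(1) term in the GNFS complexity exponent (so it slightly overshoots the RSA Security figures quoted above):

```python
import math

def gnfs_security_bits(modulus_bits):
    """Approximate symmetric-equivalent security of an RSA modulus,
    from the GNFS heuristic complexity L_N[1/3, (64/9)**(1/3)],
    ignoring the o(1) term."""
    ln_n = modulus_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    log_work = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return log_work / math.log(2)  # convert natural-log work to bits

for k in (1024, 2048, 3072):
    print(k, round(gnfs_security_bits(k)))
# 1024 -> ~87, 2048 -> ~117, 3072 -> ~139 (vs. the quoted 80/112/128)
```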

------
mrcactu5
How did they know it was the product of two primes in the first place?

~~~
tgb
It's a competition set up with prizes for factoring some 'semiprimes' (numbers
with exactly two prime divisors). So unless the challenge organizers had
messed up, they were guaranteed this.

[1]
[https://en.wikipedia.org/wiki/RSA_Factoring_Challenge](https://en.wikipedia.org/wiki/RSA_Factoring_Challenge)

