"These new algorithms factor integers N ≈ 2^400 and N ≈ 2^800 using 7x10^10 and 4.3x10^12 arithmetic operations".
He does not specify how many operations it needs for 2^1024 or 2^2048, but the growth is clearly nonlinear. Assume (nevertheless) that the operation count multiplies by 100 for every additional 400 bits. Then for 2^2048 you have (2048 - 800)/400 ≈ 3 more steps, i.e. a factor of 100^3 = 10^6, so roughly 10^12 x 10^6 = 10^18 operations. Assuming you can do 10^6 ops per second, you need 10^12 seconds (about 30,000 years).
BUT those are very rough assumptions (an "arithmetic op" might well take more than 1 microsecond).
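For concreteness, that back-of-the-envelope extrapolation can be sketched in a few lines. The 100x-per-400-bits growth rate is pure assumption, and 4.3x10^12 is rounded down to ~10^12 as above:

```javascript
// Assumed growth: op count multiplies by ~100 per extra 400 bits,
// starting from ~10^12 ops at 800 bits (4.3e12 rounded down).
function estimateOps(bits) {
  const steps = Math.round((bits - 800) / 400); // extra 400-bit increments
  return 1e12 * Math.pow(100, steps);
}

const ops = estimateOps(2048);         // 10^12 * 100^3 = 10^18 operations
const seconds = ops / 1e6;             // at an assumed 10^6 ops/second
const years = seconds / (365 * 24 * 3600);
console.log(`~${ops.toExponential(0)} ops, ~${Math.round(years)} years`);
```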
EDIT2: sorry again: Schnorr is 78 years old. I am not gerontophobic (being 50, I am approaching that age myself), but: Atiyah claimed a proof of the Riemann Hypothesis, and Hironaka has claimed full resolution of singularities in any characteristic. And I am speaking of Fields medalists.
So: you really do need peer review for strong claims.
I'm getting 1e9 multiplication ops per second (2.5e8 for division) in Node on a 2018 MacBook Pro. So that would be 30-120 years. Not sure how well this algorithm can be parallelized, but since it's factoring, I think it can. So I would think a dedicated attacker could get at least another 1000x, which gives 11-44 days. Note that I am pretty much guessing here, but this is my test code:
// Time n repetitions of one float arithmetic op in Node.
// (The accumulators keep the loop from being trivially dead, though a
// smart enough JIT could still elide the unused results.)
function test(opp, n) {
  const start = Date.now();
  let i = 1;
  if (opp === '*') {
    let prod = 1;
    while (i < n) {
      i++;
      prod *= i; // quickly overflows to Infinity, but the multiply still runs
    }
  } else if (opp === '/') {
    let quot = 100000000000000;
    while (i < n) {
      i++;
      quot /= i;
    }
  } else if (opp === '+') {
    let sum = 0;
    while (i < n) {
      i++;
      sum += i;
    }
  } else if (opp === '-') {
    let sum = 100000000000000;
    while (i < n) {
      i++;
      sum -= i;
    }
  }
  const end = Date.now();
  console.log(`${n} ${opp} ops in ${end - start} ms`);
}
Guess: the "multiplications" we're talking about are multiplications of 400-bit or 800-bit numbers (in general, numbers the size of the one you're trying to factor), which are much more expensive than 64-bit x 64-bit or float x float multiplications. If you want to test the cost of that, I recommend GMPlib's bignums (or your favorite language with built-in bignum support, though if it isn't built on GMPlib it might not be using the most efficient algorithms).
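As a rough data point without pulling in GMP, Node's built-in BigInt gives an upper bound (it is not GMP-backed, so real GMP numbers should be better). A sketch, not a careful benchmark:

```javascript
// Time 800-bit x 800-bit multiplications using Node's built-in BigInt.
const a = (1n << 800n) - 12345n;
const b = (1n << 800n) - 67890n;
const reps = 100000;

const start = Date.now();
let acc = 0n;
for (let i = 0; i < reps; i++) {
  acc += a * b; // keep the product live so the loop isn't optimized away
}
const ms = Date.now() - start;
console.log(`${reps} 800-bit multiplies in ${ms} ms`);
```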
If one takes a good FPGA and implements a 4096 bit multiplier in it, would it be faster?
Or, if we take a GPU, can we split a 4096 bit number into, say, 32-bit fragments, mass-multiply them, and combine the results faster than on a CPU? I suspect pretty common hardware can help speed such things up a lot; isn't crypto mining already using some of these approaches?
Naive multiplication is O(n^2) (where n is the number of digits). The fastest algorithms either cut the numbers into pieces and recursively do multiplies, adds, shifts, and subtracts on the pieces (Karatsuba and Toom-Cook, which are O(n^[something slightly greater than 1])), or use Fourier transforms (Schönhage-Strassen and its successors, which approach O(n log n)). I don't know how easy it is to implement these more advanced approaches in hardware, especially if the size of the numbers isn't chosen in advance.
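To make the split-into-fragments idea concrete, here is a minimal sketch of the schoolbook O(n^2) algorithm on limbs. I use 16-bit limbs rather than 32 so each partial product fits exactly in a float64; the limb width and representation are my own choices for illustration:

```javascript
// Schoolbook multiplication of big numbers stored as arrays of 16-bit
// limbs, least-significant limb first. Every limb of a meets every limb
// of b once: n limbs per side -> n^2 partial products.
function mulLimbs(a, b) {
  const out = new Array(a.length + b.length).fill(0);
  for (let i = 0; i < a.length; i++) {
    let carry = 0;
    for (let j = 0; j < b.length; j++) {
      const t = out[i + j] + a[i] * b[j] + carry; // fits exactly in a double
      out[i + j] = t % 0x10000;
      carry = Math.floor(t / 0x10000);
    }
    out[i + b.length] += carry;
  }
  return out;
}

// (2^32 - 1)^2 = 0xFFFFFFFE00000001, limbs least-significant first:
console.log(mulLimbs([0xffff, 0xffff], [0xffff, 0xffff]));
// -> [ 1, 0, 65534, 65535 ]
```

The independent partial products in the inner loop are what a GPU kernel would compute in parallel; the recursive and FFT-based methods instead replace this double loop with fewer, larger pieces.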
"These new algorithms factor integers N ≈ 2^400 and N ≈ 2^800 using 7x10^10 and 4.3x10^12 arithmetic operations".
He does not specify how many it uses for 2^1024 or 2^2048. This is clearly nonlinear. Assume (nevertheless) that you multiply x100 every 400 binary digits, then for
2^2048, you get
(2048-800) equiv 1200/400 equiv 3, so you would need approx. 10^12*10^6 operations. Assuming you can do 10^6 ops , you need 10^12 seconds (like 30000 years).
BUT those are very very rough assumptions (an "arithmetic op" might probably take more than 1 microsec).
EDIT: sorry, this link is pretty interesting. Factor RSA-260, which only has 862 bits. Should be feasible in about 2 hours. The link: https://crypto.stackexchange.com/questions/88582/does-schnor...
EDIT2: sorry again: Schnorr is 78 years old. I am not gerontophobic (being 50 I am approaching that age) but: Atiyah claimed the Riemann Hypothesis, Hironaka has claimed full resolution of singularities in any characteristic... And I am speaking of Fields medalists.
So: you do really need peer-review for strong arguments.