The required number of qubits to execute Shor's algorithm is far larger than 2500, because the error ceiling for logical qubits must decrease exponentially with every logical qubit added in order to produce meaningful results. Hence, repeated applications of error correction or an increase in the surface code distance would be required, which would significantly blow up the number of physical qubits needed.
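For a feel of how increasing the surface code distance trades physical qubits against the logical error rate, here is a rule-of-thumb sketch. It uses the common approximations of roughly 2d^2 physical qubits per logical qubit and a logical error rate falling like (p/p_th)^((d+1)/2); the physical error rate and threshold below are assumed values for illustration, not figures from the thread.

```python
# Rule-of-thumb surface-code trade-off (illustrative numbers only):
#   physical qubits per logical qubit ~ 2 * d**2
#   logical error rate ~ (p / p_th) ** ((d + 1) / 2)
p, p_th = 1e-3, 1e-2   # assumed physical error rate and threshold

for d in (3, 7, 11, 15, 21, 31):
    overhead = 2 * d * d
    logical_error = (p / p_th) ** ((d + 1) / 2)
    print(f"d={d:>2}  ~{overhead:>4} physical per logical, logical error ~ {logical_error:.0e}")
```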
I had actually thought the number of logical qubits needed for factorisation was around 20, since the state space size is 2^(2^n), and hence did not recognise those figures as counts of logical qubits. It is often misunderstood that error correction needs to be done only once, as with classical computers, so that the numbers would fit together after a single pass of error correction.
Shor's algorithm requires binary encoding; hence, 2048 logical qubits are needed before it becomes a nuisance for cryptography. This, in turn, means that one will always be able to easily run away from a quantum adversary by paying a polynomial price in group element computations, whereas a classical adversary is exponentially bounded in computation time and a quantum adversary is exponentially bounded in the number of physical qubits. Fascinating...
My completely unfounded tin-foil-hat theory at the moment is that ECC was pushed as a standard not because it was faster/smaller, but because the smaller bit size makes it less quantum resistant and more prone to being broken first (if not already) via quantum supremacy.
IMO, that article's clear conclusion is that you should aim your software at the libsodium-1.0.16 pattern (no edge cases).
The problem it's presenting is more about software in the wild having different behavior... And if "some people connect to the internet and use software that behaves differently from mine" is a showstopper for you, I have some really bad news.
An interesting read. So the general idea, as I understand it, is that one can make signatures that pass or fail validation depending on the implementation, while all implementations still protect against forgery.
In my opinion, the correct approach here is the most liberal one for the Q and R points: check the cofactor of each point at parsing (8P = 0) and then use the unbatched equation for verification. This way implementations can be made group agnostic.
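To make that concrete, here is a toy sketch of why the cofactored, unbatched equation makes implementations agree regardless of small-order components. The curve group is replaced by the additive group Z_{8q} (cofactor 8, prime-order subgroup of size q), and all the Ed25519 details (encodings, hashing, the actual curve) are deliberately left out, so every name and number below is illustrative only.

```python
# Toy model of cofactored verification: the additive group Z_{8*q} stands in for the
# curve group (cofactor 8, prime-order subgroup of size q). All numbers are illustrative.
q = 101                      # "prime order" of the main subgroup
n = 8 * q                    # full group order = cofactor * prime order
B = 8                        # generator of the prime-order subgroup (a multiple of 8)

def verify(A, R, S, k, cofactored=True):
    """Check S*B == R + k*A in Z_n, optionally multiplying both sides by the cofactor."""
    c = 8 if cofactored else 1
    return (c * S * B) % n == (c * (R + k * A)) % n

# Adding a small-order component (a multiple of q) to R flips the verdict for the
# strict, uncofactored equation but not for the cofactored one:
A, S, k = 3 * B, 5, 7
R = (S * B - k * A) % n      # honest R satisfying the unbatched equation
R_dirty = (R + q) % n        # same signature plus a small-order component
print(verify(A, R_dirty, S, k, cofactored=False))  # False: implementations can disagree here
print(verify(A, R_dirty, S, k, cofactored=True))   # True: the cofactor wipes it out
```

The design point is the same in the real scheme: multiplying the verification equation by the cofactor makes the verdict independent of whichever subgroup-membership policy a given library happens to enforce.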
Having group-agnostic implementations is important, as it creates a proper separation of concerns between curve implementations and the code that uses them. For instance, if we were to accept strict validation as the ground truth and best practice, one would have an enormously hard time specifying verifiers for zero-knowledge proofs, and it would also double the time and code for implementers without any effect on soundness.
The most recent result on reducing the number of logical qubits is [1]. They show how to use residue arithmetic to factor n-bit numbers using n/2 + o(n) logical qubits (they give the example of 1730 qubits to factor a 2048-bit number).
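A quick sanity check of those figures, using only the numbers quoted above:

```python
n = 2048
core = n // 2            # the n/2 term from [1]
quoted_total = 1730      # the qubit count they quote for a 2048-bit number
print(core, quoted_total - core)  # 1024 core qubits, so roughly 706 sit in the o(n) term
```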
Well, this is embarrassing. I just realised I had wrongly interpreted the result in [1]. I made an error about how Shor's algorithm encodes the numbers, wrongly assuming that numbers are encoded into the quantum state space, which is 2^(2^n), whereas instead one bit is encoded into one qubit, which is also more practical.
The result should instead be interpreted directly, with the error rate for logical qubits needing to decrease as ~n^(-1/3). This, in turn, means that factorisation of a 10000-bit number would only require an error rate 1/10th of that needed for a 10-bit number. This is practical, given that one can make a quantum computer with around 100k qubits and correct errors on them.
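A quick arithmetic check of that claim (the ~n^(-1/3) scaling itself is taken from the comment above; the code only does the arithmetic):

```python
# Logical error rate assumed to scale as ~ n ** (-1/3) (the comment's claim).
small, large = 10, 10_000
ratio = (large / small) ** (-1 / 3)
print(ratio)  # ~0.1 -> the 10000-bit case needs a logical error rate 1/10th of the 10-bit case
```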
On the other hand, a sibling comment already mentioned the limited connectivity that those quantum computers currently have. This, in turn, requires repeated application of SWAP gates to get the interactions one needs. I guess this would add a linear overhead to the noise; hence, the scaling of the error rate for logical qubits would be around ~n^(-4/3). This, in turn, makes 10000-bit factorisation require a logical error rate 1/10000 of that for 10-bit factorisation. Assuming that 10 physical qubits are needed to reduce the error by an order of magnitude, this works out to around 400k physical qubits.
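And the same arithmetic with the SWAP overhead folded in (all scaling assumptions are the comment's; treating "10 physical qubits per order of magnitude" as per logical qubit, and counting roughly one logical qubit per bit, is my reading of it):

```python
small, large = 10, 10_000
error_ratio = (large / small) ** (-4 / 3)           # ~1e-4 with the extra linear SWAP overhead
orders_of_magnitude = 4
physical_per_logical = 10 * orders_of_magnitude     # 10 physical qubits per order of magnitude
logical_qubits = large                              # roughly one logical qubit per bit
print(error_ratio, logical_qubits * physical_per_logical)  # ~1e-04 and 400000
```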
What they claim is that adding physical qubits reduces the error rate of the logical qubits exponentially. For Shor's algorithm, the error rate of the logical qubits must decrease exponentially with every single logical qubit added for the system to produce meaningful results.
To see how this plays out, consider adding a single logical qubit. First, you need to increase the number of physical qubits to accommodate the new logical qubit at the same error rate. Then you multiply the number of physical qubits to accommodate the exponentially decreased error rate. That gives a constant factor N (or a polynomial one, but let's keep things simple) by which the number of physical qubits needs to be multiplied to obtain a system with one additional logical qubit at an error rate that still produces meaningful results.
To attain 1024 logical qubits for Shor's algorithm, one would therefore need N^1024 physical qubits. The case N < 1 would be possible only if the error decreased by itself, without additional error correction.
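A minimal sketch of that multiplicative argument, taking the constant-factor-per-logical-qubit assumption at face value (the value of N below is made up purely for illustration):

```python
N = 2               # illustrative constant factor per added logical qubit (made up)
logical_qubits = 1024

physical = 1
for _ in range(logical_qubits):
    physical *= N   # each extra logical qubit multiplies the physical-qubit budget by N
print(f"~10^{len(str(physical)) - 1} physical qubits")  # ~10^308 for N = 2
```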