The required number of qubits to execute Shor's algorithm is far larger than 2500, since the error ceiling for logical qubits must decrease exponentially with every logical qubit added to produce meaningful results. Hence, repeated applications of error correction or an increase in the surface code distance would be required. That would significantly blow up the number of physical qubits needed.
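
For a rough sense of that blow-up, here is a minimal sketch assuming the usual surface-code heuristics (logical error falling off as ~A*(p/p_th)^((d+1)/2), and ~2*d^2 physical qubits per logical qubit); the constants and target error rate are illustrative assumptions, not numbers from the article:

    # Rough surface-code overhead estimate (illustrative constants, not from the article).
    def distance_needed(p_phys=1e-3, p_th=1e-2, target=1e-12, A=0.1):
        d = 3
        # logical error per round ~ A * (p_phys / p_th) ** ((d + 1) / 2)
        while A * (p_phys / p_th) ** ((d + 1) / 2) > target:
            d += 2  # surface-code distances are odd
        return d

    d = distance_needed()
    print(d, "distance,", 2 * d * d, "physical qubits per logical qubit")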



He’s quoting the number of logical qubits (which is 1024 IIRC, not 2500), after error correction.

ETA: Wikipedia says 2330 qubits, but I'm not sure it is citing the most recent work: https://en.wikipedia.org/wiki/Elliptic-curve_cryptography#ci...


I actually thought the number of logical qubits needed for factorisation was around 20, as the state space size is 2^(2^n), and hence did not recognise those figures as counts of logical qubits. It is also easy to misunderstand error correction as something that needs to be done only once, as with classical computers, in which case the numbers would fit together with one pass of error correction.

Shor's algorithm requires binary encoding; hence, 2048 logical qubits are needed for it to become a nuisance for cryptography. This, in turn, means that one can always easily run away from a quantum adversary by paying a polynomial price on group element computations, whereas a classical adversary is exponentially bounded in computation time and a quantum adversary is exponentially bounded in the number of physical qubits. Fascinating...
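
To make that asymmetry concrete, here is a minimal sketch using my own illustrative numbers: the GNFS L-notation with its constants dropped for the classical side, and the rough one-logical-qubit-per-key-bit framing from above for the quantum side:

    # Illustrative only: defender cost grows roughly linearly with the modulus size n,
    # while the classical GNFS work factor grows subexponentially in n (constants omitted).
    import math

    def gnfs_work_bits(n_bits):
        ln_n = n_bits * math.log(2)
        l = math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))
        return math.log2(l)

    for n_bits in (1024, 2048, 4096, 8192):
        # rough one-logical-qubit-per-key-bit framing, as in the comment above
        print(n_bits, f"logical qubits ~{n_bits}", f"classical work ~2^{gnfs_work_bits(n_bits):.0f}")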


1024 is for RSA-1024, which is believed to be broken by classical means at this point. Everyone doing anything with RSA is on 4k or larger.


2048. There is no plausible conventional attack on 2048; whatever breaks 2048 is probably going to break 4096, as I understand it.

https://crypto.stackexchange.com/questions/1978/how-big-an-r...


> Everyone doing anything with RSA is on 4k or larger.

The Let's Encrypt intermediate certificates R10 and R11 seem to be only 2048 bit.


A signature (not encryption) key with a short validity period is still fine at 2048.


They are? The short term recommendation is 3072, and I still see lots of 2048. Actually, it's mostly 2048.


Reminder: if your DKIM keys haven't been rotated in a while, they might still be 1024-bit. E.g., Google Workspace; new keys there are 2048 now.
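
A rough way to check yours (assuming dnspython 2.x and the cryptography package; the selector and domain below are placeholders, and real records can be split across several TXT strings):

    # Rough DKIM key-size check; "selector" and "example.com" are placeholders.
    import base64
    import dns.resolver
    from cryptography.hazmat.primitives.serialization import load_der_public_key

    answers = dns.resolver.resolve("selector._domainkey.example.com", "TXT")
    record = "".join(s.decode() for rdata in answers for s in rdata.strings)
    tags = dict(t.split("=", 1) for t in record.replace(" ", "").split(";") if "=" in t)
    key = load_der_public_key(base64.b64decode(tags["p"]))
    print(key.key_size, "bit DKIM key")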


I took this conversation to be about ECC, not RSA.


My completely unfounded tin-foil-hat theory at the moment is that ECC was pushed as a standard not because it was faster/smaller, but because the smaller key size makes it less quantum resistant and more prone to be broken first (if not already) via quantum supremacy.


ECC is easier to implement correctly. Even the older curves that are full of footguns have orders of magnitude fewer footguns than RSA.

So don't fear them due to unfounded theories.


Is there consensus on what signature validation is correct? https://hdevalence.ca/blog/2020-10-04-its-25519am


IMO, that article has a clear conclusion that you should aim your software at the libsodium-1.0.16 pattern (no edge cases).

The problem it's presenting is more about software in the wild having different behavior... And if "some people connect to the internet and use software that behaves differently from mine" is a showstopper for you, I have some really bad news.


An interesting read. So the general idea, as I understand it, is that one can make signatures that pass or do not pass validation depending on the implementation, while all implementations do protect against forgery.

In my opinion, the correct approach here is the most liberal one for the Q, R points. One checks the cofactor condition 8P = 0 for each point at parsing time and then uses the unbatched equation for verification. This way implementations can be made group-agnostic.

Having group-agnostic implementations is important, as it creates a proper separation of concerns between curve implementations and the code that uses them. For instance, if we were to accept strict validation as the ground truth and best practice, one would have an enormously hard time specifying verifiers for zero-knowledge proofs, and it would also double the time and code for implementers without any effect on soundness.
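
For reference (from memory of the linked article, so treat the exact form as an assumption), the two single-signature ("unbatched") checks being contrasted are roughly, with public key A, signature (R, S), base point B, and k = H(R || A || M):

    [S]B == R + [k]A              (strict / cofactorless)
    [8][S]B == [8]R + [8][k]A     (cofactored)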


You're right, that's pretty unfounded.


Discrete log doesn't use Shor's algorithm, and appears to need more qubits to break (per key bit).


Is it broken? It seems no one has solved the RSA-1024 challenge yet.


The state actors aren't going to reveal their capabilities by solving an open challenge.


The most recent result on reducing the number of logical qubits is [1]. They show how to use residue arithmetic to factor n-bit numbers using n/2 + o(n) logical qubits (they give the example of 1730 qubits to factor a 2048-bit number).

[1]: https://eprint.iacr.org/2024/222
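
For scale: n/2 for n = 2048 is 1024, so the o(n) term accounts for the remaining ~706 logical qubits in their 1730-qubit example, roughly a third of n at this size.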


Well, this is embarrassing. I just realised I had wrongly interpreted the result in [1]. I made an error about how Shor's algorithm encodes the numbers, wrongly assuming that numbers are encoded into the quantum state space of size 2^(2^n), whereas instead one bit is encoded into one qubit, which is also more practical.

Instead, the result should be interpreted directly: the required error rate for logical qubits decreases as ~n^(-1/3). This, in turn, means that factorisation of a 10000-bit number would only require a logical error rate 1/10th of that needed for a 10-bit number. This is practical, given that one can make a quantum computer with around 100k qubits and correct errors on them.

On the other hand, a sibling comment already mentioned the limited connectivity that current quantum computers have. This, in turn, requires repeated application of SWAP gates to get the interactions one needs. I guess this would add a linear overhead to the noise; hence, the scaling of the required error rate for logical qubits is around ~n^(-4/3). This, in turn, makes 10000-bit factorisation require a logical error rate 1/10000 of that for 10-bit factorisation. Assuming that 10 physical qubits are used to reduce the error by an order of magnitude, that can result in around 400k physical qubits.

[1]: https://link.springer.com/article/10.1007/s11432-023-3961-3
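
Transcribing that arithmetic (using this comment's own assumptions: required logical error ~ n^(-4/3), 10 physical qubits per order of magnitude of error suppression, and roughly one logical qubit per bit, which is what makes the 400k figure come out):

    # Back-of-the-envelope, following the assumptions stated above.
    import math

    n_small, n_big = 10, 10_000
    error_ratio = (n_big / n_small) ** (4 / 3)   # required logical error ~ n^(-4/3): ~1e4x lower
    orders = round(math.log10(error_ratio))      # 4 orders of magnitude
    physical_per_logical = 10 * orders           # assumption: 10 physical qubits per order of magnitude
    logical_qubits = n_big                       # roughly one logical qubit per bit
    print(orders, physical_per_logical * logical_qubits)  # -> 4, 400000 physical qubits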


Isn't that what they are claiming is true now? That the errors do decrease exponentially with each qubit added?


What they claim is that adding physical qubits reduces the error rate of the logical qubits exponentially. For Shor's algorithm, the error rate of the logical qubits must decrease exponentially with every single logical qubit added for the system to produce meaningful results.

To see how it plays out, consider adding a single logical qubit. First, you need to increase the number of physical qubits to accommodate the new logical qubit at the same error rate. Then you need to multiply the number of physical qubits again to accommodate the exponentially decreased error rate. That gives a constant factor N (or polynomial, but let's keep things simple) by which the number of physical qubits must be multiplied to obtain a system with one additional logical qubit at an error rate that still produces meaningful results.

To attain 1024 logical qubits for Shor's algorithm, one would need N^1024 physical qubits. The case N < 1 would only be possible if the error decreased by itself without additional error correction.
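
Taken literally (and only as a transcription of the argument above, not an independent estimate), the multiplication works out like this for a few values of the assumed per-qubit blow-up factor N:

    # Literal transcription of the argument above: if each added logical qubit multiplies
    # the physical-qubit count by a constant factor N > 1, the total grows as N^1024.
    import math

    for N in (1.01, 1.1, 2.0):
        print(f"N = {N}: N^1024 ~ 10^{1024 * math.log10(N):.0f}")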



