
It is still open whether we can build quantum computers with sufficiently low noise to run Shor's algorithm.



> It is still open whether we can build quantum computers with sufficiently low noise to run Shor's algorithm.

This statement should distinguish between theory and experiment.

Theoretically, the question of building a quantum computer with low enough noise to run Shor's has been solved. In fact it was solved by Shor himself in the 1990s: https://arxiv.org/abs/quant-ph/9605011.

Experimentally, we are just getting started with building quantum computers with low enough noise: https://blogs.microsoft.com/blog/2024/04/03/advancing-scienc....

Experimentally, it will always be open whether a quantum computer can run Shor's until one actually runs Shor's. The point is that progress in the field has not stagnated since its founding.


That paper of Shor's just shows how a quantum computer with a large number of bad qubits can be made equivalent to a quantum computer with a small number of good qubits.

The paper does not prove anything about the upper limit for the number of bad qubits that are physically realisable.

There are doubts that this upper limit, which is still unknown, is high enough for most practical applications.
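
As a rough illustration of the first point (many noisy physical qubits standing in for one good logical qubit), here is a minimal back-of-the-envelope sketch in Python. It assumes surface-code-style scaling; the threshold value and the ~2*d^2 overhead are illustrative assumptions, not numbers from Shor's paper.

    # Rough sketch of the "many bad qubits -> one good qubit" trade-off,
    # assuming surface-code-style scaling. The 1e-2 threshold and the
    # ~2*d^2 physical-qubits-per-logical-qubit overhead are assumptions
    # for illustration only.

    def physical_qubits_per_logical(p_phys, p_target, p_threshold=1e-2):
        """Smallest odd code distance d whose logical error rate, taken as
        roughly (p_phys/p_threshold)**((d+1)/2), falls below p_target,
        plus the ~2*d^2 physical qubits that distance costs."""
        d = 3
        while (p_phys / p_threshold) ** ((d + 1) / 2) > p_target:
            d += 2  # code distances are odd
        return d, 2 * d * d

    # Example: 2e-3 physical error rate, 1e-12 target logical error rate.
    d, n_phys = physical_qubits_per_logical(2e-3, 1e-12)
    print(d, n_phys)  # -> 35 2450: a couple of thousand bad qubits per good one

The point is only that, below threshold, the overhead is finite and computable; as noted above, nothing in that calculation bounds how many physical qubits can actually be built.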


> The paper does not prove anything about the upper limit

Nothing can prove how many qubits can be realized except trying to realize them. There will never be a theorem that says "The maximum number of qubits that can ever be controlled in a lab is X". That's what experiment is for.

I will say, it's difficult to argue that there is any finite upper limit to the number of qubits we can realize. We can now trap ~10 million atoms and efficiently control hundreds of them: https://www.nature.com/articles/s41586-023-06927-3. The question is not "Could we ever realize billions of qubits?". It's "When can we realize billions of qubits?". The answer could be decades or centuries, but as long as people are building these devices, it will happen eventually.


Yeah well you can rephrase my statement to say that the theory underlying quantum computers still hasn’t been validated experimentally, so it’s still open whether it models reality sufficiently well to allow us to run the algorithms it predicts on physical machines.


I've been going down this rabbit hole for a few weeks. Here is what I now think.

Cai 2023 (https://pages.cs.wisc.edu/~jyc/Shor-Algorithm-with-Noise.pdf) showed that there is a definite noise floor for Shor's and gave a way to determine this.

New algorithms are better than Shor's in that they are smaller in space and time and more resilient to noise. The state of the art is Ragavan et al.: https://arxiv.org/abs/2310.00899. Cai's insight about the QFT applies to this algorithm too, but it is less damaging, as the noise floor scales at n^(3/2) rather than n^2. It does seem to be believed that error correction can be implemented to bring the effective noise down, but this will be very expensive for large numbers - probably at the scale of previous estimates indicating that more than 20 million noisy qubits would need to be operational for about 8 hours. The gates required will have to be much better than the current ones (my reading is by about an order of magnitude), but this is believed to be on the table for the future. I think these two assertions by the community are debatable, but honestly these folks are the experts, and we have to go with what they say and respect them on this until it turns out that they were wrong.
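
To see why n^(3/2) versus n^2 matters, here is a tiny sketch that treats the tolerable per-gate noise as c/n^2 for Shor's versus c/n^(3/2) for the Ragavan et al. circuit. The prefactor c is unknown (an assumption made purely so the code runs); only the exponents come from the discussion above, so look at the ratio, not the absolute values.

    # Illustrative comparison of the two noise-floor scalings mentioned above.
    # Only the exponents are taken from the discussion; the prefactor c is an
    # assumption, so absolute numbers are meaningless - only the ratio matters.
    c = 1.0

    for n in (1024, 2048, 4096):        # RSA modulus sizes in bits
        shor_floor = c / n ** 2         # tolerable noise ~ c/n^2
        ragavan_floor = c / n ** 1.5    # vs ~ c/n^(3/2)
        print(f"n={n}: Ragavan et al. floor ~{ragavan_floor / shor_floor:.0f}x "
              f"higher than Shor's")

At n = 2048 that works out to a factor of roughly sqrt(n), about 45x more tolerable noise per gate, which is the sense in which Cai's observation is "less damaging" here.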

From what I read it was never the case that Shor's was thought to be any more noise-tolerant than other algorithms, but Cai proved it, which is different. There is some debate about the insight, because a chunk of the community is like "this is not helpful and will make us even more misunderstood than we already are", but personally I find this attitude really irritating because it's part of the gradual reveal that QC people do about the practicality of the tech.

I have no respect for anyone working in QC who doesn't say something like "there is no chance of a practical application in the next 30 years, and it's likely that there will be no practical application for 50 years at least. But this is really interesting and important physics, and the techniques we are developing may help us build new classical devices or other quantum devices on a shorter timescale." This rider should be mandated for all grant applications and in every popular interview. Instead of which we hear (repeatedly) "whooo whooo QC will crack RSA any day now and help us do machine learning better than you can ever imagine". These folks say "whelp I never said that", but the reality is that they use the idea to seduce politicians and CEOs into giving up significant money they would not part with if they had a clear idea of the technical problems with QC - money which could be used to do a lot of good if spent on things like feeding kids and stopping climate change.

This is introducing new security issues, because things like QKD and post-quantum crypto are problematic in new ways. QKD has endpoints that are silently vulnerable to people with guns, pliers and evil intent. Post-quantum crypto is very computationally expensive and, errr, suspicious in errr various ways. Introducing it is going to create gaps, and those gaps are unnecessary if current encryption is not threatened.

Quantum ML is another thing that makes me really annoyed. The algorithms need data to be loaded (surprise), which is extremely problematic and slow, and they need quantum memory, which exists only at a very small scale and the tech used just seems wildly impractical to me. Yet folks are out there talking about it as if it's coming in the next 5, 10, 20 years! The reality is that we are about 6 major breakthroughs away from doing this, and once we have those 6 breakthroughs expect that a practical implementation is going to take 10 years at least. Again, I have no problem with the theoretical exploration of the topic, but to simulate circuits and make claims about how they will work, without mentioning that you don't have a clue how to build the system that will run them, is pretty bad behaviour.
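
To put a number on the data-loading complaint: a common assumption is that amplitude-encoding N classical values without a working QRAM costs on the order of N operations, while the quantum algorithm itself might only need polylog(N) steps afterwards. Both cost models in this sketch are assumptions for illustration, not measurements of any real device.

    # Back-of-the-envelope look at the data-loading bottleneck.
    # Assumptions (illustrative only): state preparation costs ~N operations,
    # the quantum algorithm itself ~log2(N)^2 steps after loading.
    import math

    for N in (10**6, 10**9, 10**12):    # dataset sizes
        load_steps = N                   # O(N) state-preparation cost
        algo_steps = math.log2(N) ** 2   # assumed polylog(N) runtime
        print(f"N={N:.0e}: loading dominates by ~{load_steps / algo_steps:.0e}x")

Whatever the real constants are, the loading step swamps the algorithm unless the data can be generated on the fly or a practical quantum memory appears.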

All they need to do is put a single line in the paper like "This technique has theoretical interest in Computer Science and Physics, but major open questions prevent it from being practical for real-world problem solving in the near or medium term." Then I have total respect. 100%. And I think that the work is interesting.

But no, because "want money" and "don't give a shit about the damage we are doing".



