The error rates given are still horrendous and nowhere near low enough for the Quantum Fourier Transform used by Shor's algorithm. Taking qubit connectivity into account, a single CX between two qubits that are 10 edges apart gives an error rate of 1.5%.
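For a rough sense of where a figure like that comes from: two-qubit gate fidelities multiply, so routing a CX across the coupling map compounds the per-gate error roughly as 1 - (1 - eps)^n. A toy Python sketch (the per-gate rate is a made-up illustrative value, not any vendor's spec):

    # Two-qubit gate errors compound multiplicatively when a CX has to be
    # routed across the chip (made-up numbers, for illustration only).
    eps_cx = 0.0015        # hypothetical per-gate error rate (0.15%)
    edges = 10             # distance between the two qubits on the coupling map
    total_error = 1 - (1 - eps_cx) ** edges
    print(f"effective CX error over {edges} edges: {total_error:.2%}")  # ~1.5%
    # The same compounding applies to every gate in the program, which is
    # why fidelity falls off exponentially with circuit size.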
Also, the more qubits you have/the more instructions are in your program, the faster the quantum state collapses. Exponentially so. Qubit connectivity is still ridiculously low (~3) and does not seem to be improving at all.
About AI, what algorithm(s) do you think might have an edge over classical supercomputers in the next 30 years? I'm really curious, because to me it's all (quantum) snake oil.
In addition to that, the absolutely enormous domains that the Fourier Transform sums over (essentially, one term in the sum for each possible answer), and the cancellations which would have to occur for that sum to be informative, mean that a theoretically-capable Quantum Computer will be testing the predictions of Quantum Mechanics to a degree of precision hundreds of orders of magnitude greater than any physics experiment to date. (Or at least dozens of orders of magnitude, in the case of breaking Discrete Log on an Elliptic Curve.) It demands higher accuracy in the probability distributions predicted by QM than could be confirmed by naive frequency tests which used the entire lifetime of the entire universe as their laboratory!
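To make the scale of that sum concrete, here's a toy numpy sketch of the interference in the period-finding step: the amplitude at each output y is a sum of one phase term per superposed input, and it survives only where the phases align (N and r here are tiny toy values; a cryptographically relevant N would be astronomically larger):

    import numpy as np

    N, r = 256, 10                     # toy register size and hidden period
    ks = np.arange(N // r)             # superposed inputs x0, x0+r, x0+2r, ...
    for y in range(N):
        # one phase term per superposed input; almost all of them must cancel
        amp = np.exp(2j * np.pi * ks * r * y / N).sum() / len(ks)
        if abs(amp) > 0.5:             # survives only near multiples of N/r
            print(y, round(abs(amp), 3))
    # Shor's algorithm needs this near-perfect cancellation to hold even when
    # the sum has ~2^2048 terms.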
Imagine a device conceived in the 17th century, the intended functionality of which would require a physical sphere which matches a perfect, ideal, geometric sphere in Euclidean space to thousands of digits of precision. We now know that the concept of such a perfect physical sphere is incoherent with modern physics in a variety of ways (e.g., the atomic basis of matter, background gravitational waves). I strongly suspect that the cancellations required for the Fourier Transform in Shor's algorithm to be cryptographically relevant will turn out to be the moral equivalent of that perfect sphere.
We'll probably learn some new physics in the process of trying to build a Quantum Computer, but I highly doubt that we'll learn each others' secrets.
Re: AI, it's a long way off still. The big limitation on anything quantum is always going to be decoherence and t-time [0]. To do anything with ML, you'll need a whole circuit (more complex than Shor's) just to initialize the data on the quantum device, and the algorithms to do this are complex (exponential) [1]. So you have to run a very expensive data-initialization circuit, and only then can you start to run your ML circuit. All of this needs to be done within the machine's t-time limit. If you exceed that limit, then the measured state of a qubit will have more to do with outside-world interactions than with your quantum gates.
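A crude way to see the budget: total circuit time is roughly depth x gate time, and it has to stay well inside the coherence window. All of these numbers are invented for illustration:

    # Back-of-the-envelope coherence budget (all numbers are assumptions).
    t1_us      = 100      # coherence window in microseconds
    gate_ns    = 30       # hypothetical gate duration
    prep_depth = 5_000    # hypothetical depth of the data-loading circuit
    ml_depth   = 1_000    # hypothetical depth of the ML circuit proper

    total_us = (prep_depth + ml_depth) * gate_ns / 1_000
    verdict = "fits" if total_us < t1_us else "decoheres first"
    print(f"{total_us:.0f} µs of circuit vs a {t1_us} µs window -> {verdict}")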
Google's Willow chip has t-times of about 60-100 µs. That's not an impressive figure -- in 2022, IBM announced their Eagle chip with t-times of around 400 µs [2]. Google's angle here would be the error correction (EC).
The following portion from Google's announcement seems most important:
> With 105 qubits, Willow now has best-in-class performance across the two system benchmarks discussed above: quantum error correction and random circuit sampling. Such algorithmic benchmarks are the best way to measure overall chip performance. Other more specific performance metrics are also important; for example, our T1 times, which measure how long qubits can retain an excitation — the key quantum computational resource — are now approaching 100 µs (microseconds). This is an impressive ~5x improvement over our previous generation of chips.
Again, as they lead with, their focus here is on error correction. I'm not sure how their results compare to competitors, but it sounds like they consider that to be the biggest win of the project. The RCS metric is interesting, but RCS has no (known) practical applications (though it is a common benchmark). Their T-times are an improvement over older Google chips, but not industry-leading.
I'm curious if EC can mitigate the sub-par decoherence times.
> I'm curious if EC can mitigate the sub-par decoherence times.
The main EC paper referenced in this blog post showed that the logical qubit lifetime using a distance-7 code (all 105 qubits) was double the lifetime of the physical qubits of the same machine.
I'm not sure how lifetime relates to decoherence time, but if that helps please let me know.
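For intuition on how the two could relate: below the error-correction threshold, the surface-code logical error per cycle is expected to fall roughly as Lambda^((d+1)/2) with code distance d, and lifetime scales inversely with error rate. A toy sketch (Lambda and the prefactor are invented numbers, not the paper's measurements):

    # Toy model of surface-code error suppression (invented numbers).
    # logical error per cycle ~ A / Lambda ** ((d + 1) / 2); lifetime ~ 1 / error
    A, Lambda = 0.1, 2.0
    for d in (3, 5, 7):
        err = A / Lambda ** ((d + 1) / 2)
        print(f"distance {d}: ~{err:.4f} logical error per cycle")
    # Each step d -> d+2 buys another factor of Lambda, which is how a
    # distance-7 logical qubit can outlive its constituent physical qubits.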
That's very useful, I missed that when I read through the article.
If the logical qubit can have double the lifetime of any physical qubit, that's massive. Recall IBM's chips, with t-times of ~400 microseconds. Doubling that would change the order of magnitude.
It still won't be enough to do much in the near term - like other commenters say, this seems to be a proof of concept - but the concept is very promising.
The first company to get there and make their systems easy to use could see a run-up in value similar to NVIDIA's after ChatGPT. IBM seems to be the strongest in the space overall, for now.
I'm sorry if this is nitpicky but your comment is hilarious to me - doubling something is doubling something, "changing the order of magnitude" would entail multiplication by 10.
Hahaha not at all, great catch. Sometimes my gray matter just totally craps out... like thinking of "changing order of magnitude" as "adding 1 extra digit".
Reminds me of the time my research director pulled me aside for defining CPU as "core processing unit" instead of "central processing unit" in a paper!
Wouldn't those increased decoherence times need to be viewed in relation to the time it takes to execute a basic gate? If the time to execute a gate also increases, it may outweigh the practical benefit of having less noisy logical qubits.
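A crude figure of merit would be gates per coherence window, T1 / t_gate: slower logical gates can eat the lifetime gain. With invented numbers:

    # Gates per coherence window for two hypothetical devices (invented numbers).
    devices = {
        "physical qubit":     {"t1_us": 100, "gate_ns": 30},
        "logical qubit (EC)": {"t1_us": 200, "gate_ns": 900},  # slower logical gates
    }
    for name, d in devices.items():
        ops = d["t1_us"] * 1_000 / d["gate_ns"]
        print(f"{name}: ~{ops:,.0f} gates per window")
    # Doubling the lifetime helps little if each logical gate is 30x slower.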