Strong quantum computational advantage using a superconducting quantum processor (arxiv.org)
50 points by zekrioca 24 days ago | 19 comments

From the abstract:

> Our work establishes an unambiguous quantum computational advantage that is infeasible for classical computation in a reasonable amount of time.

Bold statement for a paper that then only provides a guesstimate of the time needed for classical algorithms.

The argument in the paper essentially begs the question: the authors presuppose that direct Schrödinger-Feynman simulation of their quantum circuits is the optimal classical approach. They then show that S-F is significantly slower than their quantum machine, "proving" quantum supremacy. They fail, of course, to argue that there is no alternative classical algorithm that can accomplish the same thing; a pitfall that has trapped multiple similar papers in the recent past.

In essence, this is just a new-age variant of the P != NP "proof" that says "well there can't be any solution faster than trying every path". Quantum computers may down the line prove themselves to be superior to classical ones, but failure of imagination is on its own no proof of classical inadequacy.

My understanding is that this simulation of random quantum circuits is already known to be hard in a theoretical sense[1], and there is no argument that a classical computer could compete with a quantum computer much bigger than 50-60ish qubits. The arguments against those previous papers have more to do with classical optimizations, such as using disk together with RAM rather than RAM alone.

For example, take this line from IBM's refutation of Google's quantum supremacy experiment: "64 PiB of disk space are required for 53-qubit circuits, and 128 PiB for 54-qubit circuits. Both fit within the 250 PiB available on Summit."[2] If you have to use over 50% of the total disk space on one of the world's most powerful supercomputers, you're probably going to run into major issues once we start hitting 55, 56, 57, 58, etc. qubit territory. And although this is a moving goalpost, supercomputers clearly aren't going to improve as fast as single qubits can be added to current chips. Already Rigetti is announcing an 80-qubit quantum computer, and IBM is making a 127-qubit computer.[3]
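IBM's disk figures fall straight out of state-vector bookkeeping: storing one amplitude per basis state means the memory doubles with every added qubit. A quick sketch (assuming one 8-byte single-precision complex amplitude per basis state, which is what reproduces IBM's 64 PiB number for 53 qubits):

```python
# Back-of-the-envelope: memory for a full n-qubit state vector,
# assuming one complex64 (8-byte) amplitude per basis state.
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 8) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

PIB = 2 ** 50  # one pebibyte in bytes

for n in range(53, 57):
    print(f"{n} qubits: {state_vector_bytes(n) // PIB} PiB")
# 53 qubits -> 64 PiB, 54 -> 128 PiB, 55 -> 256 PiB (already over
# Summit's 250 PiB), 56 -> 512 PiB
```

So under this accounting, a single extra qubit past 54 already pushes the simulation beyond Summit's total storage.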

So, IMHO we are far past the point where quantum computers have to prove themselves superior to classical ones, at least on designed tasks. I don't think the comparison to failed P vs. NP proofs holds (that, IMHO, is a problem we're anywhere from 5 to 10,000 years away from solving).

All this being said, I do have to agree with IBM on the value of the term "Quantum Advantage" versus "Quantum Supremacy". Quantum computers have not yet demonstrated a significant advantage in terms of cost on any problem, but I'm optimistic they will in about a decade. Probably in problems that are already naturally suited to quantum computers, like studying interactions between particles in quantum physics. Also, for the record, I'm a layperson with just an unusual interest in this field, so it's entirely possible I've missed some incredibly important central problem that everyone is being quiet about.

[1] https://arxiv.org/abs/2007.07872 [2] https://arxiv.org/pdf/1910.09534.pdf [3] https://arstechnica.com/science/2021/06/quantum-computing-st...

>have to prove themselves superior to classical ones, at least on designed tasks.

I think calling them "designed tasks" is laundering what's going on. Call a spade a spade: it's simulating completely random circuits. So what's the point?

>but I'm optimistic they will in about a decade.

You and every other researcher/VC/funding agency.

20 cycles... I think this kind of result demonstrates the gap between the aspirations of the QC community and the expectations of technology spectators (like investment banks) - how many cycles would be required to run the algorithm from "How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits"?
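For scale, one can put rough numbers on that gap. A sketch under my own assumptions (a ~1 microsecond error-correction cycle time, which is roughly in line with superconducting-qubit gate and measurement speeds; the 8-hour runtime is from the Gidney-Ekerå estimate quoted above):

```python
# Rough contrast between this experiment's ~20 circuit cycles and the
# scale of the RSA-2048 estimate (8 hours on 20M noisy qubits).
# The 1 microsecond cycle time is an assumption, not from either paper.
cycle_time_s = 1e-6           # assumed error-correction cycle time
runtime_s = 8 * 3600          # 8 hours, per the Gidney-Ekera estimate
cycles = runtime_s / cycle_time_s

print(f"~{cycles:.1e} cycles, vs ~20 in this experiment")
```

That's on the order of 10^10 cycles - roughly nine orders of magnitude beyond what's demonstrated here, before even counting the qubit gap.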

While breathless press announcements are indeed often overhyped, I think we should keep in mind the history of the transistor. We went from fragile, expensive single transistors to VLSI in half a century. And we consider that amazing.

That doesn't mean the same thing will happen with QC; it just means that maybe we should cut QC some slack, and keep cheering for the incremental improvements that arrive every couple of years?


There's one big issue with QC currently: there is no mathematical demonstration that QC is scalable. All the claimed speed-ups need some form of error correction to scale the way digital computation has, and all hardware advances so far are just increases in computational fidelity to allow more useful qubits. Shor devised an error-correcting code for a single logical qubit, and there have been improvements on it, but nothing general enough to show that QC can scale the way digital computers can.

From my understanding, the quantum threshold theorem states that if you have a low base error rate (1-3%) you can apply multiple levels of a single quantum error correcting code to get arbitrarily low levels of errors on a logical qubit ("remove errors faster than you introduce them"). Right now that means that you need approximately 1000x physical qubits for 1 logical qubit, but to me that's a solid mathematical/theoretical demonstration that you can scale QC.
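A toy model of that "remove errors faster than you introduce them" claim, under illustrative assumptions (a threshold of 1e-2 and 7 physical qubits per concatenation level, as in the Steane code - neither number is from the paper): below threshold, each level of concatenation squares the normalized error rate, so the logical error shrinks doubly exponentially while the qubit overhead grows only exponentially in the number of levels.

```python
# Toy illustration of the quantum threshold theorem for a concatenated
# code: if the physical error rate p is below the threshold p_th, each
# concatenation level squares the normalized error p/p_th.
# p_th = 1e-2 and 7 qubits per level (Steane code) are assumptions.
def logical_error(p: float, p_th: float, levels: int) -> float:
    return p_th * (p / p_th) ** (2 ** levels)

def physical_qubits(levels: int, qubits_per_level: int = 7) -> int:
    return qubits_per_level ** levels

p, p_th = 1e-3, 1e-2  # physical error rate safely below threshold
for k in range(4):
    print(f"level {k}: {physical_qubits(k):4d} physical qubits/logical, "
          f"logical error ~ {logical_error(p, p_th, k):.0e}")
```

With these toy numbers, three levels (a few hundred physical qubits per logical qubit) already push the logical error rate to ~1e-10, which is the qualitative shape of the ~1000x overhead figure above.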

You're right. I've looked at quantum codes before, but apparently my understanding was lacking - thanks for the info.

Yup - agree, and also "genetics tech" 70 years from Crick and Watson to mRNA vaccines and then actual genetic engineering in adults with CRISPR [1].

This kind of progress is amazing. I think we should start the clock on QC from either Shor's paper or maybe from the first 2-qubit QC in 1998 - so that takes us to, say, 2060?

[1] https://www.nature.com/articles/d41586-021-01776-4

I don't think anyone other than snake-oil salesmen has seriously suggested that quantum computers will break RSA as used in real life in the near future. We're simply quite far away from that milestone.

I think there is a bigger gap than many think between what the community sees as realistic and what investors and other technologists read into it. Many people believe that a current 50-60 qubit device "runs" the way a valve-based computer did in the 1950s - for hours, until a bug got into a valve, for example - and I also don't think people appreciate that it takes 36 hours to turn these devices on and calibrate them. I believe most technologists think that getting to 20m noisy qubits is the challenge, but the reality is that there are a plethora of challenges to overcome to move from a 20m noisy qubit device to a usable 20m noisy qubit device.

Is this one of the first examples where a quantum computer is noticeably faster than a classical one, or have there been past examples?

Well if we're counting every quantum interaction that would take a long time to solve on a classical computer as 'computational advantage' then arguably any nontrivial quantum interaction would do the trick. And as far as I can tell it's not calculating something that anyone would consider useful, so it's a bit debatable in what respect this result is any different.

So well, it depends on your perspective I guess.

That's been the problem with the Google result too, even though they did have "controlled" operations. They basically showed that a quantum computer doing random operations has an output that is difficult to predict with classical simulation.

It also solves no problem at all. They tried to shoehorn into the paper a "CSRNG" application but it's dubious at best if you have any kind of cost constraint.

The first successful quantum supremacy experiment, as accepted by the QC community in general, was proven by Google. Here's Scott Aaronson's write up on it: https://www.scottaaronson.com/blog/?p=4372

I believe there was also a second successful, but somewhat more controversial, experiment by a Chinese team.

>The first successful quantum supremacy experiment, as accepted by the QC community in general

That's a debatable claim. IBM's QC team certainly doesn't accept it, and have demonstrated that what Google did could be done by a conventional computer: https://www.ibm.com/blogs/research/2019/10/on-quantum-suprem...

IBM's main point was that by using RAM and disk storage together, you could perform the computations from the Google experiment in less onerous time than the Google paper claimed. That said, if you add an extra 10+ qubits (which is essentially what this new work does), you are back in the range where the supercomputer takes more time than is reasonable. For a quick eyeball, imagine what adding 10+ qubits would do to this kind of plot (from the IBM post): https://www.ibm.com/blogs/research/wp-content/uploads/2019/1...

Oh, and you need a lot of storage: with just a couple more qubits, the computation would not have fit in the 250 PiB on Summit, so IBM's argument would have been void if Google had used 55 qubits in their work.

IBM is also salty that they aren't considered "big tech" anymore.

There is no I in FAANG.

There have been past claims of quantum advantage that so far have always turned out to be rather questionable once the details were examined.
