The reality is that quantum computing technology has been improving at the same steady pace for decades, and quantum supremacy was recently invented as a concrete but not very useful goal that people could realistically aim for in the near term. That is a totally standard and legitimate way of guiding technological progress.
The problem is that quantum supremacy is vaguely defined (what does "infeasible on any classical computer" mean? which computers can you use, and for how long?) but it is presented to the public as a sharp boundary -- which means that every company in the field has a huge incentive to claim to be first. This IBM claim is not a fundamental objection. Even if everything they say is correct, Google could still achieve quantum supremacy by their standards through incremental improvement, slapping on a few more qubits. So why even bother disputing Google's claim? Because it will set Google back a few years, and that's enough time for IBM to make a claim of its own. That's why academics, who have no stake in which company gets the hype, will just consider this whole episode rather distasteful.
Yes, Google probably can achieve quantum supremacy by slapping on a few more qubits. But those extra qubits have, in fact, not been slapped on yet. So quantum supremacy is very much not achieved, and Google should not claim that it is.
That's why arguing over whether it's really quantum supremacy is just not useful. People can always move the goalposts, so we might as well ban that word from the discussion. The underlying fact, which nobody is debating, is that quantum computers are regularly doing tasks that are harder and harder for classical computers to match.
And yes, the ECT (the Extended Church-Turing thesis) is obviously false -- or at the very least, not obviously true. I'm surprised at how many CS people treat it as some unassailable principle. Most CS courses devote precisely zero time to non-classical models of computation. If you've never studied a model of computation that could challenge the ECT, how can you be so sure of it? This is like being absolutely certain that I am the tallest person in the world while refusing to ever leave my house to check.
The Extended version goes on to say that a probabilistic Turing machine can efficiently simulate any realistic model of computation. Quantum computers very likely violate that efficiency claim, and this supremacy result is strong evidence in support.
Quantum supremacy is the potential ability of quantum computing devices to solve problems that classical computers practically cannot. The weaker term Quantum advantage refers to the potential to solve problems faster. In computational-complexity-theoretic terms, this generally means providing a superpolynomial speedup over the best known or possible classical algorithm. The term was originally popularized by John Preskill but the concept of a quantum computational advantage, specifically for simulating quantum systems, dates back to Yuri Manin's and Richard Feynman's proposals of quantum computing.
IBM's algorithm scales approximately linearly in the number of qubits. So, you'd need more than a few more qubits...
The only question is whether there is a better classical factoring algorithm; we don't know.
> On a quantum computer, to factor an integer N, Shor's algorithm runs in polynomial time. This is almost exponentially faster than the most efficient known classical factoring algorithm, the general number field sieve, which works in sub-exponential time
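To put rough numbers on "almost exponentially faster" (standard textbook asymptotics, my own back-of-the-envelope, not from the quoted article):

    # GNFS ~ exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)) -- sub-exponential
    # in the bit length of N. Shor ~ n^3 gate operations for an n-bit N.
    import math

    for bits in (1024, 2048, 4096):
        ln_n = bits * math.log(2)                    # ln(N) for an n-bit N
        gnfs = math.exp((64 / 9) ** (1 / 3)
                        * ln_n ** (1 / 3)
                        * math.log(ln_n) ** (2 / 3))
        shor = bits ** 3
        print(f"{bits}-bit N: GNFS ~ 10^{math.log10(gnfs):.0f} ops, "
              f"Shor ~ 10^{math.log10(shor):.1f} ops")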
Compare this, for example, to sorting. We have proven that any sorting algorithm that works by comparisons needs at least on the order of n*log(n) comparisons; a faster comparison-based classical algorithm cannot exist.
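For intuition on where that bound comes from (my sketch, not part of the parent comment): a comparison sort must be able to distinguish all n! possible input orderings, and k yes/no comparisons can distinguish at most 2^k outcomes, so any comparison sort needs k >= log2(n!), which grows like n*log2(n):

    # Numeric check that log2(n!) (the floor on comparisons) ~ n*log2(n).
    import math

    for n in (10, 1000, 10 ** 6):
        floor = math.lgamma(n + 1) / math.log(2)   # log2(n!) via ln(Gamma(n+1))
        print(f"n={n}: log2(n!) ~ {floor:,.0f}, n*log2(n) ~ {n * math.log2(n):,.0f}")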
You can have a faster classical algorithm if it's distributed across n threads for a size-n array. For each index i in an array arr, spawn a thread that counts the number of values in arr that are less than arr[i] (an O(n) linear scan); call this count x, then set outputArray[x] = (arr[i], 1), where outputArray is initialised to all empty slots. If outputArray[x] is already set, instead increment the second element of the tuple (the count). Once all threads are finished, outputArray will contain the sorted values and the count of duplicate values (so this only works if !(x < y) && !(y < x) implies y == x). Each thread does O(n) work, so the total work is O(n^2), but because the threads all run in parallel, the runtime is only O(n).
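A minimal runnable sketch of that scheme (my own illustration; Python threads won't actually run in parallel under the GIL, so this shows the structure, not the speedup):

    # Parallel "rank sort": each task does one O(n) counting scan.
    import threading
    from concurrent.futures import ThreadPoolExecutor

    def rank_sort(arr):
        n = len(arr)
        output = [None] * n              # slot x holds (value, count) for rank x
        lock = threading.Lock()          # duplicate updates are read-modify-write

        def place(i):
            v = arr[i]
            rank = sum(1 for y in arr if y < v)   # how many values are smaller
            with lock:
                if output[rank] is None:
                    output[rank] = (v, 1)
                else:                             # duplicate value: bump the count
                    output[rank] = (v, output[rank][1] + 1)

        with ThreadPoolExecutor(max_workers=n) as ex:
            list(ex.map(place, range(n)))         # wait for all tasks

        return [slot for slot in output if slot is not None]

    print(rank_sort([3, 1, 2, 3]))  # [(1, 1), (2, 1), (3, 2)]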
"Total Work" is the important part to minimize in practice. If "total work" scales at O(n^2), then in practice, the job scales at O(n^2). Its infeasible to build O(n^2) CPUs running O(n^2) threads as n-grows.
Of course, in reality we care about execution time, and to go from "we need O(n*log(n)) comparisons" to "we need O(n*log(n)) time" you need additional assumptions, like "memory access takes O(1) time, and the speed of one comparison is independent of the total amount of data". But under any reasonable set of assumptions, an algorithm that uses O(n*log(n)) comparisons takes at least O(n*log(n)) time (probably more, since memory access isn't really O(1); it's just common to pretend that it is).
That's like saying multiplication can't have any lower bound because multiplying by zero is O(1).
The thing that gets smoothed over with QC is managing the error rates and the fact that it hasn't been shown to be physically possible to scale up computation without the error rates also scaling up exponentially and making the thing useless.
If your set of candidate solutions has N elements, this requires only O(sqrt(N)) evaluations rather than O(N). If evaluating the function takes exponential time, then there is still of course no polynomial-time solution, but the quadratic speedup still holds.
There are some NP problems for which we know techniques that are faster than brute-force search, but even then we can generally speed up those techniques in the same way.
(I guess you could worry that for every problem there is an unknown technique for classically speeding things up over brute-force search, but I thought this was known to be false, and in any case it's unrelated to your point. I'm not an expert though.)
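For reference, the quadratic speedup being discussed is Grover's algorithm. A toy classical simulation of its amplitude dynamics (my illustration; the sizes and marked item are arbitrary):

    import math

    def grover(n_qubits, marked):
        N = 2 ** n_qubits
        amps = [1 / math.sqrt(N)] * N            # uniform superposition
        iters = round(math.pi / 4 * math.sqrt(N))
        for _ in range(iters):
            amps[marked] *= -1                   # oracle flips the marked amplitude
            mean = sum(amps) / N
            amps = [2 * mean - a for a in amps]  # diffusion: invert about the mean
        return iters, amps[marked] ** 2          # success probability

    iters, p = grover(10, marked=42)
    print(f"{iters} oracle calls (vs. up to 1024 classically), P(success) = {p:.4f}")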
If IBM can cast doubt by describing a 2.5 day classical run, why not just do that instead, and blow the claim out of the water?
There were a couple of responses (thanks) but I still don't understand - it looks like it's not as easy as they imply.
If you don't follow what is being said there, reposting your question will not help.
Thank you for the link to a potential explanation.
Google implied their supremacy over all of the world's computers combined.
There's also an alternative algorithm that requires less space, but it takes longer.
> The Google experiment actually showed that a quantum computer running for 100 seconds PLUS a classic computer that runs 1000 hours can compute something that requires a classic computer 10 hours.
EDIT: Just as a note, lest you dismiss Kalai as an outsider/crackpot, you can look at the researchers who show up in the comments on the blog post.
Especially when Peter Shor is disagreeing with the critique (and I agree with him).
Saying that a quantum computation does not qualify because it included (a short) classical computation does not seem like a fundamentally fair criticism to me.
Page 17 of the supplemental data shows the calibration process in schematic form, and appears to indicate that the calibration was necessary for each input circuit C, since the graph showing the fidelity improving with iterations of the calibration is labelled "b, Data from a two-qubit XEB experiment." The water is admittedly muddy, and hopefully clarification will be forthcoming from some quarter to determine whether Kalai's particular critique has substance.
> Once you lose coherence, you have to re-calibrate
I didn't realize this, where did you see that?
I have no idea. All I know (or at least I'm pretty sure of) is that you have to re-calibrate every time you lose coherence, otherwise it would not be calibration, it would be a one-time process that would be considered part of the construction of the QC.
> where did you see that?
Nowhere. I inferred it. It's possible that I'm wrong, but I'll give you long odds against. I have a pretty good understanding of how QCs work.
This seems to be commonplace in academic publishing, especially new-ish fields where benchmarks aren't well-established. There was a similar backlash against OpenAI's robot hand, where they used simulation for the physical robotic movements and used a well-known algorithm for the actual Rubik's Cube solving. I still think it's an impressive step forward for the field.
I understand that some of these optimisations might be on the processing logic, but doesn't this end up being like comparing apples to oranges?
I understand that there's also what seems to be technicalities around what is "quantum supremacy", but I assume that on a 'processor against processor' basis, this is still significant. Am I oversimplifying things?
>We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity.
>Because the original meaning of the term “quantum supremacy,” as proposed by John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t, this threshold has not been met.
The high-order bit is that IBM's claims do not cast much of a shadow over the claims of quantum supremacy; their own numbers still show the same exponential difference.
Google's post https://news.ycombinator.com/item?id=21332768
Scott Aaronson's post https://news.ycombinator.com/item?id=21335907
1. Actually perform the experiment per their approach, compare the results, and provide tangible proof in the coming days.
2. Retract their claims.
In either case, exciting times ahead.
[Edit: Yes, I read their published paper, which only claims it is possible to achieve and does not perform the same experiment.]
It's like IBM and Google, at a high level, work on different problems. IBM is focused on research and public discussion of the problem space. Google, on the other hand, is focusing on marketing, using fancy terms like "supremacy" that can easily be misunderstood by the public.
Quote 2: " ...by tweaking the way Summit approaches the task, it can do it far faster: in 2.5 days."
Well, it's still 200 seconds vs 2.5 days, so I'd say the claim kinda stands. I know I would definitely buy the computer with 200 seconds vs the one with 2.5 days.
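For scale (my arithmetic):

    print(2.5 * 24 * 3600 / 200)  # 1080.0 -- still roughly a 1000x win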
> Recall that a quantum supremacy demonstration would be an experiment where a quantum computer can compute in 100 seconds something that requires a classical computer 10 hours. (Say.)
>The Google experiment actually showed that a quantum computer running for 100 seconds PLUS a classic computer that runs 1000 hours can compute something that requires a classic computer 10 hours.
> Next he said that the experiment is invalid because the qubits have to be calibrated in a way that depends on the specific circuit to be applied. Except, this too turns out to be false: John Martinis explicitly confirmed for me that once the qubits are calibrated, you can run any circuit on them that you want.
2.5 days is feasible. 10,000 years is not feasible.
They rigged the benchmark to be something quantum-specific before saying classical computers couldn’t do it. I get why. I’d just rather it be something they both could do, classical having great algorithms, and quantum one outperforms it many-fold. An old example was factoring. I’m sure there’s other examples out there. Maybe try to simplify them for low-qubit circuits.
The biggest gripe: basing the argument on what classical computers can't do. "Can't" is an assumption that's been disproven about so many things, so many times. Who knows what new algorithms people will come up with down the line, doing what we previously couldn't do. I want an argument that's instead based on what can and has been done vs the new thing being done. I similarly preferred constructivist logic to classical, since the latter assumes things don't exist. Universe surprises me too often for that stance.
FPGA’s are already an example of how this might play out. They’re friggin amazing with all kinds of potential. I’d love to have a desktop loaded with them with libraries using them to accelerate my apps. Them being invented and improved wasn’t enough for that, though. The duopoly, esp supported by patents, has kept prices of high-performance FPGA’s way too high.
It would be better if they were priced more like flash memory with the synthesis software being more like mass-market A/V software. We’d see an explosion of use. Instead, the legal system allowed the inventors to keep competition and innovation at low levels to maximize profit. I expect the same with practical QC for at least 20 years. I’ll still be using a mix of NUMA, GPU’s and FPGA’s while early adopters mess around with quantum computers.
The QCs aren't big enough for that yet. Surely there are other problems that are ultra-optimized for classical computers, with lots of effort spent improving them, that might still have a QC algorithm beating them by a QS-supporting speed-up. Something that fits in these smaller machines. Again, the claim that QC did what CC "likely couldn't" would be validated by the effort CC researchers had put into doing it without the same results.
I doubt their setup had nearly as many person years of R&D thrown at solving it as factoring, sorting, strings, BLAS, certain assembly routines, optimization techniques, etc. There's probably something in those with an effective alternative that's QC-able.
I don't see the argument here
Real quantum computers should be able to do calculations that are that powerful, assuming we can prove the best classical algorithms are really exponential.
A boom-and-bust cycle appears to waste more resources, and to progress more slowly overall, than steady refinement.
A source of this confusion is that we need to discuss space and time complexity simultaneously. In the algorithms for quantum simulation (and many other algorithms), there is a trade off between space complexity and time complexity. ELI5: You don't have to store intermediate results if you can recompute them when you need them, but you may end up recomputing them a huge number of times.
For the quantum circuit, the standard method of computation gives memory complexity exponential in the number of qubits (store 2^N amplitudes for an N-qubit wavefunction) and time complexity D*2^N, i.e. linear in circuit depth and exponential in the number of qubits. For example, under the IBM calculation, 53 qubits at depth 30 use 64 PB of storage and a few days of calculation time, while 54 qubits use 128 PB and a week of calculation time. Adding a qubit doubles the storage requirements AND the time requirements.
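Sanity-checking those storage figures (my arithmetic; the quoted 64 PB at 53 qubits implies single-precision complex amplitudes at 8 bytes each):

    BYTES_PER_AMP = 8                              # complex64: two 4-byte floats
    for n in (53, 54):
        pb = (2 ** n) * BYTES_PER_AMP / 2 ** 50    # pebibytes
        print(f"{n} qubits -> {pb:.0f} PB of state vector")
    # 53 qubits -> 64 PB; 54 qubits -> 128 PB, matching the figures above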
Under Google's estimate of the run time, they were assuming a space-time tradeoff. There is a continuous range of space-time tradeoffs: use max memory (as IBM does), max recomputation (store almost no intermediate results, just add each contribution to the final answer and recompute everything), and a range of in-between strategies. While I don't know the precise complexities, the time-heavy strategies will have time complexity exponential in both N and D (roughly 2^(N*D)) and constant space complexity. That's why Google's estimate of the time complexity is so drastically different from IBM's.
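Here is a toy contrast of the two extremes (my own sketch; dense random unitaries stand in for real gate layers, and this is not IBM's or Google's actual code). The Schrödinger style stores all 2^N amplitudes and applies each layer once; the Feynman-style path sum stores almost nothing but loops over (2^N)^(D-1) paths:

    import itertools
    import numpy as np

    N, D = 3, 4                              # 3 "qubits", 4 layers
    rng = np.random.default_rng(0)

    def random_unitary(dim):
        m = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        q, _ = np.linalg.qr(m)               # QR of a random matrix gives a unitary
        return q

    layers = [random_unitary(2 ** N) for _ in range(D)]

    # Schrodinger style: memory 2^N, one matrix-vector product per layer.
    state = np.zeros(2 ** N, dtype=complex)
    state[0] = 1.0
    for U in layers:
        state = U @ state
    amp_schrodinger = state[0]               # amplitude <0|C|0>

    # Feynman style: sum over all intermediate basis states, near-constant memory.
    amp_feynman = 0j
    for path in itertools.product(range(2 ** N), repeat=D - 1):
        states = (0,) + path + (0,)          # fix input |0> and output <0|
        term = 1 + 0j
        for U, (s_in, s_out) in zip(layers, zip(states, states[1:])):
            term *= U[s_out, s_in]
        amp_feynman += term

    print(np.isclose(amp_schrodinger, amp_feynman))  # True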
Side note: IBM also uses a non-standard evaluation order for the quantum circuit, which exploits a trade-off between depth and number of qubits. In the regime of a large number of qubits but relatively small depth, you can again classically simulate with an algorithm that scales as N*2^D rather than D*2^N, using a method that turns the quantum circuit on its side and contracts the tensors that way. In the regime of N comparable to D, the optimal tensor contraction corresponds to neither running the circuit the usual way nor sideways but something in between. None of these tricks fundamentally changes the ultimate exponential scaling, however.
As an extra step, you could also run compression techniques on the tensors (e.g. SVD, throwing away small singular values), turning the space-time tradeoff into a three-way space-time-accuracy tradeoff. You wouldn't expect too much gain from compression, and your accuracy would quickly go to zero if you tried to handle more and more qubits or deeper circuits with constant space and time requirements. However, the _real_ quantum computer (as Google has it now, with no error correction) also has accuracy that goes to zero with larger depth and number of qubits. Thus, one can imagine the next steps in this battle as follows: if we say that Google's computer has not at this moment beaten classical computers with 128 PB of memory to work with, then Google will respond with a bigger and more accurate machine that will again claim to beat all classical computers. Then IBM will add in compression for the accuracy tradeoff and perhaps will again beat the quantum machine.
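A minimal sketch of that compression step (my illustration, on a plain matrix rather than a circuit tensor; note that a random matrix compresses poorly, which is exactly the accuracy-goes-to-zero effect):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(64, 64))

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 16                                   # keep only the top 1/4 of singular values
    A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]     # rank-k approximation, ~1/2 the storage

    print(np.linalg.norm(A - A_k) / np.linalg.norm(A))  # relative error paid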
So this back and forth can continue for a while - but the classical computers are ultimately fighting a losing battle, and the quantum machine will triumph in the end, as exponentials are a bitch.
Quotes: "Fear, uncertainty and doubt (FUD) is a tactic used in sales, marketing, public relations, politics and propaganda." and "FUD was first defined (circa 1975) by Gene Amdahl after he left IBM to found his own company, Amdahl Corp.:"FUD is the fear, uncertainty, and doubt that IBM sales people instill in the minds of potential customers who might be considering Amdahl products.""
"Because the original meaning of the term “quantum supremacy,” as proposed by John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t, this threshold has not been met."
In other words, "quantum supremacy" is not being used as just a buzzword.
So yeah, IBM is casting doubt on Google's claims of "quantum supremacy" because "supremacy" doesn't just mean "better than everything else."
Edit: Thanks guys for pointing out where I misread the article, leaving mistake intact so the replies make sense.
A still fuzzy and arbitrary, but IMO somewhat elegant, cutoff would let feasibility be set by the market and technological development: a task is infeasible if it isn't worth throwing compute at _now_, because the job would run so long that future classical efficiencies would get there (much) quicker.
The ROI was terrible, but it wasn't mere busywork when Galileo was mucking around with balls and ramps.
You perform busywork (if for marketing purposes, usually of a kind favorable to your product, or at least set up that way) on the presumption that it will in some manner transfer to real work.
The more synthetic the task, the less likely that is, but it's still indicative of something, and I don't see why you think this case is so different.
Meaning that when supremacy is demonstrated soon, even accounting for storage, it should still be undoable for a while yet by, say, appropriating everyone's phones and fridge storage, Silicon Valley style.
But you’re right, it is a subjective judgement.
Supremacy means exactly what it's supposed to mean for everyone outside and inside of science. What's the next step, Google can't call its Pixels "Just Black" and "Clearly White" because people would be confused about whether it is racist or not? The world truly is becoming a weirder place by the minute.
Original comment for reference:
I really don't think IBM implied the social (and definitely modern, circumstantial) figurative meaning of supremacy.
Any tech geek, or history major for that matter, would just read it as "crushing domination", with a strong implication that others won't be able to catch up, ever. You know, like Kings and Empires impose their supremacy on other, lesser powers.
In that sense, the expression "quantum supremacy" (QS) is indeed ambiguous insofar as:
- it's a convention, a simple "we agree to say that QS means <mathematical definition>" (not a very clear one at that, the concept lacks boundaries).
- "supremacy" in QS refers to "quantum", i.e. it's the domination of Q computers versus classic computers; and absolutely not that of the company behind the tech —a recent history of the Fortune 500 makes it emphatically clear that there is no such thing as definitive supremacy for companies. In fact, a certain IBM... [snip]
Big Blue is apparently vexed here; but disingenuous linguistically? Not so much, in my opinion. More like downplaying — that is their pettiness, if you ask me: why not salute the engineering feat, and look at yourselves asking "OK guys, how do we win the next step?" In other words, play it fair.
> Professor Preskill summarized the two main objections to the term that have arisen from the community by explaining that the “word exacerbates the already overhyped reporting on the status of quantum technology” and that “through its association with white supremacy, evokes a repugnant political stance.”
> Both are sensible objections. And we would further add...
I would almost qualify that as intellectual self-subjection (and perhaps ethical, in the way of science). Aiming so low that it's trolling, for lack of a better word.
Wikipedia's "Quantum supremacy" article:
> The term was originally popularized by John Preskill but the concept of a quantum computational advantage, specifically for simulating quantum systems, dates back to Yuri Manin's (1980) and Richard Feynman's (1981) proposals of quantum computing.
Preskill's paper: https://arxiv.org/abs/1203.5813 (2012; from the abstract: “[...] we hope to hasten the day when well controlled quantum systems can perform tasks surpassing what can be done in the classical world. One way to achieve such "quantum supremacy" would be to run an algorithm on a quantum computer which solves a problem with a super-polynomial speedup relative to classical computers [...]”)
> It is well known in the quantum community that we at IBM are concerned of where the term “quantum supremacy” has gone. The origins of the term, including both a reasoned defense and a candid reflection on some of its controversial dimensions, were recently discussed by John Preskill in a thoughtful article in Quanta Magazine. Professor Preskill summarized the two main objections to the term that have arisen from the community by explaining that the “word exacerbates the already overhyped reporting on the status of quantum technology” and that “through its association with white supremacy, evokes a repugnant political stance.”
Kind of silly to say we can't use the word supreme or supremacy anymore, just because it's used for white supremacists.
It isn’t even the right word to use, since the intended meaning is a quantum computer that can do something a classical computer can’t do at all, not just do it better.
Quantum Possible sounds better to me.
When speed is an aspect of the requirement, then it's something the other computer can't do.
E.g. 'complete this calculation in n days on hardware x' or 'complete this calculation with time complexity t and memory complexity m'.
No, that's exactly what it means in Google's research. It means it can do it faster than a classical computer, not that it's impossible for a classical computer to do.
"Quantum supremacy is the potential ability of quantum computing devices to solve problems that classical computers practically cannot."
We can argue about what "practically" means, but I read it as a very, very long time. No one worries about someone breaking RSA-2048 with a classical computer because it is "practically" impossible.
>In putting together our video, we estimated the age of the Universe to be 13,751,783,021 years or a little over 13.75 billion years*, therefore if you tried to break a DigiCert 2048-bit SSL certificate using a standard modern desktop computer, and you started at the beginning of time, you would have expended 13 billion years of processing by the time you got back to today, and you would still have to repeat that entire process 468,481 times one after the other into our far far distant future before there was a good probability of breaking the certificate. In fact the Universe itself would grow dark before you even got close.
This may not be the case with Quantum Possible computation.
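The arithmetic behind that quote, using only the figures it gives (my calculation):

    age_of_universe_years = 13_751_783_021
    repeats = 468_481
    print(f"~{age_of_universe_years * repeats:.1e} desktop-years for one RSA-2048 key")
    # ~6.4e+15 desktop-years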
Most of the ideas about quantum computers can never be realized. You can run certain optimizations when you can model an application as energy states, but that covers only a very limited spectrum of applications.