IBM casts doubt on Google's claims of quantum supremacy (ibm.com)
417 points by furcyd on Oct 23, 2019 | 140 comments

Following the quantum supremacy controversy is very frustrating. It gets constantly framed as a decisive step that changes everything, which is wrong. But then there's also an opposite and equally wrong reaction that any challenges to the claim prove that quantum computing doesn't work at all.

The reality is that quantum computing technology has been improving at the same steady pace for decades, and quantum supremacy was recently invented as a concrete but not very useful goal that people could realistically aim for in the near term. That is a totally standard and legitimate way of guiding technological progress.

The problem is that quantum supremacy is vaguely defined (what does "infeasible on any classical computer" mean? what computers can you use, and for how long?) but it is presented to the public as a sharp boundary -- which means that every company in the field has a huge incentive to claim they are the first. This IBM claim is not a fundamental objection. Even granting that everything they say is correct, Google could still achieve quantum supremacy by their standards just by incremental improvement, slapping a few more qubits on. So why even bother disputing Google's claim? Because it will set them back a few years, and that's enough time for IBM to make a claim of its own. That's why academics, who have no stake in which company gets the hype, will just consider this whole episode rather distasteful.

Quantum supremacy is a big deal. It is goddamn experimental evidence against the Extended Church-Turing Thesis. If you never believed the ECT (for example, all physicists seem to think the ECT is obviously false) it may not matter to you, but it still is a serious claim.

Yes, Google probably can achieve quantum supremacy by slapping on a few more qubits. But those few more qubits have, in fact, not been slapped on yet. So quantum supremacy is very much not achieved, and Google should not claim so.

It is serious, but as I said, it's also vaguely defined. We can continue quibbling over details for decades to come. Suppose Google does slap on more qubits so the time estimate goes up from 2.5 days to 25000 years. Is that quantum supremacy yet? Well, maybe if you used literally every classical computer in the world in parallel, and also used a very optimized algorithm, it would only take 2.5 years, which isn't that long if you're patient. So you could say that Google needs to slap on yet more qubits.

That's why arguing over whether it's really quantum supremacy is just not useful. People can always move the goalposts, so we might as well ban that word from discussion. The underlying fact, which nobody is debating, is that quantum computers are regularly doing tasks that are harder and harder for classical computers to match.

And yes, the ECT is obviously false -- or at the very least, not obviously true. I'm surprised at how many CS people think it is some unassailable principle. Most CS courses devote precisely zero time to non-classical models of computation. If you've never studied a model of computation that could challenge ECT, then how can you be so sure of it? This is like being absolutely certain that I am the tallest person in the world, and refusing to ever leave my house to check.

I think the problem is that perfectly wonderful science has been put in the hands of marketing people. And those are the same marketing people that kept telling us L5 autonomy was right around the corner. Or more lately that we couldn't discount the possibility of a near-term AGI. Great science, lousy PR IMO.

How does quantum computation conflict with the ECT? I think Aaronson has talked about this, but I don't remember his position.

The original Church Turing thesis concerns itself with computability and quantum computers do not violate it.

The Extended version goes on to say that a probabilistic TM can efficiently simulate all realistic models of computation. Quantum computers very likely violate the efficiency claim, and this supremacy result is strong evidence in support.

Since I had to look it up on Wikipedia:

Quantum supremacy is the potential ability of quantum computing devices to solve problems that classical computers practically cannot. The weaker term Quantum advantage refers to the potential to solve problems faster. In computational-complexity-theoretic terms, this generally means providing a superpolynomial speedup over the best known or possible classical algorithm. The term was originally popularized by John Preskill but the concept of a quantum computational advantage, specifically for simulating quantum systems, dates back to Yuri Manin's and Richard Feynman's proposals of quantum computing.

It sounds like Quantum Supremacy is the Turing Test of our time. A worthwhile milestone to aim for, but not the holy grail people thought it was.

> Just by adding a few more qubits

IBM's algorithm scales approximately linearly in the number of qubits. So, you'd need more than a few more qubits...

No. It scales linearly with circuit depth. Exponentially with the number of qubits. Just read Scott Aaronson's blog; he's probably the greatest expert on this subject. https://www.scottaaronson.com/blog/?p=4372

No, in order to get linear time, they used an algorithm which requires exponential space. Once you add a few more qubits that breaks down too.

The main takeaway here is figure 1 in this article: they show that for an increasing circuit depth, computation time (on a classical computer) scales linearly. Google claims, on the other hand, that a classical calculation would scale exponentially. This is the basis for the graph in the Google blog [1], which seems to suggest that the quantum computer can easily reach points (such as qubits=50, cycles=25) which the classical computer would never be able to reach. This is not true, if IBM is right. Their linearly scaling graph proves that they are not nitpicking their input, I would say.

[1]: https://ai.googleblog.com/2019/10/quantum-supremacy-using-pr...

There are many problems where the best known quantum algorithm is asymptotically better than the best known classical algorithm but, as far as I've heard, nobody has ever found a case where the best quantum algorithm can be proved to be better than the best classical algorithm. So there's always going to be a danger of this happening no matter what problem you attack.

Isn't quantum factoring proven to be exponentially faster than the best known classical one?

The only question is whether there is a better classical factoring algorithm; we don't know.


> On a quantum computer, to factor an integer N, Shor's algorithm runs in polynomial time. This is almost exponentially faster than the most efficient known classical factoring algorithm, the general number field sieve, which works in sub-exponential time

Yes, for factoring integers the best known quantum algorithm is better than the best known classical algorithm. The catch is that we don't know if a better classical algorithm exists but just wasn't discovered yet.

Compare this for example to sorting. We have proven that any sorting algorithm working with comparisons can at best be O(n*log(n)) fast, it's impossible for a faster classical algorithm to exist.

>We have proven that any sorting algorithm working with comparisons can at best be O(n*log(n)) fast, it's impossible for a faster classical algorithm to exist.

You can have a faster classical algorithm if it's distributed across n threads for a size n array. For each index i in an array arr, spawn a thread that counts the number of values in arr that are less than arr[i] (an O(n) linear scan), call this x, then do outputArray[x] = (arr[i], 1), where outputArray is initialised to all zeros. If outputArray[x] already exists, instead increment the second term (count) in the tuple. Once all threads are finished, then outputArray will contain the sorted values and the count of duplicate values (so this only works if !(x < y) && !(y < x) implies y == x). Each thread does O(n) work, so total work is O(n^2), but because the threads all run in parallel, the runtime is only O(n).
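For concreteness, here is a minimal sequential sketch of the counting-based sort described above (my own illustration, not anything from the thread). Each loop iteration stands in for one of the hypothetical threads; total work is O(n^2) either way, and only with n truly parallel processors would wall-clock time drop to O(n).

```python
# Sketch of the counting-position sort described above. One loop
# iteration = one conceptual thread. Works only for types where
# !(x < y) && !(y < x) implies x == y.

def counting_position_sort(arr):
    n = len(arr)
    out = [None] * n
    for i in range(n):                            # "one thread per index"
        x = sum(1 for v in arr if v < arr[i])     # O(n) linear scan
        if out[x] is None:
            out[x] = (arr[i], 1)
        else:
            out[x] = (out[x][0], out[x][1] + 1)   # count duplicates
    # duplicates leave empty slots behind; drop them
    return [v for v in out if v is not None]

print(counting_position_sort([3, 1, 2, 3]))  # [(1, 1), (2, 1), (3, 2)]
```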

For the most part, classical algorithms assume a fixed-amount of compute power.

"Total work" is the important part to minimize in practice. If total work scales at O(n^2), then in practice the job scales at O(n^2). It's infeasible to build O(n^2) CPUs running O(n^2) threads as n grows.

This is an apples-to-oranges comparison. Time complexities are with respect to some computational model. When not specified, one typically assumes it's with respect to a deterministic Turing machine or something a lot like it. The model you are describing allows for unbounded parallel execution.

No, the bound is not on the performance of the sorting algorithm, but rather on the minimum number of comparisons required.

I would argue that "number of comparisons" is just the popular performance metric.

Of course in reality we care about execution time, and in order to go from "we need O(n * log(n)) comparisons" to "we need O(n * log(n)) time" you need additional assumptions, like "memory access takes O(1) time, and the speed of one comparison is independent of the total amount of data". But for any reasonable set of assumptions an algorithm that uses O(n * log(n)) comparisons takes at least O(n * log(n)) time (probably more since memory access isn't really O(1), it's just common to pretend that it is).

I disagree, because it is perfectly possible to sort without comparisons. For example, sorting a list of distinct integers is theoretically O(N). (radix sorting)

That's not sorting, that's sorting distinct integers, which is a different problem.

That's like saying multiplication can't have any lower bound because multiplying by zero is O(1).

That's not a definition supported by any literature I've read. Seems like you're moving the goalpost because you don't like the definition. For what it's worth, radix sort is listed on the wikipedia page for "Sorting Algorithms", and its runtime is listed as O(n * k) where k is the number of digits per number: https://en.wikipedia.org/wiki/Sorting_algorithm#Radix_sort
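Since radix sort keeps coming up, here is a minimal LSD radix sort sketch (my own illustration): O(n * k) where k is the number of digits, and no element-to-element comparisons anywhere.

```python
# Minimal LSD (least-significant-digit) radix sort for non-negative
# integers. One stable bucket pass per digit: k passes of O(n) work.

def radix_sort(arr, base=10):
    if not arr:
        return arr
    max_val = max(arr)
    exp = 1
    while max_val // exp > 0:                      # k passes total
        buckets = [[] for _ in range(base)]
        for v in arr:                              # O(n) per pass
            buckets[(v // exp) % base].append(v)
        arr = [v for b in buckets for v in b]      # stable concatenation
        exp *= base
    return arr

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```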

As far as I'm aware there actually is no proof that there is no polynomial-time factoring algorithm. Complexity theory contains a lot of cargo-cult belief with few solid proofs, unfortunately. One reason is of course that it is a very hard field of mathematics. See https://www.math.ias.edu/avi/book for a recent survey.

But parent was stating something very different: that today quantum factoring is only asymptotically better than the classical one, when that is clearly not the case.

The other thing we don't know is whether it's physically possible to build a quantum computer capable of it. As opposed to a theoretical ideal quantum computer.

The thing that gets smoothed over with QC is managing the error rates and the fact that it hasn't been shown to be physically possible to scale up computation without the error rates also scaling up exponentially and making the thing useless.

There are provable quadratic speedups over all possible classical algorithms.

Those are all about problems stated in terms of querying some blackbox oracle. Whether the speedup holds up with a concrete instantiation of the blackbox is unknown.

Could you elaborate? You can search for the solution to any computational problem for which there is a binary function f(n) mapping candidates n to {0,1}, depending on whether it's a solution. Here, f is the "oracle", but it's only playing the role of representing the problem you're trying to solve, and I believe an f exists for all problems in NP.

If your set of solutions has N candidates, this requires only O(sqrt(N)) evaluations rather than O(N). If that function takes exponential time, then there is still of course no polynomial-time solution, but the quadratic speedup holds.


There are some NP problems for which we know techniques that are faster than brute-force search, but even then we can generally speed up those techniques in the same way.


(I guess you could worry that for every problem there is an unknown techniques for classically speeding things up over brute-force search, but I thought this was known to be false, and in any case is unrelated to your point. I'm not an expert though.)
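To make the quadratic gap concrete, here's a back-of-envelope tabulation (my own illustration): classical brute-force search needs on the order of N oracle queries, while Grover's algorithm needs about (pi/4)*sqrt(N). This only counts queries; it does not simulate the quantum algorithm itself.

```python
# Query counts for unstructured search over N = 2^n candidates:
# classical brute force ~ N, Grover ~ floor((pi/4) * sqrt(N)).
import math

for n_bits in (20, 40, 60):
    N = 2 ** n_bits
    grover = math.floor((math.pi / 4) * math.sqrt(N))
    print(f"{n_bits}-bit space: classical ~{N:.2e} queries, Grover ~{grover:.2e}")
```

Note that if each oracle evaluation takes exponential time, the overall runtime is still exponential; only the number of evaluations shrinks quadratically.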

But is there actually a proof that these are improvements over all possible classical algorithms or are they just improvements over the best currently known classical algorithms?

That's what my last two sentences address.

So if they increase the number of qubits they'll have a classically infeasible calculation again? Should be interesting.

If the problem is linear, then frankly I don't think anyone cares if there's a QC implementation.

My impression is that the difficulty is linear in the depth of the calculation, exponential in the number of qubits.

Repeating my question from the other thread:

If IBM can cast doubt by describing a 2.5 day classical run, why not just do that instead, and blow the claim out of the water?

There were a couple of responses (thanks) but I still don't understand - it looks like it's not as easy as they imply.

Such a demonstration would prove nothing of importance, partly because the issue is not how fast a particular run takes, but how running time grows with the size of the problem. More significantly, the issue is whether the paper's argument for its original claim is sound, and that would be best addressed by showing where it goes wrong. For a brief and clear explanation, read this excellent post:


If you don't follow what is being said there, reposting your question will not help.

I'm sure you didn't intend snark, but no need for the tone - that's a new post and not a response to my original question.

Thank you for the link to a potential explanation.

My apologies for the tone. I should have said that the best route to understanding the issue is probably via the linked post - it worked for me.

Thanks no worries

I stand corrected by Scott Aaronson! [1] One of the things he says is that it would be useful to run IBM's algorithm, not to show that it can be done, but to verify Google's 53-qubit and depth-20 result, as, so far, Google has only been able to present indirect evidence that they are getting the correct result at this size of problem.

[1] https://www.scottaaronson.com/blog/?p=4372

Yes, thanks, I've since read that - excellent and thorough article.

They didn't say 2.5 days on a laptop. They probably mean on a bunch of datacenters together. That is expensive.

Google implied supremacy over all of the world's computers combined.

They are talking about 2.5 days on the world's most powerful super computer, which Google had previously claimed would take 10,000 years to complete the task.

These calculations of 2.5 days and 10,000 years are based on specific parameters, right? Could they just prove it on a smaller version of the problem instead? Can they do a version of the problem which takes maybe 1 minute with IBM's algorithm and 1 year with Google's?

Aaronson's blog goes into this. The problem is exponentially difficult to simulate in the number of qubits. Each qubit doubles the storage required. 25 qubits can be done on a laptop. 50 requires a supercomputer. 53 is just small enough to fit on the biggest supercomputer. 55 doesn't fit.
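For intuition, here's a rough sketch of where those qubit counts come from (my own illustration), assuming a full state vector of 2^n complex amplitudes at 16 bytes each (double-precision complex):

```python
# Storage for a full n-qubit state vector: 2^n amplitudes * 16 bytes.

def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

def fmt(nbytes):
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB"):
        if nbytes < 1024:
            return f"{nbytes:.0f} {unit}"
        nbytes /= 1024
    return f"{nbytes:.0f} EiB"

for n in (25, 50, 53, 55):
    print(f"{n} qubits: {fmt(statevector_bytes(n))}")
# 25 qubits fit in laptop RAM (512 MiB); 53 qubits need ~128 PiB,
# roughly the disk capacity of the largest supercomputers; 55 don't fit.
```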

There's also an alternative algorithm that requires less space, but it takes longer.

The 2.5 day run is for Summit, the world's #1 supercomputer, owned by Oak Ridge National Laboratory. Getting use of the entire machine for 2.5 days may take some doing.

IBM's head of research had a pretty confident statement the other day: "I’m convinced there are more quantum computers working here than the rest of the world combined, in this building," Dr. Gil said.


This has been a consistent problem with quantum computers: they come up with a quantum computer that can basically do one thing and compare it against general software not tuned for that problem. Once the software is tuned for the exact problem, it is much closer in performance to the quantum computer. The same thing arises with D-Wave.

Yes, "tuned" with 250 petabytes of storage.

To be fair, that exact computer is, I believe, what Google originally compared against.

Also of note is Gil Kalai's updated take on the experiment [1]. The heart of the complaint is that the quantum computer's solution to this problem relies on a calibration process, which is done on a classical computer and requires resources orders of magnitude higher than the quantum portion of the computation:

> The Google experiment actually showed that a quantum computer running for 100 seconds PLUS a classic computer that runs 1000 hours can compute something that requires a classic computer 10 hours.

EDIT: Just as a note, lest you dismiss Kalai as an outsider/crackpot, you can look at the researchers who show up in the comments on the blog post.

[1] https://gilkalai.wordpress.com/2019/10/13/the-story-of-poinc...

Crucial question - is the calibration a one time effort where the same calibration can be used to calculate any number of results within the calibrated space? In other words, can you amortize the calibration cost across 1,000 runs and come out an order of magnitude ahead?

"John Martinis explicitly confirmed for me that once the qubits are calibrated, you can run any circuit on them that you want."


Just saw this! As Aaronson mentions, coupled with the IBM work, if it does work out, it should be possible to verify that the 53-bit samples are actually valid through the cross entropy test, which would be very exciting.

See the comments on Kalai's post from Peter Shor, where Kalai attempts to address this, and my reply to mantap on this thread where I try to give my poor understanding of the critique.

I'll just say the line between being unfairly critical and raising valid points seem to be a very thin one.

Especially when Peter Shor is disagreeing with the critique (and I agree with him).

Saying that a quantum computation does not qualify because it included (a short) classic computation does not seem a fundamentally fair criticism to me.

If I understand his point correctly, it's not the calibration per se that is problematic, but whether the calibration means that it only performs for a small subspace of inputs. Which raises the question: how large is that subspace?

My understanding is that the calibration could only be feasibly carried out for smaller circuits where simulating the circuit classically was possible. At issue is whether the calibration is generalizable; whether the calibration carried out for a particular small circuit C was reused to simulate another circuit C', or whether fresh calibrations were carried out.

Page 17 of the supplemental data [1] shows the calibration process in schematic, and appears to indicate that the calibration was necessary for each input circuit C, since the graph showing the fidelity improving with iterations of the calibration is labelled "b, Data from a two-qubit XEB experiment." The water is admittedly muddy, and hopefully clarification will be forthcoming from some quarter to determine whether Kalai's particular critique has substance.

[1] https://static-content.springer.com/esm/art%3A10.1038%2Fs415...

Even if the calibration process "cheats", why should we care? It's not like being practical is the goal. After the calibration is done, you still have a device that can do something believed intractable on classical computers. If it is possible for a classical computer to get comparable results just from a long calibration process, then that means the cross entropy test is flawed/limited, which is a separate argument.

At issue, as I understand it, is whether the results from the 53-bit simulation (which would take 10,000 years, or 2.5 days, depending on who you believe) are valid or whether they are just so much junk because they haven't been calibrated and rely on calibrations that may or may not generalize. Kalai responds to Peter Shor making similar critiques in the comment threads in the blog, but it seems it comes down to clarifications on methodology, so may not be a fatal defect.

If you're going to allow cheating on calibration then it's trivial to achieve "quantum supremacy": simply precompute the answer and call that "calibration"!

There's no "answer" to precalculate, it's drawing from a distribution. Could you calibrate a non-quantum circuit to have a distribution passing the test? Supposedly, this is not possible even given arbitrary pre-computation time.

I can pre-calculate as many samples from the distribution as you can generate during the coherence time of the QC. Once you lose coherence, you have to re-calibrate, and during that time I can pre-compute another set of samples.

That's true, if you pre-sample you can beat the test. Is that the effect you think the calibration is having?

> Once you lose coherence, you have to re-calibrate

I didn't realize this, where did you see that?

> Is that the effect you think the calibration is having?

I have no idea. All I know (or at least I'm pretty sure of) is that you have to re-calibrate every time you lose coherence, otherwise it would not be calibration, it would be a one-time process that would be considered part of the construction of the QC.

> where did you see that?

Nowhere. I inferred it. It's possible that I'm wrong, but I'll give you long odds against. I have a pretty good understanding of how QCs work.

You can bring the complexity on a classical machine down to linear by allowing for large disk space, that is precisely the claim that IBM makes in response to the argument.

According to Scott Aaronson, Martinis himself rebutted Kalai's objections: the calibration still allows you to load any circuit later.

The linked paper shows the "classical system" is the Summit supercomputer [1], using 128 petabytes of disk. Still, it's remarkable that IBM's choice of a different simulation algorithm, trading memory for CPU usage, cuts the time from 10,000 years to only 2.5 days.

[1] https://www.olcf.ornl.gov/olcf-resources/compute-systems/sum...

Summary: Google overhyped their results against a weak baseline.

This seems to be commonplace in academic publishing, especially new-ish fields where benchmarks aren't well-established. There was a similar backlash against OpenAI's robot hand, where they used simulation for the physical robotic movements and used a well-known algorithm for the actual Rubik's Cube solving. I still think it's an impressive step forward for the field.

Also AlphaZero vs Stockfish, the setup has/had a big question mark over it as well.

I'm a layman on this, but reading IBM's rebuttal, it seems that they're saying "you can't say that a conventional supercomputer would take 10,000 years because you haven't accounted for spilling to disk and other optimisations".

I understand that some of these optimisations might be on the processing logic, but doesn't this end up being like comparing apples to oranges?

I understand that there's also what seems to be technicalities around what is "quantum supremacy", but I assume that on a 'processor against processor' basis, this is still significant. Am I oversimplifying things?

No, the point is that spilling to disk allows you to run the classical algorithm in linear time, so there is no exponential speedup. I elaborated on this in detail here: https://news.ycombinator.com/item?id=21334025

They're saying explicitly:

>We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity.

and therefore

>Because the original meaning of the term “quantum supremacy,” as proposed by John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t, this threshold has not been met.

Scott Aaronson now has a detailed analysis and covers the IBM claims as well: https://www.scottaaronson.com/blog/?p=4372

The high order bit is that the IBM claims do not cast much of a shadow over the claims of quantum supremacy and still show the same exponential difference.

There's a thread discussing Scott Aaronson's post here: https://news.ycombinator.com/item?id=21335907

There are three threads. The others are:

Google's post https://news.ycombinator.com/item?id=21332768

Scott Aaronson's post https://news.ycombinator.com/item?id=21335907

If a worst case to simulate on a classical computer is 2.5 days (per IBM's blog) then they should either:

1. Actually perform the experiment per their approach, compare the results, and provide tangible proof in the coming days.


2. Retract their claims.

In either case, exciting times ahead.

[Edit: Yes, I read their published paper [1], which only claims the simulation is possible and does not actually perform it.

1. https://arxiv.org/pdf/1910.09534.pdf]

They need the world’s most powerful supercomputer to do that, and it may not be available for them to use.

Seeing this published in Science Magazine is interesting in itself.

It's like IBM and Google, at a high level, work on different problems. IBM is focused on research and public discussion of the problem space. Google, on the other hand, is focusing on marketing, using fancy terms like "supremacy" that can easily be misunderstood by the public.

Isn't Google's result published in Nature -- a peer reviewed journal? Shouldn't IBM just do the same with their refutation?

I am sure this exact rebuttal is already sent to Nature and under review and will be published in the next issue.

Quote 1: "The Google group reiterates its claim that, in 200 seconds...."

Quote 2: " ...by tweaking the way Summit approaches the task, it can do it far faster: in 2.5 days."

Well, it's still 200 seconds vs 2.5 days, so I'd say the claim kinda stands. I know I would definitely buy the computer with 200 seconds vs the one with 2.5 days.

Not sure it's that clear cut, as per the blog post[0] linked elsewhere in this thread[1]:

> Recall that a quantum supremacy demonstration would be an experiment where a quantum computer can compute in 100 seconds something that requires a classical computer 10 hours. (Say.)

>The Google experiment actually showed that a quantum computer running for 100 seconds PLUS a classic computer that runs 1000 hours can compute something that requires a classic computer 10 hours.

0: https://gilkalai.wordpress.com/2019/10/13/the-story-of-poinc... 1: https://news.ycombinator.com/item?id=21334403

According to Scott Aaronson this claim is incorrect.

> Next he said that the experiment is invalid because the qubits have to be calibrated in a way that depends on the specific circuit to be applied. Except, this too turns out to be false: John Martinis explicitly confirmed for me that once the qubits are calibrated, you can run any circuit on them that you want.

Nobody is denying that the quantum system performs the task faster. The question is whether it performs a task that can't feasibly be performed by a classical system. That is what the researchers are specifically referring to when they say "quantum supremacy."

2.5 days is feasible. 10,000 years is not feasible.

Quantum supremacy is more about quantum computers being able to do something that classical computers could never do (1). I think that's the heart of the controversy. I'm quite sure quantum computers have already proved faster for certain use cases (2).

1. https://twitter.com/NatureNews/status/1186969282632134656?re... 2. https://arxiv.org/abs/1904.05803

Part of this controversy is reminiscent of the Higgs boson "God particle" nonsense, where an unscientific catchphrase created by scientists got picked up by the media and a misinterpretation overshadowed the real scientific issue.

I’m with them. My two problems when reading the original argument:

They rigged the benchmark to be something quantum-specific before saying classical computers couldn't do it. I get why. I'd just rather it be something they both could do, classical having great algorithms, and the quantum one outperforms it many-fold. An old example was factoring. I'm sure there's other examples out there. Maybe try to simplify them for low-qubit circuits.

The biggest gripe: basing the argument on what classical computers can't do. "Can't" is an assumption that's been disproven about so many things so many times. Who knows what new algorithms people will come up with down the line, doing what we previously couldn't do. I want an argument that's instead based on what can and has been done vs the new thing being done. I similarly preferred constructivist logic to classical, since the latter was assuming things didn't exist. The universe surprises me too often for that stance.
Another thing about these is the comparison to supercomputers. Me personally, I don’t care if the quantum computer outperforms supercomputers. I’m looking at practical QC like anything else: does it do the job more cost-effectively than alternative solutions? That cost includes hardware, any leasing/licensing, maintenance, specialists, etc. If it does, then they’ve achieved something for me. If it doesn’t, then QC was an implementation detail for a system that didn’t make the cut.

FPGA’s are already an example of how this might play out. They’re friggin amazing with all kinds of potential. I’d love to have a desktop loaded with them with libraries using them to accelerate my apps. Them being invented and improved wasn’t enough for that, though. The duopoly, esp supported by patents, has kept prices of high-performance FPGA’s way too high.

It would be better if they were priced more like flash memory with the synthesis software being more like mass-market A/V software. We’d see an explosion of use. Instead, the legal system allowed the inventors to keep competition and innovation at low levels to maximize profit. I expect the same with practical QC for at least 20 years. I’ll still be using a mix of NUMA, GPU’s and FPGA’s while early adopters mess around with quantum computers.

Your biggest gripe can't be solved. P vs PSPACE is a great unsolved problem in computer science (almost as big as P vs NP), and since PSPACE can simulate a quantum computer, any solution to your gripe immediately leads to a resolution of P vs PSPACE.

That's disproven or just side-stepped given there's already a solution addressing my biggest gripe: factoring. It's been researched and optimized to death on classical computers. Quantum should get a massive speedup. Such a demo of quantum supremacy would be quite convincing even if not 100% proven.

The QC's aren't big enough for that yet. Surely there are other problems ultra-optimized for classical computers, lots of effort at improving them, and might still have a QC algorithm that beats it by QS-supporting speed-up. Something that fits in these smaller machines. Again, the claim that QC did what CC "likely couldn't" would be validated by the effort CC researchers put into doing it without same results.

I doubt their setup had nearly as many person years of R&D thrown at solving it as factoring, sorting, strings, BLAS, certain assembly routines, optimization techniques, etc. There's probably something in those with an effective alternative that's QC-able.

Is 200 seconds not faster than 2.5 days??

I don't see the argument here

The key claim is linear scaling. IBM: "The runtime of our simulation method scales approximately linearly with the circuit depth". That directly contradicts what Google wrote in the paper: "... algorithm becomes exponentially more computationally expensive with increasing circuit depth".

Supremacy does not mean simply faster. That is already proven. It means impossible to do on a classical computer.

I'm sure you mean infeasible. "Impossible" goes back to a problem's decidability. If a problem is undecidable, there's no way you're going to find its solution by switching computational models; classical computers can do anything a quantum computer can do, just "slower".
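For intuition, here's a toy statevector simulator, purely illustrative: it runs a Hadamard-only circuit exactly on a classical machine, but the state it carries around has 2^n amplitudes, which is exactly where the "slower" (exponential) cost comes from.

```python
# Toy statevector simulator: a classical machine can run any quantum
# circuit, but the state needs 2^n amplitudes, so cost grows exponentially.
import math

def apply_hadamard(state, target):
    """Apply a Hadamard gate to qubit `target` of a statevector."""
    h = 1 / math.sqrt(2)
    new = state[:]
    for i in range(len(state)):
        if not (i >> target) & 1:      # index with target bit = 0
            j = i | (1 << target)      # partner index with target bit = 1
            a, b = state[i], state[j]
            new[i] = h * (a + b)
            new[j] = h * (a - b)
    return new

n = 3
state = [0.0] * (2 ** n)
state[0] = 1.0                         # start in |000>
for q in range(n):
    state = apply_hadamard(state, q)

# Uniform superposition: every amplitude is 1/sqrt(2^n)
print([round(x, 4) for x in state])    # [0.3536, 0.3536, ..., 0.3536]
```

Doubling n doubles the list and the work per gate, so "just slower" means exponentially slower.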

Not if they use up all the energy in the visible universe or they need so much power they collapse and form a black hole.

Real quantum computers should be able to do calculations that are that powerful, assuming we can prove the best classical algorithms are really exponential.

The quantum circuit based simulation actually has much worse accuracy. A classical computer can simulate the class of circuits they are proposing with linearly increasing runtime (as opposed to exponential, as Google claims) and _much_ higher accuracy, according to IBM.

It is the difference between "with this quantum chip we can break, in 30 minutes, encryption that would otherwise take 100,000 years" and "it would take us 50 years."

I believe this is a healthy exchange.

A boom & bust cycle appears to waste more resources and progress more slowly overall than steady refinement.

Embarrassing for IBM (the snarky quote of Dario Gil) and for Science (the snarky headline).

There seems to be a lot of confusion here about the complexity, for example calling the simulation here "in the linear regime" due to IBM's result. This is inaccurate.

A source of this confusion is that we need to discuss space and time complexity simultaneously. In the algorithms for quantum simulation (and many other algorithms), there is a trade off between space complexity and time complexity. ELI5: You don't have to store intermediate results if you can recompute them when you need them, but you may end up recomputing them a huge number of times.
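That ELI5 in code (nothing quantum here, just the textbook toy): Fibonacci computed with and without stored intermediate results.

```python
# ELI5 of the space/time trade-off: recompute everything (tiny memory,
# exponential time) vs. store intermediate results (linear memory and time).
from functools import lru_cache

def fib_recompute(n):
    """No storage: every subresult is recomputed, exponential time."""
    return n if n < 2 else fib_recompute(n - 1) + fib_recompute(n - 2)

@lru_cache(maxsize=None)
def fib_stored(n):
    """Memoized: each subresult computed once, O(n) time, O(n) space."""
    return n if n < 2 else fib_stored(n - 1) + fib_stored(n - 2)

print(fib_recompute(20), fib_stored(20))  # both 6765
```

Same answer either way; the only question is whether you pay in memory or in repeated work.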

For the quantum circuit, the standard method of computation gives memory complexity exponential in the number of qubits (store 2^N amplitudes for an N-qubit wavefunction) and time complexity D·2^N, i.e. linear in circuit depth and exponential in number of qubits. For example, under the IBM calculation, 53 qubits at depth 30 use 64 PB of storage and a few days of calculation time, while 54 qubits use 128 PB and a week of calculation time. Adding a qubit doubles the storage requirements AND the time requirements.
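A back-of-envelope check of those storage figures, assuming 8 bytes per amplitude (single-precision complex; double precision would double everything):

```python
# Full statevector for n qubits holds 2^n amplitudes.
BYTES_PER_AMPLITUDE = 8  # assumption: single-precision complex

def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (53, 54):
    pib = statevector_bytes(n) / 2 ** 50   # pebibytes
    print(f"{n} qubits: {pib:.0f} PiB")
# 53 qubits: 64 PiB; 54 qubits: 128 PiB.  One extra qubit doubles
# both storage and, at fixed depth, simulation time.
```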

Under Google's estimation of the run time, they were using a space-time tradeoff. There is a continuous range of space-time tradeoffs: USE MAX MEMORY as IBM does, MAX RECOMPUTATION (store almost no intermediate results, just add each contribution to the final answer and recompute everything), and a range of in-between strategies. While I don't know the precise complexities, the time-heavy strategies will have time complexity exponential in both N and D (2^(ND)) and constant space complexity. That's why Google's estimate for the time complexity is so drastically different from IBM's.

Side note: IBM also uses a non-standard evaluation order for the quantum circuit, which exploits a trade-off between depth and number of qubits. In the regime of a large number of qubits but relatively small depth, you can again classically simulate using an algorithm that scales as N·2^D rather than D·2^N, using a method that turns the quantum circuit on its side and contracts the tensors that way. In the regime of N comparable to D, the optimal tensor contraction corresponds to neither running the circuit the usual way nor sideways, but something in between. None of these tricks fundamentally changes the ultimate exponential scaling, however.

As an extra step, you could also run compression techniques on the tensors (i.e. SVD, throwing away small singular values), to make the space-time complexity tradeoff into a three-way space-time-accuracy tradeoff. You wouldn't expect too much gain by compression, and your accuracy would quickly go to zero if you tried to do more and more qubits or longer-depth circuits with constant space and time requirements. However, the _real_ quantum computer (as Google has it now, with no error correction) also has accuracy that goes to zero with larger depth and number of qubits. Thus, one can imagine that the next steps in this battle are as follows: if we say that Google's computer has not at this moment beaten classical computers with 128 PB of memory to work with, then Google will respond with a bigger and more accurate machine that will again claim to beat all classical computers. Then IBM will add in compression for the accuracy tradeoff and perhaps will again beat the quantum machine.
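A minimal sketch of that compression step, using NumPy's SVD on a toy matrix standing in for a tensor; the matrix, its rank, and the cutoff `k` are all illustrative assumptions:

```python
# Truncate small singular values and measure the accuracy given up
# for the space saved: the space-time-accuracy knob in miniature.
import numpy as np

rng = np.random.default_rng(0)
# A low-rank matrix (rank ~5) plus small noise, as a toy stand-in.
A = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 100))
A += 1e-3 * rng.normal(size=(100, 100))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 5                                    # keep only the top-k singular values
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]     # compressed reconstruction

stored_full = A.size
stored_k = U[:, :k].size + k + Vt[:k, :].size
err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"storage: {stored_k}/{stored_full} numbers, relative error: {err:.1e}")
```

Here the compression is nearly free because the matrix really is low-rank; a generic deep random circuit is exactly the case where it wouldn't be, which is the author's point about accuracy going to zero.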

So this back and forth can continue for a while - but the classical computers are ultimately fighting a losing battle, and the quantum machine will triumph in the end, as exponentials are a bitch.

I wish IBM had chosen their words more carefully here.

We should probably amend the link to point to IBM's actual blog post, https://www.ibm.com/blogs/research/2019/10/on-quantum-suprem...

I don't trust anything IBM says. They have a long history of lying and misleading that continues to this day. It's about as trustworthy as Facebook saying you shouldn't be concerned about privacy on Facebook.

Do you have any sources that elaborate on your claims of IBM being misleading?


Quotes: "Fear, uncertainty and doubt (FUD) is a tactic used in sales, marketing, public relations,[1][2] politics and propaganda." and "FUD was first defined (circa 1975) by Gene Amdahl after he left IBM to found his own company, Amdahl Corp.:"FUD is the fear, uncertainty, and doubt that IBM sales people instill in the minds of potential customers who might be considering Amdahl products.""

Is IBM still relevant?

They are doing a whole bunch of research, including in quantum computing. It's hard to tell what's really "relevant" in research; if you knew, it wouldn't be research. IBM stands as good a chance as anybody to produce a real-live quantum computer, and they've hired the people who are in a position to judge what Google really has and hasn't achieved thus far.

Not really though - they just said it wasn't quite as supreme as Google said.

IBM's response to Google called "On 'Quantum Supremacy'" has this to say:

"Because the original meaning of the term “quantum supremacy,” as proposed by John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t, this threshold has not been met."

In other words, "quantum supremacy" is not being used as just a buzzword.

So yeah, IBM is casting doubt on Google's claims of "quantum supremacy" because "supremacy" doesn't just mean "better than everything else."

But it's still pretty fuzzy. If we agree that 10,000 years on a classical system is supremacy but 2.5 years isn't, where's the cutoff? 25 years? 250? An average human lifespan?

Edit: Thanks guys for pointing out where I misread the article, leaving mistake intact so the replies make sense.

2.5 _days_, not years.

A still fuzzy and arbitrary but imo somewhat elegant cutoff would let feasibility be set by the market and technological development, in the sense of something not being worth throwing compute at _now_ because the timescale of work is slow enough that future classical efficiencies will progress (much) quicker.

For market forces to be meaningful, the calculation would need to provide value to someone. At the moment the desired output is simply "a distribution which is hard to simulate classically," so assessment in terms of market value would be very premature.

But that goes for any raw benchmarking, it’s busywork by design.

I'm pretty cynical about this specific experiment, but that's taking it to another level.

The ROI was terrible, but it wasn't mere busywork when Galileo was mucking around with balls and ramps.

Haha sorry should've been much clearer, I meant specifically computer power benchmarking.

You perform busywork; if it's for marketing purposes, it's usually of a kind, or at least set up in a way, that is favorable to your product, on the presumption that it will in some manner transfer to real work.

The more synthetic the less likely that is, but it's still indicative of something and I don't see what you mean is so different here.

The cutoff is at the level of complexity. IBM claims the simulation is linear in depth, so it is feasible on a classical computer regardless of the time it takes. Google's claim is that it is exponential, which would mean they achieved quantum supremacy, having demonstrated a quantum processor that can solve a problem a classical processor practically cannot.

Yes, but the interesting thing to me is how that cutoff between linear and exponential complexity comes down not to the processor but other computing resources.

Meaning that when supremacy is soon demonstrated even accounting for storage, it should be possible to undo it for a while yet by, say, appropriating everyone's phone and fridge storage, Silicon Valley style.

2.5 days, as reported by IBM, is more clearly feasible.

But you’re right, it is a subjective judgement.

Judging entirely by my experience with IBM: put up or shut up. Every interaction I've had with IBM the last few years has been shit, to the point that all of my confidence in the entire organization is gone, and they'll have to earn it back. Put up or shut up, IBM.

Pretty rich by IBM to voice any criticism after its "Watson" disaster.

Saying that it evokes "white supremacy" is kind of a pathetic move.

> IBM's researchers argue that the term "supremacy" is in danger of being misunderstood by those outside the rarefied world of quantum geeks.

Supremacy means exactly what it's supposed to mean for everyone outside and inside of science. What's the next step, Google can't call its pixels "Just Black and Clearly White" because people would be confused about whether it is racist? The world truly is becoming a weirder place by the minute.

Edit: I am definitely wrong about my interpretation (see child post by DashAnimal)


Original comment for reference:

I really don't think IBM implied the social (and definitely modern, circumstantial) figurative meaning of supremacy.

Any tech geek, or history major for that matter, would just read it as "crushing domination", with a strong implication that others won't be able to catch up, ever. You know, like Kings and Empires impose their supremacy on other, lesser powers.

In that sense, the expression "quantum supremacy" (QS) is indeed ambiguous insofar as:

- it's a convention, a simple "we agree to say that QS means <mathematical definition>" (not a very clear one at that, the concept lacks boundaries).

- "supremacy" in QS refers to "quantum", i.e. it's the domination of Q computers versus classic computers; and absolutely not that of the company behind the tech —a recent history of the Fortune 500 makes it emphatically clear that there is no such thing as definitive supremacy for companies. In fact, a certain IBM... [snip]

Big blue is apparently vexed, here; but disingenuous linguistically? Not so much in my opinion. More like downplaying —that is their pettiness if you ask me: why not salute the engineering feat, and look at yourself asking "OK guys, how do we win the next step?" In other words, play it fair.

No, they definitely did. In the original blog:

> Professor Preskill summarized the two main objections to the term that have arisen from the community by explaining that the “word exacerbates the already overhyped reporting on the status of quantum technology” and that “through its association with white supremacy, evokes a repugnant political stance.”

> Both are sensible objections. And we would further add...


Oh, my. IBM is way lower than I thought. Thank you for setting my perception straight; I amended GP accordingly.

I would almost qualify that as intellectual self-subjection (and perhaps ethical, in the way of science). Aiming so low that it's trolling, for lack of a better word.

Wikipedia's "Quantum supremacy"[1] article:

> The term was originally popularized by John Preskill[2] but the concept of a quantum computational advantage, specifically for simulating quantum systems, dates back to Yuri Manin's (1980) and Richard Feynman's (1981) proposals of quantum computing.

[1]: https://en.wikipedia.org/wiki/Quantum_supremacy

[2]: https://arxiv.org/abs/1203.5813 (2012, in the abstract: “[...] we hope to hasten the day when well controlled quantum systems can perform tasks surpassing what can be done in the classical world. One way to achieve such "quantum supremacy" would be to run an algorithm on a quantum computer which solves a problem with a super-polynomial speedup relative to classical computers [...]”)

It's almost as if a word could have a different meaning depending on the other words surrounding it.

Or a more boring place as our language shrinks.

Where do you see anything like that in the article?

It's in their actual response: https://www.ibm.com/blogs/research/2019/10/on-quantum-suprem...

> It is well known in the quantum community that we at IBM are concerned of where the term “quantum supremacy” has gone. The origins of the term, including both a reasoned defense and a candid reflection on some of its controversial dimensions, were recently discussed by John Preskill in a thoughtful article in Quanta Magazine. Professor Preskill summarized the two main objections to the term that have arisen from the community by explaining that the “word exacerbates the already overhyped reporting on the status of quantum technology” and that “through its association with white supremacy, evokes a repugnant political stance.”

Kind of silly to say we can't use the word supreme or supremacy anymore, just because it's used for white supremacists.

So let's be angry at the recently emboldened white supremacist movement in the US, instead of people who are saying "yeh, kind of sucks that we have this concept that has a name clash with a racist belief that we thought had largely died decades ago" or getting mad about completely made-up ideas of not being able to say "black" or "white" (like a sibling comment did).

Ah. Thanks for the link.

Not as pathetic as "We at IBM are concerned of".

Well the word creeps me out and I wish they would choose another.

It isn’t even the right word to use since the intended meaning is a quantum computer that can do something that a classic computer can’t do at all, not just better at it.

Quantum Possible sounds better to me.

> a quantum computer that can do something that a classic computer can’t do at all, not just better at it.

When speed is an aspect of the requirement, then it's something the other computer can't do.

E.g. 'complete this calculation in n days on hardware x' or 'complete this calculation with time complexity t and memory complexity m'

"It isn’t even the right word to use since the intended meaning is a quantum computer that can do something that a classic computer can’t do at all, not just better at it."

No, that's exactly what it means in Google's research. It means it can do it faster than a classical computer, not that it's impossible for a classical computer to do.


First line- "Quantum supremacy is the potential ability of quantum computing devices to solve problems that classical computers practically cannot."

We can argue about what practically means but I read that as a very very long time. No one worries about someone breaking RSA-2048 with a classical computer because it is "practically" impossible.

>In putting together our video, we estimated the age of the Universe to be 13,751,783,021 years or a little over 13.75 billion years*, therefore if you tried to break a DigiCert 2048-bit SSL certificate using a standard modern desktop computer, and you started at the beginning of time, you would have expended 13 billion years of processing by the time you got back to today, and you would still have to repeat that entire process 468,481 times one after the other into our far far distant future before there was a good probability of breaking the certificate. In fact the Universe itself would grow dark before you even got close.
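For what it's worth, DigiCert's numbers multiply out as quoted; this just checks their arithmetic, not the underlying cost model:

```python
# Sanity-checking the arithmetic in the DigiCert quote (their figures).
AGE_OF_UNIVERSE_YEARS = 13_751_783_021  # "a little over 13.75 billion years"
REPEATS = 468_481                       # full-age-of-universe runs needed

total_years = AGE_OF_UNIVERSE_YEARS * REPEATS
print(f"{total_years:.2e} years of desktop computing")  # roughly 6.4e15 years
```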


This may not be the case with Quantum Possible computation.

If given enough time, classical computers can calculate anything that quantum computers can.

I know they don't understand λ_spm, I know their quantum orbit is totally fucked up. I know they don't understand the short magnetic field condition. I know their particle sizes are all off. (The reason you can't unify QM and GR, btw.) They are lacking the relativistic micro-curvature surrounding the nucleus. I know they don't understand superconductors correctly, and that's why the fractional quantum Hall effect is a mystery. Superposition does not mean what they think (over-interpretation).

Most of the ideas about quantum computers can never be realized. You can run certain optimizations when you can model some applications as energy states, but only for a very limited spectrum of applications.
