Quantum Supremacy Using a Programmable Superconducting Processor (googleblog.com)
701 points by ilamont 21 days ago | 179 comments



There are three threads. The others are:

IBM's critique https://news.ycombinator.com/item?id=21333105

Scott Aaronson's blog https://news.ycombinator.com/item?id=21335907



> This result is the first experimental challenge against the extended Church-Turing thesis, which states that classical computers can efficiently implement any “reasonable” model of computation. With the first quantum computation that cannot reasonably be emulated on a classical computer, we have opened up a new realm of computing to be explored.

Note that the "extended Church-Turing thesis"[1] says something very different from the actual "Church-Turing thesis" (and is also not due to either Church or Turing).

The original Church-Turing thesis is not challenged at all by these experiments. Indeed, it has been shown that quantum Turing machines are not computationally more powerful than deterministic Turing machines - as in: any program that can be computed by a quantum Turing machine can in principle also be computed by a deterministic Turing machine (it just might take a little/a lot longer).

The extended Church-Turing thesis is concerned with the performance of a computation and says something about the "efficiency" of the computation.

The original thesis is concerned with what's possible, while the extended thesis is concerned with what's feasible.

[1] https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis#V...


Interesting refutation by IBM here:

In the preprint, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity.

https://www.ibm.com/blogs/research/2019/10/on-quantum-suprem...


IMO the whole IBM blog post is interesting and worth reading; your quote doesn't capture all of it.

Basically, what IBM is claiming is that Google's circuit doesn't really do anything useful; it is just designed to be very hard to reproduce on traditional computing systems. And even in this idealized setting, Google's quantum computer doesn't transform an unsolvable problem into a solvable one.

According to IBM, "quantum supremacy" means accomplishing something useful to society that simply wouldn't be possible otherwise. Google's article doesn't show any sign of that, IBM claims.


"Quantum Supremacy" is a technical term, not a colloquial one. It refers to showing that there exists a problem that a real, physical quantum computer can solve quickly that a classical computer cannot.

Reading the IBM article, they are fully aware of what "quantum supremacy" means in a technical sense, and they are urging the media not to use that term, since it will be misunderstood by the public. Their claim that Google has failed to achieve supremacy rests solely on their claim that they can simulate the circuit far faster (and scale the simulation linearly) using better classical algorithms.

That's a strong claim, and I'm interested in seeing what Google responds with.

Disclosure: I work at Google, but hahaha, no, I'm not cool enough to work on this.


Physicist here. Note that they say it scales linearly in circuit depth (which is a trivial fact, and has always been true for classical simulations of quantum computers that are optimal in that regard -- in fact, that is the case even when doing it in the most naive way), not in the number of qubits, which is the quantum speedup referred to in "quantum supremacy".

Another thing: this is actually Martinis' decades-long work. Google started raining money down on his lab a couple of years back, helping with the classical aspects, design, etc., and the media loves reporting it as Google's quantum computer, but the actual quantum computer, the nitty-gritty physics, isn't Google's work. Martinis already had a working setup with ~10 qubits when Google started supporting him ~5 years ago.

This IBM "rebuttal" sounds a bit like cheating on multiple aspects, and the timing of the announcement is interesting. Note that they don't tell you how the memory requirements grow with the number of qubits either (which is exponential as well). I expect the response will be new toy computation proposals which will also be prohibitively expensive in classical memory (not just classical CPU with limited memory) in current supercomputers as well. If the experimentalists can roll out more qubits faster though (less likely), the "concern" will be addressed as well.


Thanks for the detailed response; I am not a physicist, so I didn’t catch the sleight of hand in the linear scaling claim. The timing of the “rebuttal” is almost certainly intentional, and possible because of the accidental pre-publication last month. I hope the rebuttal is indeed specious, because it’s an exciting advance; I’m sure time will tell.


It's pretty easy and requires no physics, actually.

Here's a simple, extended version.

A quantum gate is, mathematically speaking, a matrix. For a given physical system with a fixed number of qubits, obtaining that matrix on a classical computer takes (on average) a fixed amount of time, let's say T seconds. A quantum "circuit" is a sequence of quantum gates applied consecutively in time, and you simply multiply them all together to get their overall effect.

So if your circuit is made of 10 gates, the total CPU time is 10T, plus the time for 10-1 matrix multiplications. If it is 20 gates, then it is 20T plus time for 20-1 matrix multiplications.

Since multiplying two matrices of the same dimension also takes a fixed amount of time, on average, the simulation time grows linearly with circuit depth.

The quantum supremacy is related to how T grows as you increase the number of qubits, n (which is exponential, it's a 2^n by 2^n matrix).
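To make this concrete, here's a minimal (and deliberately naive) NumPy sketch of the kind of brute-force simulation described above. The random-unitary layers and function names are just illustrative, not what either team actually runs, and each layer is applied to the state vector instead of multiplying the full matrices together (equivalent, but cheaper):

    import numpy as np
    from scipy.stats import unitary_group  # Haar-random unitaries stand in for circuit layers

    def simulate(n_qubits, depth):
        """Naive Schrodinger-style simulation: apply `depth` random
        2^n x 2^n unitaries to the |00...0> state."""
        dim = 2 ** n_qubits                 # the state vector has 2^n complex amplitudes
        state = np.zeros(dim, dtype=complex)
        state[0] = 1.0                      # start in |00...0>
        for _ in range(depth):              # runtime is linear in depth...
            u = unitary_group.rvs(dim)      # ...but each layer is a 2^n x 2^n matrix,
            state = u @ state               # so time and memory blow up exponentially in n_qubits
        return np.abs(state) ** 2           # probability of measuring each bitstring

    probs = simulate(n_qubits=4, depth=10)

Doubling `depth` roughly doubles the runtime; adding one qubit doubles the dimension, i.e. the memory and per-gate work grow exponentially.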


No, if you read Google's paper, quantum supremacy is entirely about circuit depth scaling.


No idea where exactly you got that idea (feel free to quote any part of the paper), but no, it isn't.

Even the brute-force "simulation" of a quantum computer is just U_N · ... · U_2 · U_1, where the U_i are unitary matrices. The hard part is obtaining those unitaries (whose dimensions grow exponentially with the number of qubits). For a fixed number of qubits though, once you have N unitaries, you do N matrix multiplications. If you double N, it'll take twice as long on a classical computer and roughly twice as long on a quantum computer (different gates take different amounts of time to implement). But on an actual quantum computer, there are tricks you can do (if the Hamiltonian allows) which may let you get away with fewer unitaries.

Circuit depth is still important, though: 1) for modelling the noise in the device and extracting gate fidelities (that's basically how randomized benchmarking works; although they're doing something else here for their fidelity estimates, it is still a function of circuit depth), and 2) for doing anything meaningful when you're limited to a given set of basic building-block gates.


From the blog post (I haven't read the paper yet), that's also what I understand:

> we ran random hard circuits with 53 qubits and increasing depth, until reaching the point where classical simulation became infeasible

Or am I misunderstanding something? (ELI5 plz, I'm not well-versed in quantum computing).


They're using a "clever trick" to approximately evaluate the overall gate from this paper https://arxiv.org/abs/1807.10749 which is computationally cheaper than doing a "brute force" simulation (which scales linearly in the number of gates), but it quickly becomes worse as you increase the number of gates. That's basically what it says.

It looks like Martinis' group thought a "brute-force" simulation of 54 qubits was impossible, and that this approximate "clever trick" was the only way to go at this number of qubits, but IBM says that with some different tricks, 54 qubits is still doable (I'm just guessing what they were thinking, and this is the only plausible explanation I can think of).

Overall, a discussion which has nothing to do with quantum supremacy really...

Whether it is a factor of a million or a thousand though, the gap between a quantum computer and a classical computer will increase exponentially as the number of qubits is increased. This is a fact, assuming quantum mechanics is correct.

Actually, physicists have been trying to deal with this painful fact for quite a long time: it is also the reason why many-body physics is so hard computationally, and why we have spent almost a century developing approximate methods (density functional theory, quantum Monte Carlo, etc.) to calculate even the simplest idealized situations with hundreds or thousands of atoms. The whole idea of quantum computation is to turn this difficulty upside down and use it to our advantage.


> The gap between a quantum computer and a classical computer will increase exponentially as the number of qubits is increased. This is a fact, assuming quantum mechanics is correct.

I agree, but then there is no need to prove quantum supremacy after all. This entire business is about whether quantum mechanics is correct or not.


Quantum mechanics is about 100 years old, and no violation has ever been observed in the laboratory, in particle accelerators, or in outer space. Quantum theory is the most accurate theory we have ever had in history, tested to better than 1-in-a-billion precision. Even classical computers rely on it. Physicists don't have doubts about quantum theory; we know it is possible. The problem is an engineering problem of attaining precise control over quantum systems, which is a very, very hard engineering problem, but there is nothing in physics which says it can't be achieved.

There is still a need, but it is for an entirely different reason: not everyone (people with money and funding agencies, in particular) is a physicist.


General relativity is also 100 years old, no violations, etc. Still, the discovery of gravitational waves was very welcome, because tests of general relativity in the strong-field regime were not very good.

Quantum computing is analogous: tests of quantum mechanics in the "strong computational regime" are scant. You seem knowledgeable, but your comments on the current claim of quantum supremacy are akin to, say, replying to a disputed claim of the discovery of gravitational waves with "gravitational waves will be discovered, this is a fact assuming general relativity is correct, general relativity is 100 years old, no physicist doubts the theory", etc. All true, but rather pointless.


It's a very different thing. I'm not just talking about the age of the theory. I'm talking about how long and how many times it has been tested, to a level of precision at which no other theory has been tested and survived.

General relativity has never been tested to anywhere near that level of precision, or that many times. And in fact, we still have reasons to doubt general relativity, because there may or may not be deviations from it in observations of galaxies and the large-scale universe.

General relativity may be correct at that scale (with the ad-hoc addition of a cosmological constant), but to be consistent with those observations one has to posit the existence of black holes, dark energy and dark matter, things we have never directly observed and don't know for sure exist (although they are our best explanation at this moment).

We don't really understand how gravity behaves at very small scales, at extremely large scales, or in the presence of very high energy densities. One thing we know for sure is that general relativity is not the ultimate theory of gravity; it spectacularly fails at very small scales.

We would like to stress-test all aspects of general relativity to 1 in a billion precision as well, but we can't.

This is basically because gravity is very weak, and you can't design all sorts of controlled experiments to test it. The best you can do is make observations in the vicinity of already-existing massive things like the Earth, the Sun, or a black hole, which you have no control over. You can't make two black holes, pit them against each other, and see what happens in the lab. A situation very different from quantum theory.

Physicists did expect to observe gravitational waves, and it wasn't a shocker to anyone. The thing that makes it a very big deal for physicists is that we now have a whole new way of probing things that we couldn't probe before, in particular things which we don't understand yet, including the violations of general relativity which we do expect to see.

We don't expect to see deviations from quantum theory (unless you bring a black hole near your quantum computer).


The theory of epicycles very accurately explained observed phenomena, and though the conditions for science at that time were very different, its popularity and accuracy were very much comparable to those of quantum theory.

In the Hipparchian and Ptolemaic systems of astronomy, the epicycle (from Ancient Greek: ἐπίκυκλος, literally upon the circle, meaning circle moving on another circle[1]) was a geometric model used to explain the variations in speed and direction of the apparent motion of the Moon, Sun, and planets. In particular it explained the apparent retrograde motion of the five planets known at the time. Secondarily, it also explained changes in the apparent distances of the planets from the Earth.

(...)

Epicycles worked very well and were highly accurate, because, as Fourier analysis later showed, any smooth curve can be approximated to arbitrary accuracy with a sufficient number of epicycles. However, they fell out of favour with the discovery that planetary motions were largely elliptical from a heliocentric frame of reference, which led to the discovery that gravity obeying a simple inverse square law could better explain all planetary motions.

https://en.wikipedia.org/wiki/Deferent_and_epicycle

So the epicycles worked very well to explain and predict observations, but for reasons irrelevant to what really caused the motion of the planets. It's still possible that quantum mechanics will fall in the same way.

Note I don't have a horse in this race. I have no opinion on whether quantum mechanics is right or wrong.


>Quantum mechanics is about 100 years old, and no violation has ever been observed in the laboratory, in particle accelerators, or in outer space. Quantum theory is the most accurate theory we have ever had in history, tested to better than 1-in-a-billion precision.

Sorry, I think you're doing a sleight of hand here. Quantum computing relies not just on QM, it relies on the Copenhagen interpretation of it - superposition being a physical reality, not just a statistical description. That interpretation is tested by the Bell experiments, and granted, there have been a bunch of them which do look like they confirm Copenhagen.

>Even classical computers rely on it.

All that confirms QM, not the Copenhagen interpretation. W.r.t. Google's supremacy demonstration - it would work the same in a statistical aggregate interpretation too, thus not actually demonstrating anything about quantum computing.


No. An interpretation is just that. No matter what interpretation you use, the experimental measurement results are the same.

If they don't give the same results, it won't be interpretations: you'd have two competing theories and one of them will be wrong since it can be ruled out experimentally.

This is also why the majority of physicists don't care much about such philosophical aspects. You can argue that they should, and there are a few people working on the foundations of quantum mechanics, but most physicists (including me) see it as semantics and choose to spend their time on practical physics. At least that's what my field (condensed matter physics) is about, which also encompasses the realization of these quantum computers. You can't change the conductivity of a material, or the measured charge state of a transmon qubit, by using a different interpretation.


>No matter what interpretation you use, the experimental measurement results are the same

Sorry, no. Bell experiments do produce different predicted measurements for different interpretations, thus ruling some of them out. As it stands now, they seem to confirm Copenhagen for pretty much everyone.


Then I don't know which crackpot "interpretation" (that doesn't even agree with the experiments, unlike MWI etc) you are referring to, but you can rest assured that nothing in these experiments or in condensed matter physics in general depends on it.


http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.675...

https://www.informationphilosopher.com/solutions/scientists/...

"Ensemble interpretations of quantum theory contend that the wave function describes an ensemble of identically prepared systems. They are thus in contrast to “orthodox” or “Copenhagen” interpretations, in which the wave function provides as complete a description as is possible of an individual system."

Bell experiments seem, to almost everyone, to rule it out, while I myself, unfortunately (as I really want to wholeheartedly jump on the magical bandwagon of superposition and quantum computing), see gigantic holes in those experiments which let in all those other, mutually compatible interpretations - local realism, pilot wave and ensemble - and actually pretty much rule Copenhagen out.


I don't think most critics are doubting quantum mechanics. The question is whether quantum computers can reasonably (as an engineering challenge, as a factor of cost, etc) be scaled and can be adapted to take on important real-world problems better than classical computer systems in practice.

We are getting more and more proof points and eliminating a lot of the doubt but this has not been shown yet.


Actually, we had people who claimed for about four decades that it is fundamentally impossible to have quantum speed up, basically equating it to a perpetual motion machine. We still have such famous people around (who aren't physicists, of course), now a loud minority, and "quantum supremacy" was coined because of/for them.

What you're describing is mainly the new generation of people who grew up hearing about quantum computers on the news about experimental realization of small-scale (a few qubits) quantum computers.


Eh, I haven't really heard them. I'm 40 and have followed this from near the beginning (starting with reading Science News in the late 80's-- even then the criticism was pretty muted).

I am not positive we are going to get quantum computers with error correction on boolean qubits that can do all the meaningful tasks we hope quantum computers can do. I think it's more likely than not, but it is not close to happening and may never happen. I am not even 100% certain (but it is very very likely) that it is physically realizable.

In my view, this current milestone is kind of contrived.

And even if we do, it's not clear what subset of tasks currently performed on classical computers will be superseded by quantum computation. That's perhaps one of the biggest problems: normal computing has had a whole lot of use cases to pay the research and capital costs.


Well, I'm envious that you didn't have to deal with those people.

I am involved on the theory side of the implementation of different kinds of solid state qubits, so you may say I'm biased, but the question really isn't whether we will ever get it or not, the question is when. We already have had exponential growth in single qubit coherence times in the past decade, we have very good entangling gates, and there isn't any fundamental reason why the number of qubits can't be increased. It's not like there is an invisible great barrier ahead of us, and nothing in the physics of these devices say we can't.

By the way, they aren't using quantum error correction methods right now, basically because it's not worth it: you need a lot of physical qubits to encode a high-quality logical qubit.


Right-- so if we need 10-1000 qubits per high-quality error corrected boolean-ish qubit and 20,000 error-corrected qubits to really tackle interesting real world problems, we need 3-5 orders of magnitude improvement.

The fact that we have exponential improvements to parts of the process is nice. But exponential improvement doesn't continue forever. Extrapolating from the current status 5 orders of magnitude out seems reckless.


I don't know what exactly your belief is rooted in (clearly not physics), but of course you're free to put your money wherever you want; just be aware that:

- Qubits are not boolean. Classical error correction does not work in quantum computers. You need quantum error correction.

- The exponential improvements are in single qubit coherence times, and a stagnation in those improvements doesn't prevent an increase in the number of qubits. (it'd be nice to have, though, because then eventually we wouldn't even need error correction)

- No, it doesn't take 1000 or even 100 physical qubits to have a high-fidelity qubit (it can actually be 1:1 with dynamical error correction) or 20000 error-corrected qubits to tackle real-world problems (Shor's algorithm isn't the only interesting thing out there; simulating some quantum systems requires far fewer qubits and is probably more interesting for people in the natural sciences)

- Even if we were 3-5 orders of magnitude away in # of qubits, it's still a matter of time, and there is still nothing fundamental in physics or materials science that prevents having millions of physical qubits.

- Now I feel like I'm dealing with one of those non-physicist people again who claim quantum computers are a pipe dream, only this time, instead of a faulty physical argument, there is no real technical basis at all, so this is where I'd like to stop. Enjoy the rest of your day.


Exact quote from the paper: "algorithm becomes exponentially more computationally expensive with increasing circuit depth". See also figure 4b, where circuit depth scaling is graphed.


That sentence actually reads "the Schrödinger–Feynman algorithm becomes exponentially more computationally expensive with increasing circuit depth", which is true (because the paths in a discretized path integral grow combinatorially), but you don't have to resort to path integrals to approximate the unitary in a "quick" and dirty way that clearly doesn't scale well -- in fact, if you avoid such "clever" tricks (which are only beneficial in some limited regime) and do it the naive way, it scales linearly. It's not the only game you can play on a classical computer, as IBM points out (though the upfront cost is much higher).

Figure 4b is about error estimation. They use XEB (cross-entropy benchmarking), which is exponentially faster than, say, doing full quantum process tomography, which is also true. That's the whole reasoning behind XEB: it gives far less information about the error channels, but you still get a fair estimate of the overall fidelity.
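(For the curious, the linear XEB estimate itself is nearly a one-liner. A rough sketch with made-up variable names, assuming you already have the ideal bitstring probabilities from a classical simulation:)

    import numpy as np

    def linear_xeb_fidelity(ideal_probs, sampled_bitstrings, n_qubits):
        """Linear cross-entropy benchmarking:
        F = 2^n * <P_ideal(measured bitstring)> - 1.
        Roughly 1 for a perfect device, roughly 0 for uniformly random noise."""
        dim = 2 ** n_qubits
        p = np.array([ideal_probs[b] for b in sampled_bitstrings])
        return dim * p.mean() - 1.0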

None of these have anything to do with the complexity of the actual computation done on the quantum computer though.


Indeed these don't have anything to do with quantum computers, but it does have something to do with quantum supremacy, because quantum supremacy is a claim about both quantum computers and classical computers.

Google chose an algorithm exponential in circuit depth as the best classical algorithm in order to establish quantum supremacy. IBM demonstrated (as you agree) it is in fact not the best classical algorithm. IBM is entirely correct to point this out.


Again, no.

It is a trivial fact that on a classical computer you can, in principle, simulate a quantum computer in time that grows linearly with circuit depth (as in the "naive" way I mentioned above). No one working on quantum algorithms ever claimed the speedup was about circuit depth, and claiming so is just a silly mistake.

Preskill coined the word "quantum supremacy", not Martinis. Even if someone from Martinis' lab misspoke, you can't pin it on Preskill.

Take Shor's algorithm, which has been the poster boy of quantum computing for decades. It gives exponential speed up in the number of qubits, not circuit depth.

See other examples here: https://quantumalgorithmzoo.org/ The complexity is in the number of qubits; "circuit depth" is a non-concept in quantum algorithms.

No one cares about the circuit depth in quantum algorithms, because in principle you can always reduce the circuit depth to 1 on a quantum computer: quantum computers aren't necessarily made of basic logic gates; you can implement any unitary by, for example, smoothly pulsing the fields in the correct way in a single go.

"Depth" is a concept which usually comes into play when you try to measure the quality of the implementation of some given gate U. You repeat U many many times say M, and see how some measured quantity decays as a function of M, which gives a measure of the fidelity. But this has nothing to do with the quantum algorithm's complexity, it's just for "benchmarking" a physical implementation of the gate.


Thank you. I am not sure why people are talking about 2.5 days, since the key claim is "linearly scaling". If simulation is linearly scaling in this regime, the entire proof is indeed questionable.


IBM's response seems strong regarding "linear scaling", but could they give a complexity-theoretic argument rather than an empirical one?


I think IBM didn't elaborate because it is kind of obvious if you are into this, and if you are not, a linearly scaling graph from 10 to 30 is more than good enough. But since we have a niche audience here, let's elaborate.

Quantum supremacy is an O(n) claim. Then what is n? The answer is that there are two parameters, not one. In the Google paper, n is the number of qubits and m is the circuit depth. n is well known to be difficult to increase. m is not easy either, because if your qubits aren't stable enough, you can't run a deep circuit. What Google did is run n=53 and m=20.

Then why do you need n=53 and m=20? After all, you could see whether it's exponential by, say, trying n and m from 10 to 15; it doesn't need to take days and years. The answer is that there is a time-space tradeoff available: if your n is constant, an exp(n)-space (but still constant space, since n is constant), poly(m)-time algorithm is available, and if your m is constant, an exp(m)-space (but still constant space, since m is constant), poly(n)-time algorithm is available. So if you want to show exponential speedup, you need to exclude these algorithms by increasing n or m enough that exp(n) or exp(m) space is not realizable. Google chose n=53 such that exp(n) is not realizable, and ran the scaling experiment on m.

This is what they mean when they say in the paper "Up to 43 qubits, we use a Schrödinger algorithm, which simulates the evolution of the full quantum state... Above this size, there is not enough random access memory (RAM) to store the quantum state". Now what IBM is saying is becoming clear. There is not enough RAM, but there is enough disk, so exp(n) where n=53 is realizable, and the simulation runs linear in m. It's not a new algorithm; it's exactly the same algorithm Google ran up to n=43. So 43 qubits clearly can't demonstrate quantum supremacy. For the same reason, 53 qubits can't either.
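To put rough numbers on the exp(n) space, a back-of-the-envelope sketch (the bytes-per-amplitude figure is an assumption; IBM's quoted 64 PiB for 53 qubits corresponds to 8 bytes per amplitude):

    def statevector_bytes(n_qubits, bytes_per_amplitude=16):
        # A full Schrodinger-style simulation stores 2^n complex amplitudes.
        return (2 ** n_qubits) * bytes_per_amplitude

    for n in (43, 53, 54):
        print(f"{n} qubits: {statevector_bytes(n) / 2**50:.2f} PiB")
    # 43 qubits: 0.12 PiB  -> still fits in the RAM of a big cluster
    # 53 qubits: 128 PiB at 16 bytes/amplitude (64 PiB at 8), i.e. disk, not RAM
    # 54 qubits: double that again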


Thanks for this clear explanation of the issue! This looks to me like a convincing rebuttal of the specific claim Google made about classical runtime at this particular size, but does it rule out Google claiming supremacy by using the problem constrained such that n == m? Or would Google's device have exponential runtime growth in that variant?


Thanks for the breakdown! So in other words, we're still not all that far from the point where (realistic amounts of) disk storage can't contain the state and the scaling reverts to how they assert it.


Indeed. It's just that 2^53 bytes (about 10 petabytes) of storage is in fact available, so you need a few more qubits to exclude that.


It's not quite that efficient... IBM's arxiv paper says "64 PiB of disk space are required for 53-qubit circuits, and 128 PiB for 54-qubit circuits".

Summit has 250 PiB of total storage currently, so it would seem that a 55-qubit simulation is almost within reach by IBM's method as well.


Thanks a lot, that was insightful!


But from the Google blog it sounds like the “problem” they chose boils down to “emulating a quantum computer”. Which doesn’t sound like it’s exactly in the spirit of the test, since the quantum computer can obviously emulate itself—and with zero wasted operations!


It's exactly in the spirit of the test; the missing element is that the circuit they're emulating can scale in qubits. So the question is, if you add one qubit, what happens? Can classical computers keep up? The tests were run on a range of qubit counts, with 50+ being the largest, which is where the 10,000-year claim comes from. Anything beyond that just isn't feasible to compute on classical computers, because they don't scale like that.

So why is this in the spirit of the test? Because this means that there are some problems that can only be efficiently solved by quantum computers. So this establishes "supremacy" in the sense that while a quantum computer can efficiently solve any classical computing problem, a classical computer cannot efficiently solve every quantum computing problem.

The distance between having this proof-of-concept and having meaningful speedups on real problems that cannot be matched by classical computers is very large; all this tells you is that it's possible, and looking more might not be a waste.


> Disclosure: I work at Google, but hahaha, no, I'm not cool enough to work on this.

I often wonder why people feel compelled to write this. There's nothing in your profile or in your post history to prove unequivocally that you do or don't work anywhere in particular, so why even mention it?

Edit: responders are citing ethics and company policy, but does anyone really think vague hand-wavey speculation on a public message board is relevant to anything?

Sure, if you work in the ads group and you posted "wow, our division is in trouble, competitor Z is really killing it, and our quarterly earnings are going to miss big time!", yeah, that matters. But GP? Really?


First, it’s a professional ethics requirement. If there is the possibility of a conflict of interest in our statement, we need to disclose it to avoid misleading anyone.

In addition, there are plenty of ways to figure out where an HN member is employed besides looking at their post history. It’s not necessarily found in publicly available data, but it can be found.

Also, you have to account for future acts. Disclosing early can help insure against accusations about misconduct if you were to out yourself — or someone were to out you — later.


Company policy. This prevents headlines like "Google had employees secretly astroturf to give traction to their quantum supremacy claims" if someone decides to truly dig. I theoretically could pretend I don't work here, but I find life is easier if I just follow reasonable rules.

(I did forget to mention I didn't speak for Google, but it should be pretty obvious from context)


To disclose possible bias, in this case maybe parent feels like their opinion might be biased toward Google's claim.


I agree. Possibly to just give a bit more weight to what they say. So far, I've mostly seen this with Google employees here. Can't remember seeing anyone else mentioning this except when promoting a product.

1. Regarding policy, I think a simple disclosure like "this is my personal opinion, not that of my employer" would be good enough. 2. Indicating possible bias: nope, one should always understand that any opinion is biased for many different reasons. So, unless this is a rigorous analysis of something, it is quite uncalled for.


> According to IBM, "quantum supremacy" means accomplishing something useful to society that simply wouldn't be possible otherwise. Google's article doesn't show any sign of that, IBM claims.

I thought that the whole point was to show that what you'd done definitely wasn't just building a complex classical processor. The usefulness of the calculation is beside the point.


Both of you are slightly off.

Quantum supremacy is more about finding a problem where quantum computers are clearly superior to non-quantum computers.


It's not about doing useful work; it's about finding an actual workload that cannot be solved faster classically.


Will just point out it's 25 years since Deep Blue and 10 since Watson/Jeopardy. And boy, were a lot of claims made back then. Good ol' IBM has yet to deliver anything.


Regardless of what they say, the quantum computer solved it in 200 seconds, and IBM states that their supercomputer solves it in 2.5 days (Google says 10,000 years). Even if we take IBM's claim at face value, 200 seconds vs 2.5 DAYS is still a lot of improvement, semantics aside. Not to mention that quantum computers have only just started.


No, 2.5 days is irrelevant. This is about proving exponential speedup, but IBM is claiming linear scaling, so it is a serious challenge.


They say linear in time for increasing circuit depth, not increasing qubits. They also don't address the classical algorithm being exponential in memory as well.


What if it is linear but the slope is so steep that you require all the sand in the world to achieve parity with a classical computer? Does it still count?


Of course. Quantum supremacy is a claim about computational complexity. 200 seconds vs 2.5 days is a ~1000x slowdown. In practical terms, x^10 scaling can be even worse than a constant 1000x factor, but if a quantum computer could be simulated in x^10 time, that would definitely disprove quantum supremacy.


I think in that case you would probably be able to formulate the problem in a different way to show that the steep slope arises from an exponential (or worse) process, and that the problem isn't truly linear.


Maybe, but 1) that's hard to prove, and 2) it's hard to achieve unless you're looking at some sub-problem where the quantum and classical algorithms actually do scale differently.


One way of viewing Google's achievement is as a result that cannot be explained without invoking parallel universes. That's pretty amazing.

I posted an article explaining more: https://news.ycombinator.com/item?id=21337739


All of that is true, but according to IBM's own definition of "quantum advantage" I think Google at least achieved that.

All in all, it seems more like a battle of semantics, with the two companies talking past each other. It's kind of like how everyone seems to interpret Moore's Law to mean "any progress in computational performance whatsoever" these days, as opposed to the original definition of "transistors doubling in the same space every 18-24 months", and then uses that to "prove" that "Moore's Law is still alive" (it's not, and hasn't been for a long time).

I think in the end what's important here is that it was proven that a quantum computer can do "something" (yes, even anything at all is a big milestone) significantly faster than a classical system.

Because this is what matters in the end: that quantum computers can do some tasks significantly faster than classical systems. They don't have to be limited to tasks that are "impossible" for classical systems in order to be useful. It's also why we use GPUs, FPGAs, and ASICs for some tasks, instead of just using CPUs for everything.

I do agree that it was in Google's interest not to work so hard on optimizing the classical system for the simulation, to make its quantum computer look that much more impressive. But if the quantum computer is still faster, then it doesn't really matter.


No, if your standard is "faster", even D-Wave achieved that. Quantum supremacy is about exponential speedup, and in light of IBM's claim, Google's claim of quantum supremacy is very much in doubt.


I haven't processed the blog yet, but considering IBM's resources, wouldn't it be simpler to refute it by actually doing it in 2.5 days?

Or is there some catch that undermines that headline claim?


That's 2.5 days on the 13 MW Summit supercomputer, for a single run. Presumably you would need multiple runs to get good statistics.


No, the classical simulation should give you the whole wavefunction, from which you can get the full probability distribution. One run should be enough.


Does it scale linearly? Is it 5 days on a supercomputer half as powerful? They just need to show that it can be done in any reasonable finite amount of time, not necessarily 2.5 days.


The main bottleneck is memory/storage capacity, not speed. Beyond that (if you have the requisite storage capacity and bandwidth), yes: if you have half the FLOPS and double the time, you get the same result.


Is this the case? I thought that parallel speedup is not linear, so it might actually not be 2x slower but maybe 1.5x or something, depending on IPC overhead and all that.


I think the quote from IBM means it'd take 2.5 days to run. That doesn't include actually implementing it.


That might be a pretty expensive run on a supercomputer


The margin between 200 seconds and 2.5 days still exists


As far as I understand, the 2.5 days classical simulation gives you the total wavefunction, from which you can read the probability distribution. You only need to run it once. It's not clear to me whether the 200 seconds quantum computation is for getting (or rather sampling) the probability distribution, or for just one measurement. My point is, the classical simulation actually gives you much more data than the quantum computation -- you get the actual full probability distribution, not just an approximated sample of it (guess that's why IBM claims "higher fidelity").


I thought quantum supremacy was about having a quantum computer capable of doing more operations per second than any binary computer can do

Do you think it wouldn't be possible to simply upgrade the supercomputer to shave some time off those 2.5 days?

To me it seems clear that everyone gullible already bought into the Cloud, ML/AI, CI/CD buzzwords and now they need a new buzzword to market.


Quantum supremacy is about scaling (big-O notation) - taking a classically O(2^N) problem and making it polynomial. The empirical experiments are just to show that the scaling works, and the uncertainty comes from the fact that we still can't fit very large N problems into current quantum computers. Effectively, they don't have enough RAM to even load the problem (i.e. enough available quantum states).


That's not quantum supremacy. It's about doing something on a quantum computer that would be infeasible on a standard computer. This is because in some scenarios a single calculation by a quantum computer could be equivalent to a vast number (many trillions) of calculations made by a standard computer.


> more operations per second than any binary computer can do

> doing something on a quantum computer that would be infeasible on a standard computer

Sorry, but I fail to see how these two are different.


They use different "algorithms", so solving a problem can be done in very different ways for a quantum and a classical computer. Comparing the number of operations per second does not make sense because the advantage of quantum algorithms lies in the fact, that they can achieve the same results with (exponentially) fewer operations.


We can keep moving the goalposts and I can rephrase my claim to:

> it's a state when a quantum computer can achieve results that would require more operations on a binary computer than a binary computer has proven to be able to do

but the point will still stand.

We can't say we achieved quantum supremacy for this one thing because binary computers still have supremacy over everything else.

I guess we can agree here that quantum supremacy was definitely not achieved since we are not clear on the definition of said quantum supremacy.


No, quantum supremacy is about quantum computers being better at one thing. Then there will be a reason to use them. It's like making a screwdriver when you already have a hammer. Sometimes a hammer and nails are better and sometimes a screwdriver and screws are better. It's about picking the right tool for the job.


It isn't about them being better in the sense of faster than a classical computer, but rather about scaling. Quantum computing is attractive because we expect some operations to scale much better with problem size on a quantum computer than on a classical one, e.g. if for a task the quantum algorithm is O(n) while the classical version is O(n!).


I think it's about running algorithms with big-O complexities lower than can be run on a classical computer, not actually doing more ops per second.


I think that’s where the

> and with far greater fidelity

comes into play.


A physical system will always be the fastest implementation of a computer simulating its own physics.

Some overhead is expected when using different hardware to simulate it.

What matters is asymptotic computational and memory complexity.


Yes but nobody cares.

The QC isn't cheap or available enough to be relevant if you can compute it reasonably well in a few days.


I think IBM is full of shit and badly needs this PR boost that is about to be snatched away by Google.


I don't think I need to invoke superposition to conceive of them both being full of shit at the same time.


do you have any substance to back this claim up


There's also a separate thread discussing IBM's post: https://news.ycombinator.com/item?id=21333105.


There seems to be a lot of confusion here about the complexity, for example calling the simulation here "in the linear regime" due to IBM's result. This is inaccurate.

A source of this confusion is that we need to discuss space and time complexity simultaneously. In the algorithms for quantum simulation (and many other algorithms), there is a trade off between space complexity and time complexity. ELI5: You don't have to store intermediate results if you can recompute them when you need them, but you may end up recomputing them a huge number of times.

For the quantum circuit, the standard method of computation gives memory complexity exponential in the number of qubits (store 2^N amplitudes for an N-qubit wavefunction) and time complexity D·2^N, i.e. linear in circuit depth and exponential in the number of qubits. For example, under the IBM calculation, 53 qubits at depth 30 use 64 PiB of storage and a few days of calculation time, while 54 qubits use 128 PiB and a week of calculation time. Adding a qubit doubles the storage requirements AND the time requirements.

Under Google's estimate of the run time, they were assuming a space-time tradeoff. There is a continuous range of space-time tradeoffs - MAX MEMORY, as IBM does, MAX RECOMPUTATION (store almost no intermediate results, just add each contribution to the final answer and recompute everything), and a range of in-between strategies. While I don't know the precise complexities, the time-heavy strategies will have time complexity exponential in both N and D (roughly 2^(N·D)) and constant space complexity. That's why Google's estimate of the time complexity is so drastically different from IBM's.
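To caricature the two extremes with a toy cost model (the exponents here are illustrative placeholders only, not the real algorithms' costs):

    def schrodinger_cost(n_qubits, depth):
        # "Max memory": store the whole 2^n state vector, touch it once per layer.
        memory = 2 ** n_qubits
        time = depth * 2 ** n_qubits        # linear in depth, exponential in qubits
        return memory, time

    def feynman_cost(n_qubits, depth):
        # "Max recomputation" (Feynman-path style): store almost nothing,
        # but sum over an exponentially growing number of paths.
        memory = n_qubits * depth           # bookkeeping only
        time = 2 ** (n_qubits * depth)      # exponential in both N and D
        return memory, time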

Side note: IBM also uses a non-standard evaluation order for the quantum circuit, which exploits a trade-off between depth and number of qubits. In the regime of a large number of qubits but relatively small depth, you can again classically simulate using an algorithm that scales as N·2^D rather than D·2^N, using a method that turns the quantum circuit on its side and contracts the tensors that way. In the regime of N comparable to D, the optimal tensor contraction corresponds to neither running the circuit the usual way nor sideways, but something in between. None of these tricks fundamentally changes the ultimate exponential scaling, however.

As an extra step, you could also run compression techniques on the tensors (e.g. SVD, throwing away small singular values) to turn the space-time tradeoff into a three-way space-time-accuracy tradeoff. You wouldn't expect too much gain from compression, and your accuracy would quickly go to zero if you tried to do more and more qubits or deeper circuits with constant space and time requirements. However, the _real_ quantum computer (as Google has it now, with no error correction) also has accuracy that goes to zero with larger depth and number of qubits. Thus, one can imagine that the next steps in this battle are as follows: if we say that Google's computer has not at this moment beaten classical computers with 128 PiB of memory to work with, then Google will respond with a bigger and more accurate machine that will again claim to beat all classical computers. Then IBM will add in compression for the accuracy tradeoff and perhaps will again beat the quantum machine.

So this back and forth can continue for a while - but the classical computers are ultimately fighting a losing battle, and the quantum machine will triumph in the end, as exponentials are a bitch.


I don't understand nearly enough of this stuff, but this section sounds... weird:

Each run of a random quantum circuit on a quantum computer produces a bitstring, for example 0000101. Owing to quantum interference, some bitstrings are much more likely to occur than others when we repeat the experiment many times. However, finding the most likely bitstrings for a random quantum circuit on a classical computer becomes exponentially more difficult as the number of qubits (width) and number of gate cycles (depth) grow.

It sounds like they're saying "simulating this physical process accurately is extremely computationally expensive, but just running the physical process in the real world is MUCH faster". But that's true of basically any physical process!

Simulating turning on a single lamp in a room and figuring out how much light falls on what walls is an enormously computationally expensive process, which can take immense amount of time to figure out (depending on your desired level of accuracy). Movie studios, CG artists and game companies build entire render farms to solve this problem. But the physical process itself (just turning on the light) happens instantly. Just replace "turning on the light" with "running a quantum circuit".


The lamp is not a programmable light-processor that can perform arbitrary computations using light, but Sycamore is (though it's very small).

The trouble is we don't know many (any?) interesting algorithms to run on such processors. We can factor numbers using Shor's algorithm, but the numbers we can factor with 54 qubits are so small that our best non-quantum algorithms on our best non-quantum computers far outperform them. No supremacy is demonstrated. It's only when one takes problems that are ideally suited to run on this processor that it outperforms classical computers.

Even with the deck stacked this way, it's taken until 2019 for it to be done for the first time! It's a very, very early milestone.


That does make sense, but given that the task is not solving anything "outside" the physical process itself, it still feels unfulfilling. Using my (maybe wrongheaded) analogy: it feels like "we built a lamp to simulate what happens when we turn on a lamp".

Quantum "supremacy" only feels like a real thing when a quantum computer can solve a problem that isn't directly tied to the simulation of itself. That's my feeling, anyway.

BTW: this is not to disparage the work itself. This seems like great research, and great progress in this field. Good on Google and the researchers!


I agree that the term "quantum supremacy" is overly grandiose, but it was clearly defined before Google published this work.

I just want to emphasise that there were people who believed that even this small step was impossible. Prominent quantum-computing skeptic Gil Kalai believes that scalable quantum computing is impossible in principle, and believes that this paper didn't achieve what it claims, but he says that his view would be refuted if they did! (This is in contrast to IBM, who isn't criticizing the quantum side of the experiment, but the comparison to classical computing; and to the Internet commentariat who are saying that even if they did everything they claim, it's not interesting).


> That does make sense, but given that the task is not solving anything "outside" the physical process itself, it still feels unfulfilling. Using my (maybe wrongheaded) analogy: it feels like "we built a lamp to simulate what happens when we turn on a lamp".

While that's true, there are a great many physics researchers who are extremely interested in quantum effects. A computer that quickly simulates quantum circuitry, if only for study / practice, is still extremely useful.

In effect: a lamp isn't very important because most engineers and researchers already know how a lamp works. But quantum circuits are themselves an interesting, barely-studied field, with few experts truly understanding them yet.


Exactly. If we could design some cool lens-refraction based punch card, such that when you shone your desk lamp through it, it solved some very hard problem, then that would also be an incredibly cool solution.


Funnily enough, something similar has been done with neural networks "physical-ized" in special glass panes.

https://www.osapublishing.org/prj/abstract.cfm?uri=prj-7-8-8...


This is the most amazing thing I’ve seen all day!

Has something like this been actually demonstrated beyond the theoretical possibility?


Can Sycamore compute an arbitrary computation? That's the first I've heard this claim. They say it's "programmable" but I read that as some kind of weak configurability rather than "this is a quantum computer capable of computing any (54 qubit) quantum algorithm."


I don't know. I simply made the opposite assumption from you.

EDIT: it looks like it only has a single, weird type of gate. But I guess the connections between gates are configurable? Still different from a lamp!


AFAIK, the main difference between measuring (running) their device and measuring, say, the electron density of a molecule is supposed to be that their chip is easily programmable (in the computer science sense).


Pretty much. Which is why it is beyond silly we aren't using DNA more. It's a ternary-encoded program. We have trillions of species. FFS, let's train a generative model on the distribution of DNAs among all possible life. Then grow it.

GANs for generating random organisms anyone? Human 2.0 can probably come about after some of the feature vectors are figured out via PCA on the activations. Just boost feature vector for intelligence.

Anyways, yeah! Real matter is already programmable. As programmers it's easy to forget that physics says this whole thing is all a monism of information and computation. Code and data. Yin and yang. Even the prime numbers come about as data because of self-modifying code: P(0)=2, but all future primes are defined by not being divisible by P(x<n) - think of a snake eating its own tail and yet growing while doing so.

In fact, the most beautiful part about the prime numbers is that they are defined sequentially yet alternative constructions exist which are `atomic`. It is odd that something defined by such a sequential process would form a beautiful structure that can be reduced to a direct construction. Relatedly, we see this all the time with recursive functions being turned into procedural ones or functional-style data pipeline transformations.

The ordering is actually irrelevant to the chaos that the primes display between multiplication and addition. That is, if you defined the first prime as 3 you would yield an entirely different set of primes. For example, 4 would be prime. Beurling figured this out and a lot more through what he called Generalized Primes [1].

[1] https://bookstore.ams.org/surv-213


Last line of the paper is destined to resonate into history: "We’re only one creative algorithm away from valuable near-term applications"

Reminds me of the last line from Watson and Crick's 1953 structure-of-DNA paper: "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material" ;)

I bring up genomics specifically because the main profit driver appears to be biotech. Predicting near-instantaneously whether regulatory proteins bind to a target genome site could revolutionize drug discovery.

And while full "programmatic" control of transcription factors may be dozens of years away, in the near term annealing-based ML can speed up analysis of genome-wide datasets:

Unconventional machine learning of genome-wide human cancer data

https://arxiv.org/abs/1909.06206


Relevant blog post about the leaked preview by Scott Aaronson: https://www.scottaaronson.com/blog/?p=4317


Wayback Machine link, as the site seems to be down: https://web.archive.org/web/20191015112241/https://www.scott...


Do I understand correctly that the experiment itself has to do with simulating quantum circuits using a classical computer, and showing that a real quantum computer can produce the result much faster than a classical computer simulating a quantum one?

It’s hard to parse what the actual thing being computed is.


That is my interpretation but I need a more detailed paper because everything is quite vague. Based on what I read, the claim isn't that great (yet). Everyone is gunning to be the historical first here.

"Hey, we have a new fluid flow simulation processor that reaches CFD supremacy over classical computers... we pour water across the surface of things and the results are indistinguishable to reality compared to the days, weeks, months it would take to simulate and still wouldn't reach the same resolution of accuracy on a classical computer."


It's not quite as bad as an analog water simulator made of water. They have built a programmable machine that can run any quantum problem it can load into its qubits. The problem is, it has so few qubits that it can only load tiny problems. The claim is that it can still beat classical computers on some subset of these tiny problems.


Is it correct that the claim "that it can still beat classical computers on some subset of these tiny problems" is based on classical computers simulating quantum circuitry to compute those problems?

I’m wondering if there is some other way the problem could be computed without having to simulate qubits.


A lot of work has gone into looking for classical algorithms for these sampling problems since the mid-2000s, and we have found some speedups, but everything still suggests that they are truly exponential on classical hardware. The classical algorithms for these problems don't involve modeling qubits directly, but they do involve sampling exponentially many states and averaging over the results, which is a lot like modeling qubits.


It sounds like they're computing the most probable eigenstate of the system. Doing so on a classical computer requires doing a bunch of inefficient N-body quantum mechanics with randomized interaction terms. Doing so on a quantum computer is basically just making a real-live quantum measurement.

By taking an ensemble of real-live measurements, you can infer what the most probable eigenstate is, which is the "output" of the program.


I'm confused as well. This is what their youtube video [1] claims, but to me it seems obvious that a computer simulating another computer would be slower.

[1] https://www.youtube.com/watch?v=-ZNEzzDcllU


For classical computers, one computer simulating another is only polynomially slower. The claim here is that any classical computer will be exponentially slower at simulating this problem. The experiment is just to show that the scaling is really polynomial vs exponential, and the uncertainty comes from the fact that we still can't fit very large N problems into current quantum computers. For small N, the two curves could look similar.


I think the goal here is to show that the quantum computer is not just a complicated classical computer. To do this, you design a problem that should be "easy" on your quantum computer but exceptionally hard (practically impossible) on a classical computer.

Compare this to, for example, D-Wave. As far as I understand their quantum computer they could show it was faster but it was "X times faster" rather than scaling in a completely different way.


That's what I can't quite get my head around. A computation is a physical process that produces a desired output. Any physical system can evolve very quickly to a state that is incredibly difficult to compute- the problem is to harness that system to make it produce something useful.


Cirq has most of the experiments that were used to demonstrate quantum supremacy[0]. Here is an example of cross-entropy benchmarking, the technique used to benchmark this processor: https://github.com/quantumlib/Cirq/blob/master/examples/cros...

[0]: https://github.com/quantumlib/Cirq/tree/master/cirq/experime...
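If anyone wants to poke at this themselves, "run a circuit, sample bitstrings" in Cirq looks roughly like the following. This is a toy 2-qubit Bell-state example, not the supremacy circuit; the API names are taken from Cirq's public interface, so treat it as a sketch:

    import cirq

    qubits = cirq.LineQubit.range(2)
    circuit = cirq.Circuit(
        cirq.H(qubits[0]),
        cirq.CNOT(qubits[0], qubits[1]),
        cirq.measure(*qubits, key='m'),
    )

    # Sample 1000 bitstrings from a noiseless simulator.
    result = cirq.Simulator().run(circuit, repetitions=1000)
    print(result.histogram(key='m'))  # roughly 50/50 between 00 (0) and 11 (3)

The supremacy experiment does the same kind of sampling, but from 53-qubit random circuits, with the sampled bitstrings fed into the cross-entropy benchmark from the linked example.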


Associated video: https://www.youtube.com/watch?v=-ZNEzzDcllU [4:42 mins]



I just posted this on reddit, but I'm curious about Hacker News' take on it, so apologies for the cross-posting:

Can I ask an honest question? Shouldn't people be scared shitless of quantum computing?

If a single organization develops the ability to factor large numbers at unheard-of speeds, doesn't this put in jeopardy the mathematical backbone of all encryption, and therefore don't they pose a threat to all financial institutions? Don't they then have the power to hold the world at ransom?

I don't mean this as a conspiracy theory -- I don't think an actual "event" is even necessary to pose such a threat. If it's proven that theoretically sound classical encryption can be theoretically cracked in record time by quantum computers, I am afraid of what will happen to the world financial markets due to panic. Nevermind the first time it is actually used to steal billions of dollars, or some other large-scale event.

I'm no expert on quantum computing, so please enlighten me if Shor's algorithm does not in fact pose a threat to privacy or to the financial markets. It would help me sleep at night. For now, I think the idea is not totally unfounded [1], and therefore quantum computing, for all its positives (e.g. fast solutions to hard optimisation problems, which would have massive applicability), scares the living daylights out of me.

[1] https://web.archive.org/web/20121115112940/http://people.ccm...


> If a single organization develops the ability to factor large numbers at unheard-of speeds, doesn't this put in jeopardy the mathematical backbone of all encryption, and therefore don't they pose a threat to all financial institutions?

Two things:

1. It's extremely unlikely that this will happen overnight, and practical factoring is still far beyond today's quantum computing capabilities.

2. There's work done on developing post-quantum-cryptography (which is in essence all cryptography that we believe cannot be broken by a QC). NIST, the US standardization organization, is currently running a competition for future standards. It has some challenges, but it's doable.

That still leaves the issue that communications from the past may be decrypted in the future. But other than that - we will almost certainly have alternatives when QC factoring becomes real.


Very interesting, thanks, I didn't know it was an active research area, although that completely makes sense.

> That still leaves the issue that communications from the past may be decrypted in the future.

Honestly I didn't even think of this. Pretty freaky nonetheless.


Indeed. The best time to stop using RSA was decades ago, but now's also a good time. It's speculated that one of the reasons the NSA is storing so much encrypted data is that they think the contents will still be worth knowing many years from now when they can finally decrypt it.


"The best time to stop using RSA was decades ago..."

There are arguments against using RSA, but I don't see how Quantum Computing is one of those. The common elliptic curve alternatives are even more vulnerable to QC than RSA, while the new QC-resistant algorithms aren't accepted yet.

So if QC is your concern, you have algorithms worse than RSA in one corner, and not-entirely-tested algorithms in the other.


There's an argument to be made that maybe we'd be using NTRU by now if its inventors hadn't decided to file patents for it. Luckily for us, the patents have expired now.

Patents are the most reliable way to make sure your crypto isn't used widely for 20 years...


Not just crypto. Arithmetic coding, known to have better compression than Huffman, is almost unused due to patents.


Anybody understand the details of how they compute the F_XEB fidelity estimator?

I'm very perplexed that the main article states that "P(x_i) is the probability of bitstring x_i computed for the ideal quantum circuit", but Supplementary Information section IV.C seems to argue that if the "qubits are in the maximally mixed state" (i.e. the quantum computer doesn't work) then "the estimator [F_XEB] yields zero fidelity" since P(x_i) = 1/2^n for every i in this case. To me this doesn't make sense, since P(x_i) = 1/2^n is clearly the probability mass function of the empirical non-ideal quantum circuit output distribution (and not the ideal one).

I've posted a question on the QC stack exchange: https://quantumcomputing.stackexchange.com/questions/8427/qu...


Turns out the reasoning in supplementary information IV.C is kind of flawed, but the results follow trivially from the definition of expectation and probability mass function. Wrote it up here: https://quantumcomputing.stackexchange.com/a/8565/8704
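For anyone else stuck on the same point: as I read the paper, the estimator is F_XEB = 2^n * <P_ideal(x_i)> - 1, where the x_i are the bitstrings actually sampled from the device and P_ideal is the ideal circuit's output probability. A quick NumPy sketch (a toy stand-in distribution, not real circuit data) shows the two limiting cases:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10
    dim = 2 ** n

    # Stand-in for an ideal random-circuit output distribution
    # (Porter-Thomas-like: exponentially distributed weights, normalized).
    p_ideal = rng.exponential(size=dim)
    p_ideal /= p_ideal.sum()

    def linear_xeb(samples):
        # F_XEB = 2^n * mean_i P_ideal(x_i) - 1
        return dim * p_ideal[samples].mean() - 1.0

    ideal_samples = rng.choice(dim, size=100_000, p=p_ideal)  # a perfect device
    mixed_samples = rng.integers(0, dim, size=100_000)        # maximally mixed device

    print(linear_xeb(ideal_samples))  # close to 1
    print(linear_xeb(mixed_samples))  # close to 0

Samples drawn from the ideal distribution push the estimator toward 1, while uniform samples (the maximally mixed case) push it toward 0, which is the behaviour the supplementary information describes.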


I've never ever seen such a brutal sentence in science before:

"In summary, unlike the objections of the IBM group, so far I’ve found Gil’s objections to be utterly devoid of scientific interest or merit."

Yikes.


"Quantum supremacy is the potential ability of quantum computing devices to solve problems that classical computers practically cannot." From: https://en.wikipedia.org/wiki/Quantum_supremacy

IBM's response: "We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced." From: https://www.ibm.com/blogs/research/2019/10/on-quantum-suprem...


> same task can be performed on a classical system in 2.5 days and with far greater fidelity.

Currently at a much greater cost in time and space, which is why they are saying "can be performed" not "has been performed".

> we expect that with additional refinements the classical cost of the simulation can be further reduced

Any such refinements would be certainly interesting to see, as they would probably be very hard won.


> practically

or "efficiently" are terms of art for computer scientists and mean that there's a polynomial upper bound.

On the other hand IBM also claims (from the graphs? not sure if they have anything more to back up the claim) that their scheme "scales approximately linearly with the circuit depth" which would likely make the experiment meaningless.

The entire post is a bit odd though since it starts out with a serious discussion of computer science topics and then swerves into criticism of how the media misuses terms.


Worst case scenario, my password is going to be about 24 characters long, and we’ll all be switching to 4096-bit encryption, right? Is this technology really going to functionally change encryption or is it just going to be like the switch to 64-bit computing, mostly a side-story?


Archived copy, which can be read without JS enabled:

https://archive.is/Mly19


The Linear Cross-Entropy Benchmark is the framework used to estimate the accuracy of the random-circuit sampling -- that is, to determine how likely it is that the random circuits are being executed correctly on the quantum computer.

From Scott Aaronson's blog post about this paper: https://www.scottaaronson.com/blog/?p=4372 -- he shares some thoughts on whether this benchmark is itself easy or hard to simulate on a classical computer.

> Ah, but at the end of the day, we only believe that Google’s Sycamore chip is solving a classically hard problem because of the statistical test that Google applies to its outputs: the so-called “Linear Cross-Entropy Benchmark,” which I described in Q3 of my FAQ. And even if we grant that calculating the output probabilities for a random quantum circuit is almost certainly classically hard, and sampling the output distribution of a random quantum circuit is almost certainly classically hard—still, couldn’t spoofing Google’s benchmark be classically easy?

He goes on to supply an argument that it's a hard test to spoof classically.

I'm wondering, though -- has anyone addressed how hard it would be to spoof this benchmark using a collection of smaller (potentially non-scalable) quantum computers? E.g., if it were possible to spoof the benchmark with a smaller quantum computer, would that undermine the argument in the paper that this benchmark is evidence of quantum computing scalability?


LOL.

I wrote about half of the wikipedia article that Google is using to explain how their quantum computer works (https://en.wikipedia.org/wiki/Quantum_logic_gate)

Should probably have done that under a pseudonym, and not just anonymously using my IP address.

But at least the IP address I used for last weekend's work is my home's static IP, which almost never changes. But still, I need an account at Wikipedia!


I'm a little worried about something in quantum computing.

If I treat my quantum computer as just a channel for sending a message to myself I think I can invoke the channel capacity theorem. Furthermore, I think that if I try to read an entangled state, it's as if I am reading the entire state all at once, so the maximum number of bits I should be able to read is bounded by the logarithm of the ratio of the signal to the noise, where noise scales with temperature. That number doesn't scale well. It suggests that if I want to read an additional bit from a quantum computation, I need to drop the temperature by half.

Furthermore, I introduce heat to my quantum computer when I input the state. I must construct the input to my quantum computer from a uniform mixture by the no cloning principle. This is going to generate heat, and the rate at which I can dissipate that heat is bounded by the speed of light. This gives me an upper bound on my temperature of O(1/t^3) assuming extremely optimal heat dissipation. This means that the time required to cool my state to the point where I can read n entangled bits is exponential in n, right?


No. I think this is where your logic is off: "Furthermore, I think that if I try to read an entangled state, it's as if I am reading the entire state all at once"

You can't read the entire state. A quantum channel where you send n-bits lets you read n-bits, not exponential in n-bits.

There is indeed reduction of signal as you increase temperature. Under the error correction theorem, you can amplify that signal back to full strength as long as you are under a threshold temperature (but with ever-increasing resources as you approach the threshold temperature.)

Also this: "This is going to generate heat, and the rate at which I can dissipate that heat is bounded by the speed of light". Sure, the heat has to be transferred out of the system, which takes time that is bounded by some distance divided by the speed of light. But the qubit is on a 2D chip surrounded by a 3D refrigerator. The distance doesn't necessarily increase with increasing qubits.


>You can't read the entire state. A quantum channel where you send n-bits lets you read n-bits, not exponential in n-bits.

An entangled state is different. Since I can read all of the bits outside of each other's light cones, they cannot causally influence each other, which is why I said "as if" instead. There is no such thing as "all at once", but I can properly read each qubit before all of the others, as far as that qubit is concerned, and it would still obey the correlation. The reading of an entangled quantum state is indeed a single measurement.

>Under the error correction theorem, you can amplify that signal back to full strength as long as you are under a threshold temperature (but with ever-increasing resources as you approach the threshold temperature.)

2) I don't think that quantum error correction ultimately solves the problem of heat. It tries to solve the problem of single bit/sign flips from not doing your computation in a closed system. My argument assumes that the only things that exist in the entire universe are the computer and the heat.

"This is going to generate heat, and the rate at which I can dissipate that heat is bounded by the speed of light". Sure, the heat has to be transferred out of the system, which takes time that is bounded by some distance divided by the speed of light. But the qubit is on a 2D chip surrounded by a 3D refrigerator. The distance doesn't necessarily increase with increasing qubits.

3) you're misinterpreting my statement of the problem. I don't have to get each additional qubit just as cold as the original. Every qubit I add requires me to get the entire quantum state to half the temperature that I had previously. The geometry of your refrigerator here only matters in that it's finite dimensional.


I've just skimmed the Wikipedia article about quantum computing... and am I wrong in thinking that it is not a big deal from a purely practical perspective? Some cryptography would have to be updated and a few specialized problems would become feasible.

To be honest, after the hype I was expecting there would be more applications.


This breakthrough is only a big deal from a practical perspective in that it very strongly suggests that further research on engineering large quantum computers will provide the exponential performance improvements predicted by quantum complexity theory. That is to say that this is a strong “go” signal in a go/no-go test.

In terms of where I am personally excited about quantum computing, it has the potential to greatly speed up simulations of chemical and material science properties that are currently hard to simulate accurately at all. This will be great for, for example, semiconductor development and medicine. These applications are a ways off though, but if we weren’t hitting milestones like today, it would suggest they were not worth vigorously pursuing.

Disclaimer: I work at Google, not on quantum computers. Not an expert in quantum computation.


For practical purposes, it's even more than that. Current quantum computers are not useful for anything. Not even for breaking cryptography.


Optimising neural networks has great potential for quantum computers.


Wouldn't it make more sense to optimize memristive technologies?


I see it as very similar to 20th-century space tech. Yes, it technically provides something unthinkable otherwise. But what is its real utility? Most if not all of the biggest civil applications, like remote sensing, radio communication and navigation, are quite doable without it. It fails to deliver the biggest dream of trans-galactic travel, and there isn't even a roadmap to it. It doesn't help much with our everyday ground-based transportation. So in this century we start to think: maybe this is just a big waste of energy, which now needs to be spent on resolving far bigger real issues, like keeping our planet habitable.


Scott Aaronson's post on the latest announcement. Also covers the IBM and Gil Kalai objections.

https://www.scottaaronson.com/blog/?p=4372


I kind of thought of Civilization: Beyond Earth when I first heard the term “Quantum SUPREMACY”.

https://civilization.fandom.com/wiki/Supremacy_(CivBE)


There was some criticism of an early leaked draft (e.g. [0]), can anyone comment if the issues raised have been addressed?

[0] https://news.ycombinator.com/item?id=21167368


The most important update is this, which was not in the draft: "The datasets generated and analysed for this study are available at our public Dryad repository".

In the coming days, I am sure many will perform independent analyses of these datasets.


Is there any site where I can learn about quantum computing without feeling like a dumbass?



Useful, thanks. Does anyone know if the authors will do a q&a or ama?


For people more plugged in to quantum, I have a genuine question. Where did the term "supremacy regime" come from?


Looks like I've missed the boat on this one. :)


Didn't IBM just dispute these claims?


Yes, but a lot of people consider the disputes correct yet irrelevant to the main point, especially since IBM's argument is that the computation could in fact be executed on today's classical hardware, even though they have not actually done it themselves (it would be prohibitively expensive, which rather confirms Google's point).

See e.g. https://www.scottaaronson.com/blog/?p=4372


OK but `npm install` will still take a century to install dependencies.


It's not "computing" if it's not Turing complete


Our machine performed the target computation in 200 seconds, and from measurements in our experiment we determined that it would take the world’s fastest supercomputer 10,000 years to produce a similar output.

HFS!


Sounds like that’s something that would be really hard to prove :/.


You can make a problem that is hard to compute but easy to verify. You can also use known big-O computational complexity estimates for how long it would take on a classical computer. Not too hard to prove.


Sounds more like it's easy to prove but we just don't care enough/don't have the patience required.


Another article said IBM scientists were not impressed and estimated it would only take 2.5 years. But still, right?


IBM is claiming 2.5 days not years, apparently.


days, comma, one of those sigh


How fucked is https and cryptography if this turns out to be true?


This is nowhere close to breaking PKI algorithms. The quantum algorithm that can break PKI (Shor’s algorithm) will require processors with thousands of qubits. This 53-qubit processor is the current state of the art, and there are huge engineering and interference challenges to overcome as the qubit count increases.

As for where we are with Shor’s algorithm, it works by figuring out the prime factors of an integer (which is how RSA keys are constructed). In 2001, IBM factored the number 15 (into 3 x 5) using a quantum computer with 7 qubits. In 2012, researchers were able to factor 21 into 3 x 7, which is still the largest number factored using Shor’s algorithm.
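For intuition on how factoring reduces to period finding (the only step the quantum computer actually speeds up), here is a toy classical sketch for N = 15. The helper names are mine, and the order-finding loop is brute force, which is exactly the part Shor's algorithm replaces:

    from math import gcd

    def find_order(a, n):
        # Smallest r > 0 with a^r = 1 (mod n); this is the step a QC accelerates.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def factor_via_order(n, a):
        if gcd(a, n) != 1:
            return gcd(a, n), n // gcd(a, n)   # lucky: a already shares a factor
        r = find_order(a, n)
        if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
            return None                        # bad choice of a; retry with another
        f = gcd(pow(a, r // 2) - 1, n)
        return f, n // f

    print(factor_via_order(15, 7))  # (3, 5): order of 7 mod 15 is 4, gcd(7^2 - 1, 15) = 3

The brute-force order-finding loop above takes time exponential in the bit length of n, which is why this only becomes a practical threat once there is a quantum computer large enough to run the period-finding step on cryptographic key sizes.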

Given that cryptographic keys run into the hundreds and thousands of bits, Shor’s algorithm isn’t going to be a threat anytime soon. But note, too, that it’s focused on RSA, which is the original PKI algorithm. Newer algorithms exist which aren’t threatened by Shor’s algorithm, and work is already underway to develop new quantum-resistant PKI algorithms.

We’ll hear about it when low-bit RSA is easily cracked. And for several years it will be good enough to just increase bit lengths to stay ahead of it. Given all of that, I’d expect it’ll be 20-30 years before quantum computers are a threat to today’s PKI. And by then, PKI will be 20-30 years on from what it is now.


It is also worth noting that breaking RSA with Shor's algorithm requires thousands of _error-corrected_ qubits. This corresponds to millions of physical qubits with typical fidelities and nobody knows how to scale a system to that size.
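Rough numbers, with the caveat that both constants below are illustrative assumptions rather than anything from the paper: published resource estimates for factoring 2048-bit RSA tend to land around a few thousand logical qubits, each needing on the order of a thousand physical qubits of surface-code overhead at ~10^-3 error rates.

    # Back-of-the-envelope only; both constants are rough assumptions.
    rsa_bits = 2048
    logical_qubits = 3 * rsa_bits          # order-of-magnitude for Shor-style circuits
    physical_per_logical = 1000            # rough surface-code overhead at ~1e-3 error rates

    print(logical_qubits * physical_per_logical)  # ~6 million physical qubits

Compare that with the 53 physical, non-error-corrected qubits demonstrated here.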


not


Without having read the paper, is the claim that it would take 10,000 years to emulate the results on a classical computer?


I wouldn't say "emulate", because a classical computer wouldn't be trying to solve it the same way as the quantum computer. The point (which would be amazing if true) is that a classical supercomputer would take 10,000 years to solve the same problem, no matter what method/algorithm it used.


Of course "any algorithm" part can't be proved, because that is equivalent to proving P vs PSPACE. (PSPACE is as powerful as any quantum computer.) Just like everything in computational complexity, proof is under "standard assumptions".


For those that downvoted me, Scott Aaronson clarifies here that they do indeed mean simulate on a classical computer:

https://www.scottaaronson.com/blog/?p=4372


It's in the second paragraph.

"Our machine performed the target computation in 200 seconds, and from measurements in our experiment we determined that it would take the world’s fastest supercomputer 10,000 years to produce a similar output."


I read the article, just not the actual paywalled paper, and this statement says absolutely nothing. Here's a computation that takes me less time than a supercomputer would take to calculate classically:

10e10000000000 + 10e10000000000 = 2*10e10000000000


I do remain skeptical.


Google is sending the computer in question and a team of their best scientists to convince you. Hang tight!


Thanks for elaborating.


We can't give everyone a quantum computer (yet).

But could we use this quantum processor to aid the rapid design of far more powerful classical computers?


Current quantum processors are not useful for anything, so they can't aid the design of classical computers (among other things).


Designing algorithms and implementing them on quantum hardware is not straightforward.



