
Quantum computers: amazing progress, but probably false supremacy claims - tolmasky
https://gilkalai.wordpress.com/2019/09/23/quantum-computers-amazing-progress-google-ibm-and-extraordinary-but-probably-false-supremacy-claims-google/
======
lawrenceyan
The usage of the word “supremacy” is probably the primary thing that rubs
people the wrong way. When I first heard of it myself, it did feel a little
strange.

Doing some research into it though, it seems like the phrase and usage of
quantum supremacy is pretty well defined and accepted by and large within the
physics community. Supposedly the coiner of the term believed “quantum
advantage” wouldn’t emphasize the point enough.

What the term defines is relatively straightforward to achieve, though. It's
basically about definitively demonstrating a problem where a classical
computer would take super-polynomial time, but a quantum computer takes some
significantly lower time complexity. Google’s Quantum AI Lab achieved exactly
that, based on the results provided in the paper released by NASA. Granted,
the problem in question is extremely
specific, but that doesn’t invalidate it as an admissible problem. The
author raises points in contention, but they largely just seem like nitpicks
to me.

~~~
thethirdone
> The author raises points in contention, but they largely just seem like
> nitpicks to me.

They don't seem like just nitpicks to me. The core issue the author raises is
the noise in the final output: if the quantum computer can only produce
significantly noisy data (i.e., data not in accordance with the theoretical
distribution) while the conventional computer produces noiseless data, then
that isn't a clear case of quantum supremacy.

~~~
lawrenceyan
We can look directly to the paper itself to see how it addresses the issues of
error uncertainty that the author premises his blogpost around:

“This fidelity should be resolvable with a few million measurements, since the
uncertainty on FXEB is 1/√Ns, where Ns is the number of samples. Our model
assumes that entangling larger and larger systems does not introduce
additional error sources beyond the errors we measure at the single and two-
qubit level — in the next section we will see how well this hypothesis holds.

FIDELITY ESTIMATION IN THE SUPREMACY REGIME

The gate sequence for our pseudo-random quantum circuit generation is shown in
Fig.3. One cycle of the algorithm consists of applying single-qubit gates
chosen randomly from {√X,√Y,√W} on all qubits, followed by two-qubit gates on
pairs of qubits. The sequences of gates which form the “supremacy circuits”
are designed to minimize the circuit depth required to create a highly
entangled state, which ensures computational complexity and classical
hardness. While we cannot compute FXEB in the supremacy regime, we can
estimate it using three variations to reduce the complexity of the circuits.
In “patch circuits”, we remove a slice of two-qubit gates (a small fraction of
the total number of two-qubit gates), splitting the circuit into two spatially
isolated, non-interacting patches of qubits. We then compute the total
fidelity as the product of the patch fidelities, each of which can be easily
calculated. In “elided circuits”, we remove only a fraction of the initial
two-qubit gates along the slice, allowing for entanglement between patches,
which more closely mimics the full experiment while still maintaining
simulation feasibility. Finally, we can also run full “verification circuits”
with the same gate counts as our supremacy circuits, but with a different
pattern for the sequence of two-qubit gates which is much easier to simulate
classically [29]. Comparison between these variations allows tracking of the
system fidelity as we approach the supremacy regime. We first check that the
patch and elided versions of the verification circuits produce the same
fidelity as the full verification circuits up to 53 qubits, as shown in
Fig.4a. For each data point, we typically collect Ns=5×10^6 total samples over
ten circuit instances, where instances differ only in the choices of single-
qubit gates in each cycle. We also show predicted FXEB values computed by
multiplying the no-error probabilities of single- and two-qubit gates and
measurement [29]. Patch, elided, and predicted fidelities all show good
agreement with the fidelities of the corresponding full circuits, despite the
vast differences in computational complexity and entanglement. This gives us
confidence that elided circuits can be used to accurately estimate the
fidelity of more complex circuits. We proceed now to benchmark our most
computationally difficult circuits. In Fig.4b, we show the measured FXEB for
53-qubit patch and elided versions of the full supremacy circuits with
increasing depth. For the largest circuit with 53 qubits and 20 cycles, we
collected Ns=30×10^6 samples over 10 circuit instances, obtaining
FXEB=(2.24±0.21)×10^−3 for the elided circuits. With 5σ confidence, we assert
that the average fidelity of running these circuits on the quantum processor
is greater than at least 0.1%. The full data for Fig.4b should have similar
fidelities, but are only archived since the simulation times (red numbers)
take too long. It is thus in the quantum supremacy regime.”

Links to the figures can be found by viewing the paper directly here. If you
have the time, I highly recommend reading the paper in full regardless:
[https://www.docdroid.net/file/download/h9oBikj/quantum-supremacy-using-a-programmable-superconducting-processor.pdf](https://www.docdroid.net/file/download/h9oBikj/quantum-supremacy-using-a-programmable-superconducting-processor.pdf)
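As a rough sanity check on the numbers in that excerpt (my own back-of-envelope arithmetic, not anything from the paper):

```python
import math

# Figures quoted above: 53 qubits, 20 cycles, elided circuits.
f_xeb = 2.24e-3      # measured linear cross-entropy fidelity, FXEB
sigma = 0.21e-3      # reported uncertainty on FXEB
n_samples = 30e6     # Ns, total samples over 10 circuit instances

# Statistical uncertainty on FXEB scales as 1/sqrt(Ns), per the excerpt.
statistical_sigma = 1 / math.sqrt(n_samples)
print(f"1/sqrt(Ns) = {statistical_sigma:.2e}")  # ~1.8e-4, in line with the reported 2.1e-4

# The 5-sigma claim: how far above the 0.1% fidelity threshold is the measurement?
z = (f_xeb - 1.0e-3) / sigma
print(f"{z:.1f} sigma above 0.1%")  # ~5.9, consistent with the 5-sigma assertion
```

So the reported error bar and the 5σ statement are at least internally consistent with the quoted 1/√Ns scaling.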

~~~
computerex
> We can look directly to the paper itself to see how it addresses the issues
> of error uncertainty that the author premises his blogpost around

Can you demonstrate how the author "premises his blogpost around" the "issues
of error uncertainty"? I don't think that was the main premise of the paper,
at all.

~~~
lawrenceyan
I’m unsure of what you’re trying to say here. The issues that Kalai poses as
to the validity of the experiments done by Google’s Quantum AI Lab is that the
difference between the ideal distribution D and sampled distribution D’ is
meaningfully different enough from each other, that significant results cannot
be obtained from the experiment in comparing performance to classical
simulations.

The excerpt I posted above from the actual paper directly addresses the points
made by Kalai, and provides reasoning and analysis for determining the 5 sigma
confidence of their results.

This is, again, why I said in my original post that I believed Kalai to be
nitpicking: the additional statistical testing he proposed, while surely never
a bad thing, wouldn’t do anything to change the ultimate confidence
determinations and results. Kalai, of course, believes that the testing was
insufficient. The easiest way to resolve such an issue is to perform the
additional work that Kalai asks for in order to address his suspicions. I
personally have no problem with that. It’s never a bad thing to do more tests.

~~~
computerex
I think you have serious conceptual holes in your understanding of the post.

> The issues that Kalai poses as to the validity of the experiments done by
> Google’s Quantum AI Lab is that the difference between the ideal
> distribution D and sampled distribution D’ is meaningfully different enough
> from each other, that significant results cannot be obtained from the
> experiment in comparing performance to classical simulations.

This is categorically false. Can you quote the passage(s) that led you to
this conclusion?

~~~
lawrenceyan
> By creating a 0-1 distribution we mean sampling sufficiently many times from
> that distribution D so it allows us to show that the sampled distribution is
> close enough to D. Because of the imperfection (noise) of qubits and gates
> (and perhaps some additional sources of noise) we actually do not sample
> from D but from another distribution D’. However if D’ is close enough to D,
> the conclusion that classical computers cannot efficiently sample according
> to D’ is plausible.
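To make the D versus D’ question concrete, here is a toy sketch of the linear cross-entropy estimator (FXEB) that the Google paper uses to quantify that closeness. The 5-qubit "ideal" distribution below is a made-up stand-in for D, not anything from the actual experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 5
dim = 2 ** n_qubits

# Made-up stand-in for the ideal output distribution D of a random circuit;
# the real experiment obtains these probabilities from classical simulation.
p_ideal = rng.dirichlet(np.full(dim, 0.5))

def f_xeb(samples, p_ideal, dim):
    """Linear cross-entropy fidelity estimate: dim * <P(sample)> - 1.
    Roughly 0 if the samples are uniform noise, positive if they
    concentrate on the high-probability outputs of D."""
    return dim * p_ideal[samples].mean() - 1

# Sampling from D itself (the noiseless case) vs. pure noise (uniform).
ideal_samples = rng.choice(dim, size=200_000, p=p_ideal)
noise_samples = rng.integers(dim, size=200_000)
print(f_xeb(ideal_samples, p_ideal, dim))  # noticeably above 0
print(f_xeb(noise_samples, p_ideal, dim))  # close to 0
```

A noisy sampler D’ lands somewhere in between, which is why the paper's FXEB values of order 10^-3 are small but still distinguishable from zero given enough samples.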

~~~
computerex
The passage you quoted is under the:

> Achieving quantum supremacy via sampling

heading, and it is simply describing some technical details of the experiment.
This is _not_ the author raising any points of contention; there is no
disagreement here at all. It merely defines the criteria for quantum
supremacy.

I highlighted his points of contention as a response to your original comment
that said he is nitpicking, here:
[https://news.ycombinator.com/item?id=21168813](https://news.ycombinator.com/item?id=21168813)

His main point of contention seems to be:

They need to understand the D' distribution more by running the experiment on
lower qubit configurations, comparing the experimentally sampled distributions
with one another across qubit configurations and across multiple runs of the
same qubit configurations. As it is, he says that they may not have even
sampled from D'. The burden of proof is on the experimenters to quantitatively
show that they did.

There were other issues raised, like Google not being quantitative enough with
their claims of the gains achieved in their _supremacy statement_.

He also brings up a more general issue with quantum computing in correlated
errors, which are described in more detail in his paper here:
[http://www.ma.huji.ac.il/~kalai/Qitamar.pdf](http://www.ma.huji.ac.il/~kalai/Qitamar.pdf)

But it boils down to this: qubit logic gates experience positively correlated
errors, which, unless corrected with quantum fault tolerance, will have an
impact on any result.

I hope this clears up some misconceptions. In general, it is a good idea to
use the principle of charity and try to address the best possible
interpretation of someone's argument. This is all the more true when
commenting on someone who is literally close to the top of their field.

------
kevinwang
Discussion around the matching bullish take:
[https://news.ycombinator.com/item?id=21053405](https://news.ycombinator.com/item?id=21053405)

~~~
mlthoughts2018
What do you mean by “matching”? The bullish take is just a _different_ take
that presumably must develop answers to the points of this post to remain a
viable belief option.

It actually strikes me somewhat as editorializing to place this link here with
the wording you chose.

If anything in your linked discussion actually addresses the substantive
points of this post, why not link to those items specifically? What would a
generic link to discussion on it that’s not tied to this post’s claims be
contributing? Why would it matter if that post was “bullish”? If it adds some
context related to this post, what is that context?

~~~
antonvs
An obvious interpretation of "matching" here is "as bullish as the OP take is
bearish."

------
sriku
I've had my eyes on Adrian Thompson's genetically evolved FPGA circuits for a
while now: they do amazing things very economically and exploit analog
circuit properties. So I've always wondered: what if we made unreliable but
super tiny atomic-level programmable gates, where we know the unreliability
stems from quantum fluctuations, and then evolved circuits over millions of
generations (A.T. ran thousands) to see if they manage to start exploiting
the quantum effects?

PS: I don't have the expertise to make a strong argument here, but it seems
like an intriguing idea. Anything fundamentally against it?
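Purely to make the evolutionary-search idea concrete, here is a minimal genetic-algorithm loop with a noisy fitness evaluation standing in for an unreliable physical circuit. Everything here (genome length, mutation rate, the fitness function itself) is a made-up placeholder, not a real gate model:

```python
import random

random.seed(1)

GENOME_LEN = 64   # stand-in for a tiny programmable-gate configuration
POP_SIZE = 50
GENERATIONS = 200

def fitness(genome):
    # Placeholder objective (count of 1 bits) measured through a noisy
    # channel, standing in for "does this unreliable circuit do the task?"
    return sum(genome) + random.gauss(0, 2)

def mutate(genome, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in genome]

def crossover(a, b):
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 5]  # truncation selection on noisy scores
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]

best = max(population, key=sum)
print(sum(best), "of", GENOME_LEN, "bits set")
```

The open question in the comment above is whether, with real hardware in the loop, selection like this could latch onto genuinely quantum behaviour rather than just averaging the noise away.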

~~~
Nokinside
'Quantum effects' are already exploited in analog components. TFETs work by
modulating quantum tunneling. Reverse breakdown in a Zener diode is also a
quantum effect. The Esaki diode uses quantum effects as well.

Unfortunately, a single component relying on quantum effects has nothing to
do with quantum speedup in computation.

Quantum computation exploits entanglement on a larger scale than normal. The
whole computational state must be an entangled quantum state; separated
quantum effects yield just a classical computer. The quantum circuit must be
carefully arranged so that the interference pattern yields the result you
want.

~~~
sriku
Well, trivially that's a yes. However, the "specification level breach"
property of A.T.'s work is what intrigued me: the circuits are specified to
do a digital task, but work in an analog manner (constructing antennas and
receivers), to the extent that the same digital circuit wouldn't work when
written to another FPGA.

Also, entanglement doesn't need to be perfect. You could have 1% entanglement
and have that propagate over time and operations. The question is whether an
evolved circuit can figure out pathways to use that little bit of
entanglement in ways our understanding doesn't quite admit... in much the
same way that a digital FPGA designer wouldn't think of using NOT gates as
antennae.

An unreliable circuit achieved this way would also be interesting I think.

~~~
Nokinside
Entanglement is very fragile. You can't keep even a "little bit" of
entanglement at any normal temperature.

------
gregfjohnson
As far as QC scalability goes, the thing I wonder about is the cost of
maintaining full entanglement of N qubits as N grows large.

The debbie-downer perspective would be that for each additional qubit you add,
you effectively double the cost of isolation from the environment, quantum
error correction schemes, etc.

So, while compute power for quantum algorithms grows exponentially in N, so
would the cost of operating the machine.
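To spell that hypothetical out in numbers (a made-up cost model for illustration, not real engineering data):

```python
# If each added qubit doubles the isolation/error-correction cost, then the
# operating cost grows at the same exponential rate as the entangled state
# space, and the cost per unit of "quantum compute" never improves.
for n_qubits in (10, 20, 30, 40):
    state_space = 2 ** n_qubits  # dimension of the fully entangled state
    cost = 2 ** n_qubits         # hypothetical cost under the doubling assumption
    print(n_qubits, state_space / cost)  # ratio stays constant at 1.0
```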

Do people who work in QC see this as a concern? Are there scientific arguments
or engineering insights that lessen or obviate this concern?

For your enjoyment and amusement, here is a QC-related show-HN!

"An elementary proof of a key lemma in Shor's quantum factoring algorithm":
[http://gregfjohnson.com/qft.html](http://gregfjohnson.com/qft.html)

~~~
johncolanduoni
This is the primary concern and barrier to most realizations of quantum
computers including Google’s most recent efforts, though there are exceptions
like linear optical quantum computing which have different problems. So for
people working in QC this is in fact the perspective, though they are mostly
optimistic that we can continue to engineer/discover better solutions to this
problem.

------
undersuit
My level of expertise on quantum computing is low.

The announce of the imminent advent of the Quantum Computer seems to be
recurring every few months these last few years because it's a moving target.

HN recently featured an article about a supposed quantum-only algorithm being
applied to normal computing. Algorithms are a vastly larger space to explore
than general computing paradigms.

Computer hardware is still advancing in performance even though
single-threaded performance is stalling by comparison.

We're moving away from general computing (this is always happening), but the
current environments are pushing us back towards transputers. There's a place
for special-purpose quantum hardware, but right now the money is in making
the most general quantum CPUs possible (D-Wave being an exception, because
annealing has many uses where it excels.)

~~~
the8472
> The announce of the imminent advent of the Quantum Computer seems to be
> recurring every few months these last few years because it's a moving
> target.

That's mostly misreporting by the media and perhaps d-wave overselling their
capabilities. If you actually listen to experts they're far more guarded in
their statements.

~~~
johncolanduoni
Yes, there has long been a lot of animosity in the quantum information
community against D-Wave for this very reason, as researchers are worried
about them over-promising and ushering in an analogue of the AI winter as a
result.

------
known
Is quantum computing immune to
[https://news.ycombinator.com/item?id=14018450](https://news.ycombinator.com/item?id=14018450)?

------
rolltiide
Am I understanding correctly that the whole idea is to get a bunch of qubits
in one machine and then have them evaluate every possibility in a super dumb
way, but count on them to get the desired result faster than a
transistor-based machine that knows what it's looking for?

And that we currently just can't coordinate that many qubits at once and also
can't keep them cool long enough to function??

This seems kind of ridiculous idk...

~~~
johncolanduoni
This is not the case and is a constant frustration researchers have with how
quantum computing is represented in the media. So much so that Scott Aaronson
(who works on computational complexity theory, particularly as it relates to
quantum computing) has a statement telling you that’s not the case in the
heading of his blog:
[https://www.scottaaronson.com/blog/](https://www.scottaaronson.com/blog/)

~~~
rolltiide
I had to skim, but that article seems more of a gripe with “p-bits” being an
even dumber concoction of standard computers

I didn't see how it dissected my understanding

edit: oh I see I had read the first post on that blog thinking it was what the
attention was supposed to be on

~~~
fhars
The link was not to the article, as the post you replied to clearly stated.

------
tolmasky
I attempted to put in the complete title, but it was 35 characters too long.
I have no personal take on this and am not attempting to editorialize the
title; if the mods have a different opinion on what the proper shortening of
the original title should be (or if they have the power to put in the actual
complete title), then I am all for it.

Here is the complete title: Quantum computers: amazing progress (Google &
IBM), and extraordinary but probably false supremacy claims (Google).

------
quanticitya
If quantum supremacy was not possible, wouldn't that mean that something is
wrong with our physics understanding?

So when people say quantum supremacy is impossible, do they mean that the
device itself is extremely complicated to build (like an Earth-to-Moon
elevator, for example), or that quantum supremacy isn't allowed even in
principle?

~~~
doubleunplussed
Not necessarily. I believe it's the case that for every quantum algorithm that
is exponentially better than the best known classical algorithm for solving
the same problem, it is not _proven_ that an equally good (up to polynomial
overhead) classical algorithm does not exist.

Therefore, one way quantum computers could fail to be exponentially better
than classical ones, without us having to revise physics, is that there are as
of yet unknown classical algorithms that would erase the apparent difference
between the two classes. You would have quantum computers, but not quantum
supremacy.

I believe that there are quantum algorithms that are provably better than any
classical one, known or unknown, but I think these only provide at most
polynomial speedup, not exponential. So you might still call it quantum
supremacy to have algorithms that are merely polynomially better.

I've heard it called "Aaronson's trilemma", the fact that at least one of
these three things must be true:

* Quantum computers are not possible even in principle (new physics required, since current physics says they are), or

* The extended Church-Turing thesis is incorrect (because quantum supremacy implies not all computers are within polynomial overhead of each other), or

* There exist polynomial time classical algorithms for factoring and discrete logarithms.

~~~
mlthoughts2018
I remember seeing a lecture by Aaronson where he described the idea of a
“relativity computer” where you set a computer running on some exponential
task, but then hop in a spaceship and fly off at nearly the speed of light,
and come back to find the answer in an amount of time only polynomial for you.

He “debunked” this by mentioning to do it you’d potentially need an
exponentially large supply of fuel to accelerate yourself to a speed
exponentially close to the speed of light as the problem to solve grows more
complex.

I wonder if something similar could be true about quantum computing. Maybe it
is theoretically possible to solve problems polynomially that can only be
solved exponentially in a classical computer. But maybe it requires
“exponentially more” of some resource, like a power supply to power the
physical devices needed to do an amount of error correction to make it
physically realizable as the problem size grows.

Would it be possible for the Church-Turing thesis to be false
_mathematically_, but for the amount of quantum error correction to require
an amount of physical resources that grows prohibitively and prevents it from
being practically possible (like the fuel limitation prevents the otherwise
physically possible relativity computer)?

~~~
greatquux
The universe may not see any difference between the two states!

------
stephc_int13
My level of expertise on quantum computing is very low.

The announcement of the imminent advent of the Quantum Computer seems to have
been recurring every few months for at least 20 years; at this point I don't
care anymore.

~~~
shroedingaway
As a licensed quantum computologist (um... not really, but I do work for a
major QC effort)... your skepticism is not misplaced.

I can only speak for my place of work (not publicly) and pass along
scuttlebutt... but, my understanding is that it's largely the same everywhere.
Some efforts have tight budgets, some have billions backing them; but QC
research is hugely expensive, with more unknowns than knowns. People giving
out the cash want results sooner rather than later. At my workplace, the
researchers are
in a continual pitched battle with management to keep a rein on marketing,
limit expectations, etc., and it's a major distraction. When management
misunderstands an early result, and oversells it to their superiors (if not
the press!), suddenly they think it's appropriate to make "Quantum Supremacy"
a task with a 6-month deadline.

It all makes me want to crawl into a box and take a nap.

~~~
AlexCoventry
That sounds demoralizing and corrosive. What keeps you there?

~~~
shroedingaway
I fired my therapist for asking that too often. Not really. But that is
uncomfortably close to the truth.

------
egdod
> As you know, I expect that quantum supremacy cannot be achieved at all.

There’s the source of his skepticism, not some specific flaw he’s identified
in these new results.

~~~
platz
Except for the part where he discusses at length his criticisms of the paper
itself, which apparently you don't wish to comment on.

