Scott’s Supreme Quantum Supremacy FAQ (scottaaronson.com)
603 points by xmmrm 22 days ago | 208 comments

If it goes well, the history of quantum computing will be divided into three eras: the era of twisty philosophical arguments that it's working ("the molecule is simulating itself"), the era of academic arguments that it's working ("we can solve this one carefully constructed problem"), and the era of practical arguments ("Amazon is selling QC time for $20/kilogate-bit, what do you mean it's not possible?"). Quantum supremacy marks the transition from the first era to the second.

The third stage is the hardest. Thermonuclear energy production has been sitting at "theoretically possible, but economically unviable" for decades; technologies like supersonic commercial flight have been achieved and then scrapped because of practical concerns; many others struggle to clear the economic-viability barrier. Taking that into account, getting from the 2nd to the 3rd stage will be the hardest task. Comparing to classical computers, their practical usability has been clear since Babbage's time; so far that's not the case for QC.

> Comparing to classical computers, their practical usability has been clear since Babbage's time

Is this true? I thought that in the 40s-50s there was some argument over whether practical computers could really be built, given that vacuum tubes were so unreliable. Von Neumann gave a series of lectures in 1952 (eventually transcribed into an article) showing how it could be done: http://arep.med.harvard.edu/gmc/Von_Neumann_1956ro.pdf , and the argument became more or less moot once transistors arrived.

There's perhaps an analogy here to be made with quantum error correction, but that seems to be a lot harder...

The first transistors date from 1947. We don't use that sort of transistor today, but unlike the vacuum tube, it would have been obvious that a computer built from these transistors was viable. You can't get to the iPhone from there, but room-sized machines that do arbitrary Turing computation are merely expensive. So that puts it in the realm of supersonic flight: if you must do it you can, but maybe you'll decide it's too expensive.

By 1950 then, the only question is whether we should, not whether we can.

Check out this one: https://en.wikipedia.org/wiki/Tabulating_machine Yes, not exactly a computer, definitely not a von Neumann one, but close enough to see how such a thing would be useful. Also, according to Wikipedia, this is the first thing ever to be called a "super computing" device. That's what I mean - there's a long history of continuously improving practical devices that led to the modern computer. For QC, we don't have that history.

Depends on what you mean by "computer", but being able to reliably calculate even just simple arithmetic has clear benefits. In the 1800s there would be countless clerks doing some sort of bookkeeping who'd benefit from it.

I suppose double entry would be your error correction. That also benefits from having a calculating machine.

Supersonic commercial flight is one of the most interesting. I think two strange things happened. First we got the internet - and mobile phones and video conferences and this has eroded the value of rapid physical presence. Second, and I think this is more important, people have become more willing to sacrifice their time for their career/money/firm/dream vs. family/happiness/now.

An acquaintance flew Concorde a lot in the 80s. The reason: "I could get to Heathrow in the morning, fly to New York, do a meeting, fly back and get home the next morning". Another friend is an exec at a bank now and goes on month-long odysseys to the USA, Asia and Australia. The contrast in perspective and commitment is striking; I don't think Concorde would matter at all to these folks.

Could the improvements in Business and First Class travel, combined with the increased use of private jets (also NetJets) have contributed to the downfall of the Concorde?

The more comfortable you are in the flight, the less the actual number of hours flown bothers you.

Yes - especially in flight calling, in flight working (ppt!) and in flight internet. Flight time != downtime for these folks anymore - which means that the per hour saved calculations are not going to convince finance.

Here is the issue explained in 10-minute youtube format by Wendover Productions: https://www.youtube.com/watch?v=n1QEj09Pe6k

Also the fact that airports are still a big time sink even with all the priority passes in the world.

No amount of money makes getting from your front door to the airport not be a massive pain, potentially taking over an hour if you have some bad luck.

Passenger queues are not an issue when you are departing from a GAT in your private jet. ;)

Traffic to the airport is.

If you can serve ads any faster with it, you bet it's gonna get deployed to production faster than nuclear fusion ever had hope of being :-P

quantum computing dual-state ads doing A/B testing simultaneously... I don't know if any of that just made sense, but I bet you could get some VC funding if you pitched it!

I don't think it's true for quantum computers. Folding proteins and such is a good use case for the cause, but the real driver is the military. Same as for gen-AI. If you don't R&D this but China does - you're fucked. So the only way not to end up in the disadvantageous position is to research it yourself.

> the real drive is the military

Although military investment has driven many different technologies over the years, ‘the military’ is actually a complex mix of the Armed forces themselves, government acquisition organisations, and prime contractors and their supply chains. The resultant acquisition processes of many countries are glacially slow, with sometimes decadal timescales from requirements definition to delivery of full operational capability. Military acquisition is not agile, except when urgent operational requirements force rapid tech change by taking acquisition shortcuts.

The military what? Comparing again to computing engines: even when computers were not at all like they are now, there were always lots of people queueing around them with workloads they wanted done now - and which computers could help them with as they were, not 200 years in the future. For QC, I do not see many users with workloads that can be served right now. I don't say it shouldn't be developed because of that - lots of things took time to bring to maturity - but we should be ready for it taking a long time to get to real usefulness.

If the military is paying for it then it's already in stage 3. Just because something isn't readily available to the public doesn't mean it's a failure - I can't buy a rocket but the various space programs still do lots of cool things and they have money to pay for it.

The military pays for a lot of stuff that then goes nowhere. Just as other venture funds do. You can't win without taking some risks. But that doesn't mean everything the army pays for is automatically a great thing.

So relatively we will all be about the same, but in absolute terms we will all be worse off?

Where have I heard this before...

From what I've parsed of the paper, this appears to fall firmly in the first category. This is a quantum computer simulating a quantum circuit; it's like saying that swirly water in a sink is achieving fluid dynamics supremacy, because we can't compute the results as fast as the water moves. Q12 touches on this.

I'd be really interested to hear what Gil Kalai has to say on this particular subject; he's generally pretty fair-minded while being a QC skeptic. He has a response up [1] but I haven't read through it yet.

[1] https://gilkalai.wordpress.com/2019/09/23/quantum-computers-...

There is market for two, maybe three quantum computers on earth.

640 qubits ought to be enough for anybody

There was a market for two, maybe three classical computers until they became smaller, cheaper, and more general purpose.

I believe GP is referring to Thomas Watson's famous quote from the 1940s: "There is a world market for maybe five computers."

That's the joke.

Ken Olsen, founder and CEO of Digital Equipment Corporation (DEC), in 1977: "There is no reason for any individual to have a computer in his home."

There's plenty of active research into quantum computing meets machine learning. Any real progress here would create quite a market.

> Amazon is selling QC time for $20/kilogate-bit

Except that you'd only be able to pay with physical cash.

If all goes well, Google may be the last company. They'll just gobble up everybody else and become Skynet.

Maybe. I’m pretty sure this FAQ is as much about quantum computing skepticism as it is disbelief that a giant corporation is really as totally responsible for some invention as its brand management would lead headlines to believe.

What are the theoretical models for the energy cost of computing on a qubit? I'll be excited for QC when there is a known way (even with some handwaving and future-tech plans) to compute a non-trivial result for a reasonable sum, such as "crack someone's private RSA key for under $10M of compute cost"

> when there is a known way (even with some handwaving and future-tech plans) to compute a non-trivial result for a reasonable sum, such as "crack someone's private RSA key for under $10M of compute cost"

If you're willing to admit handwaving and future-tech plans, then you can be excited now. It's estimated that a few million physical qubits (corresponding to a few thousand logical qubits) will be necessary to crack an RSA key. It's at least a decade away, maybe several, but few experts believe there are hard barriers to the number of qubits or the minimal cost.
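Taking those figures at face value (the exact error-correction overhead depends on the code and the error rates, so this is just the implied ratio; the midpoint numbers are my own assumption):

```python
# Implied error-correction overhead from the estimate above.
physical_qubits = 2_000_000  # "a few million" (assumed midpoint)
logical_qubits = 2_000       # "a few thousand" (assumed midpoint)

overhead = physical_qubits // logical_qubits
print(overhead)  # 1000 physical qubits per logical qubit
```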

> but few experts believe there are hard barriers to the number of qubits or the minimal cost.

Both of them are very flexible, but also definitely not independent.

This paper doesn't address the main road block to practical prime factorization, because it chooses computations which aren't fatally compromised by decoherence of the qubit representation. So we're about as far from useful quantum computation as ever, although the paper does speculate that maybe we can find an innovative quantum-computation algorithm which is robust to that decoherence, and yet is useful in some way.

This depends on your definition of "trivial". But if we restrict it to factoring, no, there is no known path to QC factoring at the moment, none at all.

Factoring has already been done. For the size of number factored, it used much more energy than it would have taken on a classical device, but it did utilize quantum effects / Shor's algorithm:


I don't understand. Factoring composite numbers into their primes is definitely a predicted use for QC in the future. Are you just saying the architecture of the machine powerful enough to do that is still uncertain?

I think it is analogous to fusion. Fusion definitely works (the Sun and hydrogen bomb) and power generation is a predicted use of fusion in the future. That doesn't mean we have known path to fusion for power generation.

To extend the analogy, quantum supremacy is like fusion as a neutron source. There are commercial fusion devices used as neutron sources.

> That doesn't mean we have known path to fusion for power generation.

Perhaps this was just a poor example, but we absolutely have a path to fusion power generation. ITER [0][2] is under construction now, and is expected to be capable of 10x power returns. DEMO [1] should have 25x.

[0] https://en.wikipedia.org/wiki/ITER

[1] https://en.wikipedia.org/wiki/DEMOnstration_Power_Station

[2] https://media4.s-nbcnews.com/j/newscms/2017_52/2273651/17122... (worth a click)

From the ITER wiki link:

> ITER Project was initiated in 1988.

> The expected cost of ITER has risen from US$5 billion to US$20 billion, and the timeline for operation at full power was moved from the original estimate of 2016 to 2027.

> A technical concern is that the 14 MeV neutrons produced by the fusion reactions will damage the materials from which the reactor is built

... definitely not clear to me that this is a "clear path forward" to fusion power, but I was encouraged to learn that a project of this sort was underway.

> ITER Project was initiated in 1988.

> The expected cost of ITER has risen from US$5 billion to US$20 billion

So a factor of 4 in 30 years. That's approximately inflation, no?
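As a rough check, using the $5B and $20B figures quoted above (plain compound-growth arithmetic; the function name is mine):

```python
def implied_annual_rate(start, end, years):
    """Compound annual rate that grows `start` into `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# ITER's cost growth: $5B -> $20B over roughly 30 years.
rate = implied_annual_rate(5e9, 20e9, 30)
print(f"{rate:.1%}")  # about 4.7% per year
```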

I did not think of that. An excellent point.

I'm betting Commonwealth Fusion will beat ITER to commercial energy production. Throwing stronger magnets at the problem seems like a promising way to achieve stable fusion.


Directly from ITER page:


"ITER will not capture the energy it produces as electricity, but—as first of all fusion experiments in history to produce net energy gain—it will prepare the way for the machine that can."

Let me repeat, the project will not produce any usable electricity. It's still an experiment.

And the start of the experiment is at the moment planned for 2025. The ITER project started in 2007. That's how long the preparations "just" for the experiment take, even though the previous experiments were done for decades.

Also from their FAQ:


"one of the missions for the later stages of ITER operation is to demonstrate the feasibility of one or more concepts of tritium production through the Test Blanket Module (TBM) program."

Namely, it's an experiment that "in the later stages" should manage to give results that would allow the development of the technology for tritium breeding. Without tritium breeding, fusion can't be used commercially.

The path for experiments is known, but it's still far from confirmed that the desired results are achievable, as we'll need a lot of new developments which we don't have at the moment for that.

Also DEMO can be fully developed only once ITER succeeds, it needs the results from ITER.

It's simply very hard, and achievable as an experiment. The still-open question is whether it's really commercially viable, in the sense that the "hard" stuff can become manageable enough to be useful.

Everything hinges on what you mean by the very vague term "known path". Every technology more than a few years out requires currently unknown problems to be solved. Even, say, the next generation of processors will require many engineer-years to solve currently unsolved problems, and the generation after that is pretty opaque right now. Still, we can have confidence that they will be developed.

Right but in the foreseeable future we might be able to factor numbers as large as like 24. Not 24 digits. 24 the number.

We still don’t know if the third era is possible.

Science : welcome!

I really enjoy seeing comments on here that fit the 3rd kind.

"Thing is ridiculous, that could never work because of reasons A, B and C."

"What are you talking about, here is project that does thing, it functions perfectly."

Nothing like reality proving someone's baseless naysaying wrong immediately.

Except reality has so far proved that it is possible to use QC for solving one QC-specific task that is hard to solve classically, and that's it. That's not what the "naysaying" was about - the "naysaying" was about the fact that we have no way to use QC for known classical tasks, and the path to such juicy targets as breaking commercial public-key encryption is entirely unclear. Virtually nobody says it would never ever happen - but so far it has not happened and will not happen soon, and reality has not proven anything contradicting that yet. When, in 2038 or in 2083, reality finally gets there, then your condescending attitude towards the naysayers will finally be completely warranted. But not yet.

I've been waiting for Scott Aaronson to put all of this into perspective since the first leaks about Google's quantum supremacy started appearing in popular media.

He has exceeded my expectations with this post, which cuts through all the hype to communicate exactly what the results of this experiment mean for the field. It's worth reading and sharing.

On second reading, I have but one trivial gripe:

"Enormity" implies moral reprobation, so it's a poor way of describing the significance of a computational discovery.

Isn't it just

Enormous : enormity :: huge : hugeness?

Both enormous and enormity come from the same etymological roots, enormis "irregular, huge", carrying a connotation of abnormal or irregular, in a negative or bad sense. Enormity specifically came to mean "extreme wickedness" in English, though that meaning is increasingly obscured by usage to mean simply "very big".

The most discerning usage would be that both mean "very large scale", but that enormity preserves the sense of bad at a very large scale.

That's become something of a losing battle as the synonymous usage has become an enormity.



Not quite.

Enormous : enormousness :: huge : hugeness.

Enormity = Immense scale of evil (e.g., the "enormity of the holocaust")

It can be used in that way, but the neutral usage is valid too. I'd even argue the "evil" undertones of the usage you describe borders on archaic.

"Enormousness" isn't a word in common usage that I'm aware of.

I have no dog in the fight, and wish "enormity" had never developed a normative undertone, but I strongly disagree that said usage is anything close to archaic.

If you Google "enormity," the dictionary definitions it displays before the results are: 1.the great or extreme scale, seriousness, or extent of something perceived as bad or morally wrong. "a thorough search disclosed the full enormity of the crime" 2. a grave crime or sin. "the enormities of the regime"

Merriam-Webster claims this is not the exclusive usage, and that enormity can mean "immensity" without normative implications when the size is unexpected. But the very example it cites, from Steinbeck, involves the "enormity" of a situation in which a fire was started.

That said, I agree that "enormousness" is an awkward word, which I do not use. I'm left to ponder the enormity of my own pedantry.

As a native English speaker, I can't say I've found this to be the case. "Enormity" does tend to be used for dramatic effect, most often on moral issues, but I don't think that makes Scott wrong to use it here.

I don't know if I've seen "enormousness" before this thread.

Since enormous is from Latin, its stems tend to be Latin. `ness` is generally only morphologically productive with Germanic roots: kindness, happiness, etc.

When I visited Iceland, I remember a sign in English that said a cliff was insafe [sic]. `in` being a Latin morpheme, and safe being Germanic.

Ah, I never realized in/un would be used with corresponding Latin/Germanic words.

But then, it seems there are quite a few un+latin (unreal, unbalanced, unadulterated, uncertain etc), even if I can't think of in+Germanic.

Good counter examples. Etymonline will break roots down for you.


Like most things in linguistics, the 'rules' are more a rule of thumb than the mathematical sort.

Germanic prefixes and suffixes seem to stick to pre-inkhorn roots better.


Thanks for the enlightening nitpick, this is one of those terrible/terrific things about English that I love/hate.

Preface: I know nothing about quantum computing.

What exactly is a qubit? I'm not asking what does it mean, because I know there's superpositions and all that jazz, but as in...like, in an electronic circuit, what is a qubit? Is it made out of logic gates? Which ones?

If we can make one qubit, can't we just make a bunch of them by copy and pasting circuits similar to how we used vacuum tubes in the 60s and 70s? How come our current limit is only around 54 or so?

Are qubits, and quantum computers by extension, not even electronic circuits? If so...what the hell are they?

Essentially, they use superconducting electronics to create a quantum circuit. It's not the same logic gates as a traditional computer chip and not based on the same physics. The details are difficult & messy.

There are several reasons it doesn't scale easily to more qubits, but you can imagine that you don't want the chip to be large (must be cooled to 25mK!) but the qubits should be spaced quite far apart so they don't influence each other. Also, it's not a very simple circuit, so the layout of the transmission lines ("wire on a chip") becomes difficult to manage. The last problem also scales badly with the number of qubits (the middle qubit becomes progressively harder to reach).

There is a paragraph in the (leaked) paper that describes their chip:

> In a superconducting circuit, conduction electrons condense into a macroscopic quantum state, such that currents and voltages behave quantum mechanically [2, 30]. Our processor uses transmon qubits [6], which can be thought of as nonlinear superconducting resonators at 5 to 7 GHz. The qubit is encoded as the two lowest quantum eigenstates of the resonant circuit. Each transmon has two controls: a microwave drive to excite the qubit, and a magnetic flux control to tune the frequency. Each qubit is connected to a linear resonator used to read out the qubit state [5].

If you want a physical picture, check out [6]: https://arxiv.org/abs/cond-mat/0703002

edit: source [6] is more appropriate and open to access

Holy hell.

> > In a superconducting circuit, conduction electrons condense into a macroscopic quantum state, such that currents and voltages behave quantum mechanically [2, 30]. Our processor uses transmon qubits [6], which can be thought of as nonlinear superconducting resonators at 5 to 7 GHz. The qubit is encoded as the two lowest quantum eigenstates of the resonant circuit. Each transmon has two controls: a microwave drive to excite the qubit, and a magnetic flux control to tune the frequency. Each qubit is connected to a linear resonator used to read out the qubit state [5].

I understand most of the words in isolation – and I know that they are valid, even if combining them in a useful manner to understand exactly what they are describing is eluding me.

But if there was ever a paragraph that sounded like pure technobabble, this is it. Replace the technobabble found in the star trek matter transporter with quantum lingo, and it would sound very similar.

"For a number of years now, work has been proceeding in order to bring perfection to the crudely conceived idea of a transmission that would not only supply inverse reactive current for use in unilateral phase detractors, but would also be capable of automatically synchronizing cardinal grammeters..."

Honestly, this is how I read every quantum thing. Not because I don't believe it's real, it's just so far outside of my area of education that it might as well be describing the turbo encabulator.

So I'm not super familiar with quantum computation, but I did do my undergrad research in QM (specifically, how chaotic behavior depends on the scale of nonlinear quantum systems) and I can take some informed guesses about what these words mean. It's actually super cool!

In a superconducting circuit

A circuit is a loop of something. Probably a solid material, like a metal or carbon, though it could be something more exotic. A superconducting circuit means the electrons in that material move without any resistance. This tells us the circuit is probably very cold--superconductors tend to break down at warm temperatures, like the ones in your house.

conduction electrons

Conductors have electrons in them. Some are "stuck" to atoms, others get to move around. Conduction electrons are the ones that move.

condense into a macroscopic quantum state

Macroscopic means "big", and for QM, "big" means, like, more than a handful of atoms or particles. At least as far as QM is concerned, everything--rocks, electrons, photons, people, etc., has a quantum state, but we use the phrase "quantum state" to mean a state that's, like, WEIRDLY QUANTUM. For instance, a pencil sitting on your desk is normal. A pencil that's like, half on your desk and half on mine is "quantum". Condensing means the electrons are going to change from doing normal individual electron things into acting like some sort of Big But Weirdly Quantum system, likely as a group. Like a crowd becoming a flash mob, they might do some sort of synchronized dance, only except the dance involves, say, every dancer doing two or three or ten dance moves at the same time.

such that currents and voltages behave quantum mechanically [2, 30].

Specifically, we're gonna be able to see quantum effects like superposition in Big Things like "current" and "voltage". The circuit might be in a combination of 3 volts and 5 volts at the same time. Also some of those voltages might be partly real and partly imaginary. Long story.

Our processor uses transmon

What the fuck is a transmon? I had to look this one up; it's a way of making these qubits less sensitive to voltage fluctuations.

qubits [6]

Qubits are quantum bits. A bit can be either 0 or 1. A qubit can be 0 or 1 or (and this is the quantum part) any state in between. Let's call the 0 state |0>, and the 1 state |1>. A qubit can be |1>, but it could also be (1/sqrt(2) |0>) + (1/sqrt(2) |1>). We call that a "cat" state, incidentally, because it's "half 1, half 0"--like Schroedinger's Cat, half alive and half dead. Again, the coefficients here are, in general, complex numbers, but we're gonna gloss over that.
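A minimal numeric sketch of the above (just linear algebra, nothing transmon-specific; the variable names are mine):

```python
import numpy as np

# Basis states |0> and |1> as 2-vectors; amplitudes are complex in general.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The "cat" state: (1/sqrt(2))|0> + (1/sqrt(2))|1>.
cat = (ket0 + ket1) / np.sqrt(2)

# Squared amplitude magnitudes give measurement probabilities.
probs = np.abs(cat) ** 2
print(probs)  # [0.5 0.5]

# A valid state is normalized: the probabilities sum to 1.
assert abs(probs.sum() - 1) < 1e-12
```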

which can be thought of as nonlinear

Nonlinear means they don't respond linearly to some input. Ever had someone do a series of small, mildly annoying things, and at some point you snapped and yelled at them? That's called "going nonlinear".

superconducting resonators

Oscillators are things that vibrate, like strings. Resonators have preferred frequencies to vibrate at. I don't exactly know what this means in this context, though. I'm guessing the circuit has some preferred frequencies it really likes to oscillate at.

at 5 to 7 GHz.

Voltages or currents or whatever are gonna go back and forth 5-7 billion times a second. That's about the same frequency as wifi signals, or microwaves.

The qubit is encoded

A qubit is an abstract thing on a whiteboard. There are lots of ways we could actually make a thing that looks like a qubit. "Encoded", here, means "turned into an actual machine you can build in a lab".

as the two lowest quantum eigenstates of the resonant circuit.

An eigenstate, loosely speaking, is a state that has nothing in common with any other eigenstate. For instance, if we wanted to measure a particle's position on a line, we could take x=0 as one eigenstate, x=1 as another, x=2.5 as yet another, etc etc. An infinite number of eigenstates. Quantum systems can be in any (well, normalized) sum of eigenstates. My cat loves being inside and outside at the same time, so they're always trying to occupy (1/3)|x=0> + (2/3)|x=4> + (2/3)|x=5> (squared amplitudes summing to 1).

An operator is a thing you can do to a quantum state. Think of operators like functions on values, if you're a programmer, or like matrices that can be applied to state vectors, if you know linear algebra. For instance, I might have a measurement operator, which I use to look at my cat. There's also a special operator called the Hamiltonian, which (loosely) tells you what a state will look like after an infinitely small step in time.

Each operator has associated eigenstates, and those eigenstates have a magic property: if you apply that operator to one of its eigenstates, you get back the exact same state, times some complex number, which we call an eigenvalue. This means eigenstates of the Hamiltonian are, in a sense, stable in time. When we talk about the eigenstates of a system, we usually mean the eigenstates of the Hamiltonian. They could also be talking about measurement eigenstates; I'm not sure.

For the Hamiltonian, eigenvalues are, for Really Fucking Cool Reasons, energies. When we talk about "the two lowest quantum eigenstates", we mean the two states with the lowest energy. So maybe the circuit's eigenstates are, I dunno, 5 GHz, 6 GHz, 7 GHz, etc. We'd take 5 and 6 as our |0> and |1> states.
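To make "the two lowest eigenstates" concrete, here's a toy diagonalization (a made-up 4-level Hermitian matrix, not the real transmon Hamiltonian):

```python
import numpy as np

# Toy 4-level "Hamiltonian": diagonal energies plus a weak nearest-level coupling.
H = np.diag([5.0, 6.0, 7.0, 8.0]) + 0.1 * (np.eye(4, k=1) + np.eye(4, k=-1))

# eigh returns eigenvalues of a Hermitian matrix in ascending order.
energies, states = np.linalg.eigh(H)

# Encode the qubit in the two lowest-energy eigenstates.
ket0, ket1 = states[:, 0], states[:, 1]

# The eigenstate property: applying H gives back the same state times its energy.
assert np.allclose(H @ ket0, energies[0] * ket0)
print(energies[:2])  # the two lowest energies, close to 5 and 6
```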

Each transmon has two controls

A control is a thing we can use to change the transmon.

a microwave drive

Something like the microwave in your kitchen, but very small, and probably expensive.

to excite the qubit

This probably means changing the qubit from |0> to |1>. Microwaves carry energy, right? That's how they heat food. If they microwave the circuit at the right frequency, that microwave energy probably helps it jump from a lower frequency/energy to a higher one.

and a magnetic flux control

This feels like something specific to transmons. Flux has to do with the density of stuff moving through a surface. Magnetic flux probably has to do with how strong and close field lines are in some part of the transmon machinery.

to tune the frequency.

How fast the circuit wobbles depends on a magnetic field, I guess?

Each qubit is connected to a linear resonator

Huh. So we've got nonlinear resonators (the qubits) connected to linear resonators (some sort of measurement device?)

used to read out the qubit state

We need a way to actually look at the qubits, and I guess the linear resonator does that. I assume that the linear resonator is isolated from the qubit during computation, and once the computation is over, it gets connected somehow, and vibrates at the same frequency as the qubit. That process probably "spreads out" the quantum state of the system, pushing it REAL CLOSE to an actual eigenstate of the measurement system, which looks like a probabilistic measurement of the actual qubit state.

Like... my cat could be 3/4 inside and 1/4 outside, so long as the room is really dark. If I turn on the light, suddenly my cat is coupled to a MUCH BIGGER system--the room, and that "quantum" state gets diffused into that larger system, in what looks like a measurement like "cat definitely inside". I don't know a simple way to explain decoherence, haha, but if you like math, try Percival's "Quantum State Diffusion".
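The "looks like a probabilistic measurement" part can be sketched by sampling squared amplitudes (a toy Born-rule picture, not a model of the actual readout resonator):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A qubit leaning toward |1>: amplitudes sqrt(1/4) and sqrt(3/4).
amplitudes = np.array([0.5, np.sqrt(0.75)])
probs = np.abs(amplitudes) ** 2  # Born rule: [0.25, 0.75]

# Each readout yields 0 or 1 with those probabilities; repeat many times.
outcomes = rng.choice([0, 1], size=10_000, p=probs)
print(outcomes.mean())  # close to 0.75
```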

Hope this helps, and I also hope I got at least some of this right. Maybe someone with a better/more recent command of QM can step in here.

Very good explanation. Even if it's not 100% correct (and I cannot say yes or no), it gives a good overall introduction to the concepts involved.

I'm not a quantum physicist either, but I did study quantum mechanics in college for a bit!

> A qubit can be |1>, but it could also be (1/sqrt(2) |0>) + (1/sqrt(2) |1>). We call that a "cat" state, incidentally, because it's "half 1, half 0"--like Schroedinger's Cat, half alive and half dead. Again, the coefficients here are, in general, complex numbers, but we're gonna gloss over that.

In case anyone is interested in not glossing over this part, this [0] lecture by Scott Aaronson is an excellent introduction to the crazy world of complex probability amplitudes. It doesn't assume much more than some basic linear algebra, and does a good job of developing at least a little bit of an intuition for some of the concepts in Aphyr's comment.

[0] https://www.scottaaronson.com/democritus/lec9.html

At the extreme, it approaches art: https://www.reddit.com/r/VXJunkies/

We do want the chip to be "large" eventually. The scalability doesn't have anything to do with those factors, however. Refrigerators are pretty large and these devices are really, really tiny; also, qubits shouldn't be spaced "far" apart, as this would kill all the (controllable) couplings.

I was a little too fast and loose in my previous post. You are correct that we want a large [number of qubits] on a chip eventually. The chips are tiny and the fridges large enough for now (50 qubits). It's not clear to me that they can handle thousands of qubits as imagined. The cryostat will undoubtedly be able to house the chip, but the extra electronics / cables must run in there as well. With 1000 qubits and 2 cables per qubit this will be a major challenge.

You are correct about the coupling of the qubits, I was simplifying too strongly there.

I do stand by my scaling point (center qubits harder to reach) but I'm open for counter arguments.


There is a scalability problem but due to different reasons. See my other comment in this thread for details.

Classical circuitry is an issue, but not as much as you think. What happened is that Martinis' group and others moved forward with a quick & dirty design which worked well for their device but can't be scaled (they basically didn't have the expertise that the silicon people had). Nevertheless, it's not a fundamental problem; the circuitry for silicon-based spin qubits never had this problem, for example, and xmons won't either. They just keep iterating as the number of qubits increases. It's the least of their worries regarding scalability; there are far bigger problems there.

I agree on your other points and trust that you are more qualified to judge what the biggest obstacles are.

Do you have a paper I can look into that goes into the chip architectures in more detail (not specifically for this new device)? Otherwise I'll await the science / nature paper of this demonstration.

The classical circuitry is mostly outside the cryostat, I assume, since the signals are in the GHz range and LNAs are available. Do you know if the microwave readout signal is frequency multiplexed to reduce cabling?

When you say chip architecture, I assume you mean how to assemble together qubits like an integrated circuit. Here's one proposal for silicon based qubits: https://www.nature.com/articles/s41467-017-01905-6

Microwave signals are typically used for control rather than readout. I don't know if this experiment does it or not, and I am not an experimentalist. Multiplexing is more typically required for reducing timing errors of simultaneously driven signals (for better synchronization), and it really depends on the device and the mode of operation, plus whether the experiment needs it or not. The same experimental group may do it in one experiment and not in another, despite using the same device.

Thanks for the reference and your great insights. On reading the new Google paper more closely, they do FM the readout.

Short answer, a qubit is a unit vector in a 2D complex Hilbert space. Now, that doesn't actually say much about why we care or how they're useful. In practical terms, you can think of qubits as complex unit vectors along two axes, with one axis corresponding to |0⟩ (the zero qubit) and one axis corresponding to |1⟩, or the one qubit. So for example, you could have a qubit called |+⟩, which is just shorthand for (|0⟩ + |1⟩)/sqrt(2).

Measuring a qubit in a basis collapses it to one of the basis vectors (e.g. Schrödinger's cat must be alive or dead once we open the box), with probability equal to the squared magnitude of its inner product with that basis vector. This is why we need a Hilbert space and not just any old vector space.

Finally, to answer your question about gates, a quantum gate is basically a unitary matrix, i.e. a matrix that preserves the norm of its inputs. You can feed qubits into these matrices by themselves or, more often, many at once, by using something called the tensor product of the qubits - this is where the math gets slightly more involved.

The long and short of it is that we can induce correlation patterns between qubits using these gates (aka quantum entanglement) and orchestrate circuits of interference patterns where the wrong answers cancel each other out and the right answer gets reinforced so that we measure it at the end - unfortunately, this is where my knowledge breaks down as a beginner. My apologies if I accidentally handwaved anything important but hopefully you get the gist.
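For anyone who wants to see that linear algebra concretely, here's a toy sketch in plain Python (my own illustration, not any real quantum library): a Hadamard gate turning |0> into the |+> state, with outcome probabilities computed via the Born rule.

```python
from math import sqrt

# A qubit state is a length-2 list of complex amplitudes [a0, a1]
# with |a0|^2 + |a1|^2 == 1.
ket0 = [1 + 0j, 0 + 0j]                # |0>

H = [[1 / sqrt(2),  1 / sqrt(2)],      # the Hadamard gate:
     [1 / sqrt(2), -1 / sqrt(2)]]      # a 2x2 unitary matrix

def apply(gate, state):
    # Matrix-vector product: the gate acting on the state.
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

def probabilities(state):
    # Born rule: the probability of each outcome is |amplitude|^2.
    return [abs(a) ** 2 for a in state]

plus = apply(H, ket0)       # the |+> state, (|0> + |1>)/sqrt(2)
print(probabilities(plus))  # ~[0.5, 0.5]
```

Note that applying H again would take |+> back to |0> with certainty, which is the kind of amplitude interference that has no classical-probability analogue.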

This is like saying that voltage is a scalar real number, which explains nothing about the physics of electromagnetism, and in my opinion is probably the worst way to explain physics of anything. It especially tells nothing useful given the question is about the stuff that a qubit is made of (which is different from "how do you mathematically represent an ideal qubit on paper?").

And going beyond that, as Peres puts it, "Quantum phenomena do not occur in a Hilbert space. They occur in a laboratory."

> Schrödinger's cat must be alive or dead once we open the box

I recently had a conversation about this in another thread. It seems to me, and nobody tried to convince me otherwise, that the Cat would be the Observer. Therefore it would be dead, NOT dead AND alive, as soon as it observes the poisonous gas in its box.

So this is a little nuanced. It's true that quantum systems collapse on "observation", but that doesn't actually mean "observation by a sentient entity". It could really just be any interaction with the outside environment. (This is why qubits have to be kept incredibly well-isolated.) We don't really fully know exactly how this collapse works, and related speculation is generally classified under the measurement problem [1]. But it's true that one of the key points of Schrödinger when proposing his thought experiment was that the notion of measurement or observation was not fully defined under the Copenhagen interpretation.

Side note, it's not that the cat is observing poisonous gas, but rather, that a Geiger counter is set to detect whether a radioactive atom decays or not and triggers the release of some poisonous gas if so. So, classically, Schrödinger's cat would be either alive or dead 50% of the time, not 100% dead. There are plenty of alternative ways to reconcile this classical view with quantum mechanics. Perhaps the simplest and most well-known is the many-worlds interpretation [2], which states that both events occur, just in different timelines, and we don't know what timeline we ended up in until we open the box. (Of course, it is ridiculous to speculate as to which timeline "we" end up in before the experiment is carried out, because the people in both timelines would still be "us" - this can get awkward to think about.)

[1] https://en.wikipedia.org/wiki/Measurement_problem

[2] https://en.wikipedia.org/wiki/Many-worlds_interpretation

That version of the thought experiment is known as "quantum suicide": https://en.wikipedia.org/wiki/Quantum_suicide_and_immortalit...

The point of the thought experiment was to highlight the measurement problem in QM interpretations and the difficulty in defining the observer.

sure but that’s still all one quantum system from the perspective of any other observer outside of the box, including other cats.

You can do it with electron spin or photon polarization or by any number of properties, but it's the state of a fundamental particle.

It's an entirely new type of computing apparatus, using the fundamental state of particles. No electron circuits, those won't work.

And it's expensive because of the above. These things are massive, have to be kept at cryogenic temperatures, and isolation gets harder the more particles you have.

I thought a qubit could also be implemented with current going around a superconductor loop. Is that incorrect?

You can implement them like that. Anything that has a quantum state that doesn’t decohere too fast will work. The hard part thus far has been that almost everything does decohere too fast.

No, that's indeed a quite popular approach.

Extremely simplified:

>> in an electronic circuit, what is a qubit?

It's not an electronic circuit, it literally is an electron or a photon. And it's using that particle's properties to do "superposition and all that jazz".

You can't put too many next to each other, because they start interacting; that's what electrons do when they get close.

Quantum computing for the very curious [1] has been very useful to me, for getting at least a basic idea as to what exactly a quantum computer is, what a qubit is, and how they are manipulated.

[1] https://quantum.country/qcvc

Physicist here. In practice, a qubit is a two-level physical system. It can be the spin state of an electron, the polarization of a photon, the lowest two energy states of an atom or of an electron in a quantum dot/well trap potential, the charge state (called a charge qubit) -- whether you have 0 or 1 electrons in it -- etc. (if you have 3 levels, it's called a qutrit, and for d levels, a qudit). This experiment uses charge qubits (a special variant called the transmon, which has some robustness against charge noise by design [it operates at a voltage level that is insensitive to 1st-order fluctuations in the electric field, called the "sweet spot"]).

The main problem is achieving full control of these systems, which is extremely hard, because there are certain things (some random/stochastic) that you can't control at all and you have to fight+race against their influence:

- qubits are tiny, and the energy splitting between these two states is typically minuscule: this means even a small vibration from a sneeze miles away can make the qubit flip.

- qubits do not live in vacuum, they are typically hosted in solid-state systems and the qubits are coupled to their hosting environment, which have their own moving parts (two-level fluctuators which lead to charge noise, phonons which also couple to electrons typically via spin-orbit coupling, spinful defects in the material which have their own dynamics, etc etc) that you can't really control. it's extremely difficult to achieve full control of a qubit in the presence of things that you can't control and random in nature.

- if the qubit is the lowest two levels of a system with higher energy levels, one also needs to worry about leakage errors into those higher states

- there are ways of suppressing the influence of such unwanted interactions (dynamically corrected gates + quantum error correction codes) given that their strength is below certain thresholds. going below those thresholds is again an enormous engineering/material science problem (extremely low temperatures, isolation from vibrations, low/high-pass filters for the classical circuitry which is used to control/drive the qubits via electric/magnetic fields, design of the device itself which typically hosts two-level fluctuators, etc etc). this problem becomes harder in general as you increase the number of qubits though.

- to do anything non-trivial, you need more than one qubit, with controllable couplings between them (so you can't put them too far apart, which would make it impossible to couple them). this doesn't work perfectly in practice: you can't completely control or turn off the couplings (a problem called cross-talk), which again leads to errors. so it doesn't work quite like modular classical circuit elements that you can "copy & paste", because the abstractions over the low-level, nitty-gritty physics of the underlying material fail for all these qubits.

And there's the pesky issue of readout errors, which tend to be bad.

Beautifully explained, thank you!

I'm in the same boat. A qubit can have more than the two states that a transistor can have, got it.

Okay, now what can we do with that?

"crack encryption by simulating a state!" yeah but what? is that something I should be concerned about now? "hahaha no no no silly normie we'd need two thousand qubits for that, this machine only has 53!" oooookay, and you did that number in your head, how??? "we just solved the first unsolvable problem that a mere bit bound supercomputer couldn't solve, look at this math formula!" but that didn't explain "we are celebrating, are you not celebrating"

There just seems to be a lack of non-introductory but non-PhD level information. Where is the "explain it like I've been accepted into college at all".

Since we're looking at Scott Aaronson, you might want to check out "Quantum Computing Since Democritus". It gives a good explanation of the math behind qubits and how they can be used. Best intro I know of.

thank you for that and not trying to explain it in an additional convoluted way

Schrödinger's cat in an unopened box is 1 qubit: it's alive and dead at the same time. When the box is opened to observe the result, the quantum state "decoheres", decaying to a 1-bit result.

Now imagine 53 such boxes, interconnected by quantum gates. The 53 qubits combined are in a superposition of 2^53 states at once. The gates can be set up such that some combinations, like "cat 1 alive", "cat 2 dead", etc., are a much more likely result than others after the boxes are opened. And all this computation is done in one pass, whereas a classical computer must do up to 2^53 steps to get the same result.

To have 53 cats all undisturbed in these dead/alive states so that computation is done without errors is very technically challenging :)

I don’t think the cat thing helps to explain this, especially with the finality association of “dead”.

It can't really be explained, we kind of accept it works like it has been in both states at once until the box was opened. Similarly, we don't really know how to explain how particles travel by both slits at once in the double-slit experiment.

Idk, it seems to me like https://www.smbc-comics.com/comic/the-talk-3 is a pretty good explanation.

“A new ontological category”.

What’s the problem?

satire and educational

this has been the best thing I've seen so far

Here, finding the circuit that saves more cats from death is already a useful thing. Of course, I don't see an obvious algorithm to search for that circuit, so it would have to be brute force. But now this computer makes even brute force possible.

Do you know linear algebra?

If you have a collection of n qubits, the state of those corresponds to a unit vector in a 2^n dimensional space over the field of complex numbers.

You can do certain linear operations (iirc, only unitary ones. I wouldn’t claim all unitary ones either) to change the state. You can also do a measurement, which has a random outcome, following the Born rule.

So, it isn’t really just “each of them has more than 2 possible states”. That can be the case with something classical, and has been done before (there have been ternary computers. Binary computers won out. Ternary computers (or quaternary, etc. etc.) wouldn’t be particularly special.)

You can’t just think of each of the qubits as having a state always entirely independent of the rest of the qubits.
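To make that "not independent" point concrete, here's a toy sketch in plain Python (my own names, nothing standard): two independent qubits combine via a tensor product, while a Bell state lives in the same 4-dimensional space but cannot be factored into two single-qubit states at all.

```python
from math import sqrt

def kron(u, v):
    # Tensor product of two state vectors: the combined state
    # of two *independent* qubits.
    return [a * b for a in u for b in v]

ket0, ket1 = [1, 0], [0, 1]

# Two independent qubits give a product state in 4 dimensions.
product = kron(ket0, ket1)      # |01> = [0, 1, 0, 0]

# A Bell state, (|00> + |11>)/sqrt(2). No choice of two single-qubit
# states has this as their tensor product -- that's entanglement.
bell = [1 / sqrt(2), 0.0, 0.0, 1 / sqrt(2)]

# Born rule: measuring both qubits yields 00 or 11, each with
# probability 1/2, and never 01 or 10 -- perfectly correlated.
probs = [abs(a) ** 2 for a in bell]
print(probs)                    # ~[0.5, 0.0, 0.0, 0.5]
```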

> What exactly is a qubit?

A bit is like a boolean type, has the values of true and false. Or you treat those values as 0 or 1, then gather a bunch of bits to build useful numbers.

A qubit is like a pair<number, number> such that these numbers MUST satisfy the following constraints:

    |pair.left|^2 + |pair.right|^2 = 1
    pair.left and pair.right can be any complex number
Why such a composite type with weird constraints, you may ask? Because that's how the properties of really small particles behave in the real world. So the hope is, maybe if we can build our software using this weird data type called a qubit, we can implement computation on quantum hardware without abstracting every problem through a dumb type like a boolean or its aliases/collections.
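As a toy illustration of that constraint (my own sketch; note that since the pair members are complex, the normalization uses absolute values):

```python
from math import isclose, sqrt

def is_valid_qubit(left, right):
    # For complex amplitudes, the normalization constraint is
    # |left|^2 + |right|^2 == 1 (absolute values, not plain squares).
    return isclose(abs(left) ** 2 + abs(right) ** 2, 1.0)

print(is_valid_qubit(1 / sqrt(2), 1j / sqrt(2)))  # True: a valid superposition
print(is_valid_qubit(1.0, 1.0))                   # False: not normalized
```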

Remember that classical computers use a clock to flip bits over time.

A similar quantum computer would manipulate qubits instead.

A bit has a storage capacity of 2 distinct values, i.e. 1 bit of information.

A qubit has the storage capacity of 2 complex numbers, which corresponds to 4 floats, which is at least 16*8 bits of information if we are conservative about our assumptions.

> Is it made out of logic gates?

Kind of. Most of the current logic gates are built with semiconductors. It means by applying different voltages/currents/flux etc to different parts of a solid material, we can alter what we measure in some other part of the same material.

A quantum logic gate uses the same principle, but in order to achieve the desired speed and storage advantages it uses an object with a measurable property that at least approximately behaves like a quantum mechanical object. Common semiconductors are too crowded with atoms to make good quantum materials: the atoms touch each other and are almost always exposed to air, and their running temperatures are undesirably high.

A really dark (literally without a single photon) vacuum chamber that holds a really small amount of floating matter in the middle, frozen with lasers down to 0.000...1 Kelvin, would make a good example of a stable but expensive qubit. We can measure this qubit by destroying its state, i.e. by applying a magnetic field and measuring the emitted photon's location, polarization, or frequency. The problem is, copy-pasting this device to build a circuit is really hard due to logistics and the auxiliary machinery required to keep the whole state stable.

> How come our current limit is only around 54 or so?

That's not a fundamental limit but an engineering one, due to the issues I mentioned above. The bigger the device gets, the harder it is to maintain a stable state. The methods currently used to reach this limit really resemble what was used in the 60s. History is repeating itself with a small twist: the running temperature is extremely low this time. Not liquid-nitrogen low, but compressing-atoms-by-sniping-them-from-a-distance-with-lasers-from-multiple-directions low.

There is also one more problem that is unique to quantum computers. You have to measure the same qubit state multiple times to be able to estimate those complex numbers, since their values are determined statistically. You either represent a qubit with multiple qubit-like devices, or you use the same device and repeat the measurement. Each approach comes with its own drawbacks.
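A toy model of that statistical readout (plain Python, my own sketch): you never read the amplitudes directly; each measurement gives one bit, and you estimate |amplitude|^2 from many repeated prepare-and-measure runs.

```python
import random

# Estimating |amplitude|^2 by repeated preparation + measurement.
# For the |+> state the true probability of reading out "1" is 0.5.
p_one = 0.5
shots = 100_000

random.seed(0)           # seeded so the sketch is reproducible
ones = sum(random.random() < p_one for _ in range(shots))
estimate = ones / shots
print(estimate)          # close to 0.5; more shots give a better estimate
```

The standard error shrinks like 1/sqrt(shots), which is one reason supremacy-style experiments sample circuits millions of times.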

> Are qubits, and quantum computers by extension, not even electronic circuits?

Even today, non quantum circuits are sometimes non electronic in some of their parts. Fiber optics, supersonic emitters/receivers and photoelectric sensors are good examples.

To this day, it is not clear whether the first consumer quantum CPU will be entirely electronic or not.

>A bit has the storage capacity of 2 distinct bits of information. A qubit has the storage capacity of 2 complex numbers, which corresponds to 4 floats, which is at least 16*8 bits of information if we are conservative about our assumptions.

This whole explanation makes qubits sound just like a combo-pack of bits, or like something you'd find in an analog computer. They're much more powerful than that because they can be put into superpositions and entangled together while you continue to do calculations with them. Using superpositions and entanglement across qubits can let you solve some problems with lower-complexity algorithms than you could with classical computers.

It's like saying a car is defined by four tires being in close proximity, while skipping the fact that the usefulness of a car comes from the tires being connected together and steered which lets you do things that no amount of unconnected tires could do.

And also, qubits can't be used as compressed data storage. Despite the fact that it takes two real numbers to specify the state of a qubit, you only get one bit out when you measure it, with the rest of the information being destroyed.

The whole point of the quantum supremacy test is to show that qubits can in fact store distributions in their state, which is another way of saying that some carefully crafted equations are storable inside qubits, as long as you are interested in an approximate, numeric output. One could argue that a MIDI audio file or an SVG graphic is exactly that: a carefully crafted equation which, even in its lossy form (mp3, jpg), is able to convey meaning.

> A qubit has the storage capacity of 2 complex numbers, which corresponds to 4 floats, which is at least 16*8 bits of information if we are conservative about our assumptions.

The fourth float would seem to be almost totally determined by the first three. If I'm visualizing correctly, there are at most 2 values it could possibly be.

Yes, the constraints reduce the capacity from the ideal limit of 4 real numbers.

(a + bi) type of complex numbers are indistinguishable from (b + ai) ones. Deciding the value of b immediately reduces the number of possible values for a to 2, which is a and -a respectively. Luckily, the real number line is continuous. We can accept such sacrifices without too much precision loss.

Also I should note that my calculations assumed a pair of maximally entangled particles per quantum circuit element since that's the most straightforward way to harvest quantum information with minimum number of objects.

> Kind of. Most of the current logic gates are built with semiconductors. It means by applying different voltages/currents/flux etc to different parts of a solid material, we can alter what we measure in some other part of the same material.

This kinda sounds like translinear circuits, except a couple orders of magnitude more boutique.

For example, an electron or a polarized photon. They take the value 0 or 1 when measured, but when not measured they can be in superposition states. Quantum physics forbids copying a qubit to create another, but you can initialize them en masse to be in a superposition state. These things are too tiny to be easily manipulated, so 53 is quite an accomplishment.

>>If we can make one qubit, can't we just make a bunch of them by copy and pasting circuits similar to how we used vacuum tubes in the 60s and 70s? How come our current limit is only around 54 or so?

>Quantum physics forbids copying a qubit to create another, but you can initialize them en masse to to be in a superposition state. Those things are too tiny to be easily manipulated so 53 is quite an accomplishment.

Quantum physics forbids copying the value of a qubit, but the poster was asking why we couldn't just make more of the device that implements a qubit. The big issue is that you want the qubits to be entangled together and it's hard to prevent decoherence as you make a larger device with more qubits.

> If we can make one qubit, can't we just make a bunch of them by copy and pasting circuits similar to how we used vacuum tubes in the 60s and 70s? How come our current limit is only around 54 or so?

This is a very good question! And as far as I can tell, no one has actually answered it yet.

In classical physics, which suffices to explain circuits made of vacuum tubes, the state of a system is fully captured by the states of its parts. Like, if you want to know the state of three bits, I just have to tell you what the first bit is (0 or 1), and the second bit, and the third one. Basically everything we interact with has this property: if you fully describe the state of each part of a thing, you have described the state of the whole thing.

But quantum mechanics is... weirder than this. In quantum mechanics, to describe the state of a system you have to give one complex number per classical state, such that the sum of the squares of the absolute values of the complex numbers adds up to 1. These complex numbers roughly correspond to the "probability" that the system is in that state (but not quite, it's more complicated than that).

So in quantum mechanics, to describe the state of three bits, you have to give eight complex numbers n1, n2, ..., n8, one for each of the classical states of the bits 000, 001, 010, 011, 100, 101, 110, 111, where the sum of the squares of the absolute values of n1, ..., n8 adds up to 1. That's a lot more information than 3 bits. (Imagine if you had 54 bits... you'd need 2^54 ~= 1.8*10^16 complex numbers to describe them.)
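Just to put numbers on that blow-up, here's a quick back-of-the-envelope in Python (the 16 bytes per amplitude is my assumption: two 64-bit floats per complex number):

```python
# Rough classical cost of storing an n-qubit state vector:
# 2**n complex amplitudes at ~16 bytes each.
for n in (3, 20, 54):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n} qubits: {amplitudes} amplitudes, ~{gigabytes:.3g} GB")
```

At 54 qubits this lands in the hundreds of petabytes, which is why classically simulating these circuits exactly is so painful.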

Technically, everything in the whole world, including you, is described by the laws of quantum mechanics. So why don't we see weird quantum effects all of the time? Quantum systems are very fragile: whenever they interact with the outside world (say a photon from the air bounces off of something in the system), the system "collapses", and then behaves as classical physics would predict. (Note that this is in accordance with the quantum prediction. The system goes from "only describable using difficult quantum mechanics" to "describable using quantum mechanics, but it'll just say the same thing as classical physics, and classical physics is simpler so you should just use that".)

So here's what a qubit is: it's just a regular bit that has been so insulated from the outside world that classical physics doesn't suffice to describe it. You won't find qubits on a regular circuit board, though, because they'd interact with the circuit board, and then you're done.

And this is why making a 54-qubit quantum computer is so hard. You need to keep all of the qubits isolated, because if any of them interacts with the outside world (think the air in the room, or a single photon, or the substrate the qubits sit on), then the whole system "collapses".

> If we can make one qubit, can't we just make a bunch of them by copy and pasting circuits similar to how we used vacuum tubes in the 60s and 70s? How come our current limit is only around 54 or so?

(I'm not an expert in this area, so the following may be incomplete, or limited to only certain kinds of quantum computation, or worse.)

The kinds of computation that qubits can beat a classical computer on require that the qubits be entangled. If the qubits are not entangled, you can't do better than regular bits.

Briefly, if two (or more) quantum systems are entangled, and you make certain measurements that have a random outcome on one of the systems, and then measure the same property on the other system(s), there will be correlations that you would not get if the systems were not entangled. The entangled systems act as if, whenever you measure one and it randomly chooses a value for the property you measured, that result is somehow communicated to the other entangled systems, which make sure that when they are measured they will give results with the appropriate correlation.

You might think that this could be explained if the systems had some internal variables that were set when they became entangled that determined what "random" values they would pick when later measured, but there are experiments that have shown that this is not so. The systems are truly making their random choice at the time of measurement as far as we can determine.

This happens even if, after you entangle the systems, you separate them by a great distance--so far that between the measurements on the separate systems there is no time for any communication between them (or, rather, no time for any communication limited by the speed of light--I believe there have been experiments showing that IF there is communication, it is at more than 10000 times the speed of light). (This communication, or whatever it is, cannot be used to send messages faster than light. All it can do is make the correlations work out for entangled systems.)

Anyway, the thing about entangled systems is that as soon as you make a measurement of the entangled property, you lose the entanglement. Your particle whose spin, say, was entangled with another particle's and was 50/50 between spin up and spin down becomes, once you actually measure the spin, a non-entangled particle whose spin is a concrete value, either up or down. Measure it again, and you get the same value.

When I say "you make a measurement", I don't specifically mean you, or any other human, or any human instruments. For purposes of quantum mechanics, a measurement is anything that makes the system reveal a value. So if you have a particle with entangled spin with some other particle, and some random passing particle happens to interact with yours in a way that depends on the spin of your particle--that's a measurement and you've lost your entanglement. (Your particle might now be entangled with that random passing particle, but it is no longer entangled with the particle you intended it to be entangled with).

If you are trying to do a quantum computation on 50 qubits, you have to get them all entangled, and then you have to keep them from interacting with anything that might inadvertently do a measurement, long enough for them to do their quantum computation, after which you can measure the result (which finally ends the entanglement).

This turns out to be hard, because there are a lot of things in the universe that will effectively do a measurement. Any random particle bumping into one of yours with too much energy can do it. Some random passing electromagnetic wave can do it. The more things you need to entangle and keep entangled, the harder this is, and 50-ish is the current limit.
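To see those correlations concretely, here's a toy Born-rule sampler in plain Python (my own sketch, using the standard singlet state, not anything specific to this experiment): each detector's outcome is individually random, but the pair always disagrees.

```python
import random
from math import sqrt

# The singlet state (|01> - |10>)/sqrt(2): measuring both qubits
# in the same basis always yields opposite results.
amps = [0.0, 1 / sqrt(2), -1 / sqrt(2), 0.0]     # amplitudes for 00, 01, 10, 11
probs = [abs(a) ** 2 for a in amps]              # Born rule: ~[0, 0.5, 0.5, 0]

random.seed(0)
outcomes = random.choices(["00", "01", "10", "11"], weights=probs, k=1000)
print(all(o in ("01", "10") for o in outcomes))  # True: outcomes always disagree
```

Note this sampler only reproduces same-basis correlations; the part that rules out pre-agreed "internal variables" (Bell test violations) shows up when you measure in rotated bases, which this sketch doesn't model.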

TL;DR: A quantum computer is a device that uses the measurement of quantum properties to do computation. There are many ways to implement one, depending on the type of entangled particles being used, from crystals that make entangled photons to superconducting circuits that entangle electrons. The same is true for binary computers, which can be made from electrical devices (transistors), valves with air pressure, or balls falling down wooden ramps.

Longer version

An observation about quantum behavior is that there are only certain properties that you can measure for the really, really small. These include things like mass (total energy), charge (intrinsic amount of electromagnetic strength), and spin (willingness to change direction in the presence of an electromagnetic field). It turns out that when you measure these properties, the measurements behave in non-intuitive ways.

The act of measuring the spin of a particle (which could be in any direction) is really the act of asking, "is this particle aligned with my detector?" The result will always be either aligned up or aligned down. It will be randomly about 50/50 up and down, also. This is not that surprising because the spin must align one direction or the other. The crazy part comes with the fact that you can entangle two particles.

Entangled particles can be sent through different detectors, and one thing will always be true: while any particular outcome is random, the detectors will always generate opposite results. The temptation is to say, "well, they were generated from the same source, so they just have opposite starting positions." Long story short, this has been proven impossible. Instead, there is some fundamental behavior in quantum mechanics that says certain types of activities with entangled particles have a correlation that holds as long as the entangled particles are not disrupted. In this case, particles sent to separate detectors will always have opposite results.

A quantum computer uses these correlation truths about measurements to do computations. A qubit is the concept of a quantum bit: an entity that represents one unit of entanglement. Just like a bit, a qubit has many different possible implementations. You can entangle photons, electrons, and whole atoms. Each of these systems requires a specific implementation, just like electronic or mechanical computers do.

Remember that one detail about "if they are not disrupted"? Yeah, it turns out that it takes a hell of a lot to create an environment that doesn't destroy the coherence of the entanglement. You have to design something that lets you set up the particles in a starting state, hold those particles in an entangled state with no disruptions, and have a detector to determine the final state. Quantum mechanically, these are generally opposing goals.

Just like a bit, a qubit is an abstract representation of information, which can be physically instantiated in different ways.

"[T]hose believing that a QC can manipulate or maintain huge objects "free of cost" (i.e., at unit cost) should provide a convincing explanation to this fantastic speculation. Being skeptic of this (rather over-simplified and counter-intuitive) speculation seems to be the default and natural position." (From http://www.wisdom.weizmann.ac.il/~oded/on-qc.html )

>Q12. Even so, there are countless examples of materials and chemical reactions that are hard to classically simulate, as well as special-purpose quantum simulators (like those of Lukin’s group at Harvard). Why don’t these already count as quantum computational supremacy?

>Under some people’s definitions of “quantum computational supremacy,” they do! The key difference with Google’s effort is that they have a fully programmable device—one that you can program with an arbitrary sequence of nearest-neighbor 2-qubit gates, just by sending the appropriate signals from your classical computer.

>In other words, it’s no longer open to the QC skeptics to sneer that, sure, there are quantum systems that are hard to simulate classically, but that’s just because nature is hard to simulate, and you don’t get to arbitrarily redefine whatever random chemical you find in the wild to be a “computer for simulating itself.” Under any sane definition, the superconducting devices that Google, IBM, and others are now building are indeed “computers.”

This is the core of it to me. It's a question of 'some people’s definitions of "quantum computational supremacy."' Many people say that the definition here is a crappy one; sure, this shows a form of it, but not the kind that justifies the hype. Sure, it's fully programmable, but not so programmable as to do anything that anyone cares about (even a teeny tiny few-bit version of something people care about) better than we can otherwise.

To appeal back to his analogy to the Wright brothers, it's like they carved a frisbee from a stick while working towards airplanes. It's amazing that they carved a log into a neat shape you can throw further than another log, and the hype train is saying that it's fully carve-able, so it counts as "airplane supremacy," but that's a crappy definition and not what we're waiting for.

At the time of the Wright brothers, which was incidentally before Frisbees were invented, there were some naysayers that thought heavier than air flight was simply impossible. A Frisbee, which can be chucked much further than a balloon, would for them serve as actual proof of "wooden supremacy."

Interesting. Any idea how they responded to the obvious retorts about heavier than air birds?

If they weigh the same as a duck then that should explain things, the birds are witches

It's a simple question of weight ratios!


I mostly agree with this. If you are interested in the Extended Church-Turing Thesis, this event is very important. If you are interested in factoring, this is a non-event. I think most people are interested in factoring.

I don't think it's a non-event even if you're only interested in factoring.

This proves the underlying principle of a quantum speedup is a physical reality. It might be something people already take for granted if they're interested in factoring, but it's a crucial prerequisite that had not yet been irrefutably demonstrated.

Regarding Lukin’s experiment - which is very interesting and high quality - in most instances it can be simulated by existing classical methods (matrix product states / DMRG being a key example). This does not detract from the importance of realizing the same system experimentally at all. But it does prevent it from realizing a gap between classical and quantum computing.

You can do small chemical simulations easily on a processor like this.

Really? I haven't seen any molecular simulation proposal that can be run without quantum error correction. I think if there is such a thing it would count as a breakthrough by itself. Any reference?

There are many, many experiments done simulating molecules, from hydrogen dimers to water. Just one example of many:


That paper is about using a classical computer to simulate a quantum computing algorithm.

I'm pretty sure OP is talking about an actual, physical, exists-in-reality usage of a quantum computer to do anything meaningful, not theory. There's lot of theory.

> But others, including my good friend Gil Kalai, are on record, right here on this blog predicting that even quantum supremacy can never be achieved for fundamental reasons. I won’t let them wiggle out of it now.

Here is Gil Kalai's post from yesterday: "Quantum computers: amazing progress (Google & IBM), and extraordinary but probably false supremacy claims (Google)." https://gilkalai.wordpress.com/2019/09/23/quantum-computers-...

Note also Kalai's comment on this very blog post:

> Scott is correct that inability to achieve quantum supremacy is quite central to my argument (since 2014), so naturally I don’t expect that the recent claims by the Google team will stand. Of course, if these claims (or any other quantum supremacy claim) are correct then this would defeat my theory. It goes without saying that the claims are so fantastic that also responsible believers in quantum computers should examine these specific claims (like an alleged NP=! P proof) carefully and skeptically. Also, it would be nice to hear some details about the precise claims and methodology of the Google team.


I think we need to come to grips with a hard truth about the reality of academic life. Once you invest decades of your life into a research subject, if it turns out the entire thing is never going to work, there are major social and financial pressures to deceive the public about the true nature of the subject.

I saw this happen with string theory first hand, and my experience with string theory was a major factor that led to changing paths to pure mathematics and computer science.

As someone who has spent a lot of time researching QC, I do not believe it will ever be possible to build a practical quantum computer. We have been over this so many times on this site. Here is a good link from a serious professional who takes the same position [1]

In fact, my personal opinion is that quantum computers are functionally, a hoax, the main purpose of which is to generate hype, secure research grants, ensure career stability for academics, and give science reporters something to write about to get clicks while deceiving the public.

[1] https://www.quantamagazine.org/gil-kalais-argument-against-q...

> Once you invest decades of your life into a research subject, if it turns out the entire thing is never going to work, there are major social and financial pressures to deceive the public about the true nature of the subject.

You're implying that Scott Aaronson is being deceptive.

I think that needs better evidence. The story you linked to is two years old and it's about QC skeptic Gil Kalai.

Scott addresses him directly in his post:

> If quantum supremacy was achieved, what would it mean for the QC skeptics?

> I wouldn’t want to be them right now! They could of course retreat to the position that of course quantum supremacy is possible (who ever claimed that it wasn’t??), that the real issue has always been quantum error-correction. And indeed, some of them have consistently maintained that position all along. But others, including my good friend Gil Kalai, are on record, right here on this blog predicting that even quantum supremacy can never be achieved for fundamental reasons. I won’t let them wiggle out of it now.

I am not very familiar with Scott Aaronson and am just making a general observation about the subject of QC; I have no idea what his motivations or intentions are. If Scott believes QC will one day be practical and I don't, only time can tell who is right and who is wrong. I have seen this kind of goalpost moving in string theory, and it's been going on for 20 years with QC in a strikingly similar way, imo.

There is no way to argue against this goalpost-moving style of debate, because even if another 20 years go by and QCs still don't exist on a practical level, the goalposts will just keep getting moved.

I am extremely confident that this will continue until the public gets bored of hearing about it.

I think Scott is agnostic about whether QC will be practical or not. To quote the original article, "we have no idea how long it will take".

>"we have no idea how long it will take"

IMO, this is a reasonable position to take. However, we have to accept that even if a universal QC is possible, "how long" could conceivably be centuries.

Our perspective with regard to technology has been significantly biased by our experience with Moore's Law/Dennard Scaling. The speed of progress in ICs from 1960 to 2010 (and especially 1980-2000) is an outlier in the history of science and technology, yet we base expectations in everything from AI to QC on it.

I'm not a physicist. In my current mental picture, I also feel the exponential speed-up is a hoax.

Yet I have no trouble believing in this quantum supremacy at the current scale.

The way I see it, a quantum computer is "running" an algorithm like Toshiba's "simulated bifurcation," at a frequency much higher than current computers.

If I had to take a guess, it is like running classically at the Bohr frequency of an electron, ~10^16 Hz. So it's kind of like trying every combination with some heuristics. Currently we are tapping into these six additional orders of magnitude of computation, which allows a brute-force search to appear instant. But once we have used up this 10^6 speed factor, the quantum computer will miss a lot of the solutions and you will have to increase the sampling time exponentially to find your solution. We are currently in the exponential part of the S curve.

You've got to understand that those machines are expensive to build, so putting a cap on the performance won't help you raise funds. But these machines are beautiful. They are some marvel of engineering. There are plenty of things to discover. They need to get built.

They will bring plenty of useful techniques in the how to manipulate and build very small things, and more importantly how to compute without releasing so much heat. But those advances will probably get locked into a few private companies by monopolizing the few big names that can lend the credibility required to raise funds.

It should be mentioned that IBM managed to simulate a 56-qubit version of the same problem Google describes in the paper (see https://pastebin.com/RfUMXJZE) on a classical supercomputer:

- https://www.ibm.com/blogs/research/2017/10/quantum-computing...

- https://www.newscientist.com/article/2151032-googles-quantum...

So technically Google cannot claim to have reached quantum supremacy with only 54 qubits as described in the paper.

I have the greatest respect for Scott, but I do think he’s being a bit too enthusiastic here in comparison with the D-wave. At the very least I think he should have included this question in his list:

Q: Why can the D-wave not be used to illustrate “quantum supremacy” in a similar way?

(As I understand it the D-wave can sample from the solutions to ”ising model-like” problems, which I assume would be extremely difficult for a classical computer to do (but probably possible to verify).)

In principle, there is no reason why D-wave also can't achieve quantum supremacy. It is just that D-wave hasn't, so far.

As for the Ising model, D-wave didn't outperform the classical algorithm once the classical algorithm was tuned (it was a new problem, so the existing classical algorithm wasn't the best possible); see https://arxiv.org/abs/1401.1084. There are also reasons to suspect that the Ising model will not provide any quantum speedup; see https://arxiv.org/abs/1411.5693.

Well D-wave is evaluated on an optimization task. This Google thing isn’t even trying to solve a real problem. What says you can’t get some very hard to replicate random bits out of a D-wave?

If D-Wave could achieve quantum supremacy, they definitely would, and would heavily publicize it. They haven't, which gives us strong evidence that they can't right now.

Do you retract the claim Ising model problems are "extremely difficult" for classical computers? I replied with practical and theoretical considerations why it is not so.

I made no claim at all; I just stated an assumption that motivated my question. But you also misrepresent my assumption. My assumption was that the D-wave could sample from the solutions to “ising model-like problems” much more efficiently than a classical computer. A quick glance at your references gave me the impression they were about finding a single ground state / global minimum of the Hamiltonian. That’s a very different problem.

Unfortunately, the same complexity considerations do not hold for Ising problems. While many NP-hard problems can be formulated in Ising form, it is often not hard to get a “pretty good” solution to these problems. DWave and collaborators have spent a decade trying to come up with exactly the same thing as demonstrated here — namely, a problem specifically designed to demonstrate quantum advantage of any sort — and as of now have not succeeded.

To answer your question specifically: DWave does not allow the same level of control over qubits.

I’m aware of D-wave’s struggles, but my impression was that they had failed at finding a “deterministic” problem where they could show quantum advantage. I’ve never heard a claim that whatever probability distribution the D-wave can sample from a classical computer can also sample from. I’d love to read about it if you have a reference!

My understanding is that it is exactly the case D-wave probability distributions can be sampled classically.

Another reason I feel this is oversold: this quantum “computer” cannot run Shor's factoring algorithm. But even if it could, it would only be able to factor integers up to 2^53. The time required to factor a 2^60 to 2^80 integer on a classical computer is measured in milliseconds [1]...

Quantum supremacy in any reasonable sense of the word supremacy is a long way off.

1. https://hal.inria.fr/file/index/docid/451604/filename/smalli...
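For scale, here's a minimal Pollard's rho sketch in Python (the ~2^60 semiprime is my own illustrative choice, not taken from the linked paper) showing how quickly a classical machine dispatches numbers of that size:

```python
import math
import random

def pollard_rho(n: int) -> int:
    """Return a nontrivial factor of a composite n (Pollard's rho)."""
    if n % 2 == 0:
        return 2
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n            # tortoise: one step
            y = (y * y + c) % n            # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                         # cycle yielded a proper factor
            return d

# A ~2^60 semiprime built from two well-known primes.
n = 1000000007 * 1000000009
f = pollard_rho(n)
print(sorted([f, n // f]))  # [1000000007, 1000000009]
```

Pollard's rho runs in roughly n^(1/4) steps, so even interpreted Python recovers both factors in well under a second on a laptop.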

You may not agree with the authors' or Aaronson's definition of quantum supremacy; that's ok. I think his definition is very reasonable. Aaronson argues at great length that this experiment will prove quantum computing supremacy, and defines it in the first entry of the FAQ as follows:

> [quantum computing supremacy] term refers to the use of a quantum computer to solve some well-defined set of problems that would take orders of magnitude longer to solve with any currently known algorithms running on existing classical computers

.. and continues to explain why this setup does exactly that.

Yea... I’m not suggesting of course that Scott is wrong about this being an illustration of “quantum supremacy”... (Did you think I was...?)

But comparing it to manned flight or the first nuclear reactor... In those two cases there was a clear path to something very useful. In my mind this experiment changes very little as to how probable it is that we will soon have useful quantum computers, or even that we will ever have them. I guess that’s another question for Scott’s list:

Q: If what we care about is computing solutions to difficult real world problems, in what way is this a meaningful milestone?

Well I don't want to answer for him but you could make the same point with the airplane:

The first flight, carrying two people in a plane, lasted only a very short time. I could have ridden my horse more quickly over that distance! How is this a meaningful milestone?!

Well.. it demonstrates something that cannot be done on a horse (classical computer) but that can be done with a plane (quantum computer).

Is the application useful today? Probably not. But that shouldn't discourage anyone.

> But comparing it to manned flight or the first nuclear reactor (...) this experiment changes very little as to how probable it is that we will soon have useful quantum computers, or even that we will ever have them.

Not a lot of people directly own & operate their own airplane, let alone a nuclear reactor, and yet both have had a significant impact on modern life. As for this experiment & QC, only time will tell of course.

I think you misinterpreted "we will ever have them" part. bjornsing is not talking about whether there will be personal quantum computers. It is about whether there will be useful quantum computers at all. That is still open, for example because of issues related to quantum error correction.

> This experiment changes very little as to how probable it is that we will soon have useful quantum computers, or even that we will ever have them.

I completely agree with this statement. This is mainly about Extended Church-Turing Thesis.

The claim here is that they're about to demonstrate quantum supremacy on a specific hard sampling problem, where one small quantum computer will produce a result in minutes that takes months on a big cluster of classical computers.

That is still clearly quantum supremacy, even if Shor's algorithm is not yet implementable.

I've read almost all comments here and, although I've grown up with lots of technology, this is the kind of stuff that will turn me into my parents/grandparents. It's like when I was teaching them how to operate the VCR years ago.

Your analogy explains itself away: maybe you knew how everything inside a VCR worked, but I didn't. I just knew what the buttons did and how to clean the head. I'm sure we will figure our way around these quantum knobs as well!

(Seriously though, why the hell did those VCRs have such a large green board??)

Most major breakthroughs were pretty low tech in origin, and the high tech was all about optimizing the idea. I think we get fundamental research backwards nowadays.

At the very least, we shouldn't be comparing quantum computing to air flight. The Wright brothers were basically hobbyists, incomparable to the leading global technology corporations in the history of human civilization.

This seems more comparable to the moon landing or LHC Higgs-Boson discovery. Challenging to pull off once, and the next steps seem to increase exponentially in man-power and cost.

Those haven't really brought us much benefit. The benefit to effort relationship seems to be inverse.

I think the proper analogy is controlled nuclear fusion, which google is also dabbling with.

So it looks like QC is good for speeding up simulations, and with far less overhead for ECC and such than it needs for things like factoring. So is simulation the QC "killer app"?

Does it only do certain kinds of simulations well, i.e. simulations of quantum systems, or can that be generalized? Can it be applied to economic, environmental, traffic, etc as well?

>Can it be applied to economic, environmental, traffic, etc as well?

Our current understanding is that quantum computers won't offer a speedup for the simulation of nonquantum systems. The only simulations they'll be faster for are systems for which quantum effects are important.

Of course it's possible that someone will discover an algorithm that gives quantum computers an exponential speedup in the simulation of any system. But I think that's pretty unlikely, because it would imply that quantum computers were exponentially faster than classical computers for every problem, since you could just use your quantum computer to simulate a classical one.

> Our current understanding is that quantum computers won't offer a speedup for the simulation of nonquantum systems.

The speedup from Grover's algorithm is essentially universal; it's just not an exponential speedup. So for example, in your non-quantum simulation task, say you want to search for an input to a cellular automaton that makes it spell your name after 5000 timesteps. You can use Grover's algorithm to find that input with roughly the square root of the number of simulation runs that a classical machine would need (and perhaps fewer, since there are likely multiple solutions).

To me that's one of the most interesting things about quantum computation: There is a very broad class of things where it offers a modest speedup (sqrt) and a narrow (but useful!) class of things where it offers an exponential speedup. But if you imagine almost any kind of 'superpowered quantum computation'-- something that is a little more powerful than the laws of physics as we know them allow-- you almost always get an exponential speedup for everything (or even more absurd results, like every problem being solvable in constant time). QC has this interesting property of being just strictly more powerful than classical computing, but without being magic-instant-computing.

[Which is also why many of the common incorrect descriptions of quantum computing are sad, stuff like "testing all values in parallel"-- if it worked like that description it would be magic instant computing.]
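The square-root scaling is concrete enough to show in a tiny statevector sketch of Grover's algorithm (toy problem; the marked index is an arbitrary choice of mine). Searching N = 256 items needs about (pi/4)*sqrt(N) ≈ 13 oracle calls, versus ~128 expected classically:

```python
import math

N = 256                # search space size
marked = 177           # the single "winning" index (arbitrary choice)
state = [1 / math.sqrt(N)] * N                  # uniform superposition

iterations = round(math.pi / 4 * math.sqrt(N))  # optimal Grover iteration count
for _ in range(iterations):
    state[marked] *= -1                         # oracle: flip marked amplitude
    mean = sum(state) / N
    state = [2 * mean - a for a in state]       # diffusion: invert about mean

print(iterations, state[marked] ** 2)  # 13 oracle calls, success prob ~0.99
```

Note that iterating further would overshoot: the success probability oscillates, which is part of why "try all values in parallel" is the wrong mental model.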

> The speedup from Grover's algorithm is essentially universal; it's just not an exponential speedup. So for example, in your non-quantum simulation task, say you want to search for an input to a cellular automaton that makes it spell your name after 5000 timesteps. You can use Grover's algorithm to find that input with roughly the square root of the number of simulation runs that a classical machine would need (and perhaps fewer, since there are likely multiple solutions).

I'd view this as using fewer simulations, rather than doing the simulations faster.

Incidentally, I happen to be the author of some software for finding CA predecessors: https://github.com/OscarCunningham/logic-life-search/. Good luck getting it to do 5000 or even sqrt(5000) generations though! It's SAT solver based, so as soon as someone does invent a quantum SAT solver it'll plug right in.

> [Which is also why many of the common incorrect descriptions of quantum computing are sad, stuff like "testing all values in parallel"-- if it worked like that description it would be magic instant computing.]

To be fair, many quantum algorithms do begin by preparing a superposition of all possible inputs and then applying the unitary corresponding to some classical function. The difficulty arises when you have to extract information from the resulting output superposition.

There is no known useful application of current quantum computing (so-called NISQ, for Noisy Intermediate-Scale Quantum). It is an open question whether there is any, see https://arxiv.org/abs/1801.00862.

What's e.g. China and Russia doing in this space? Or the US Gov't for that matter? I can't imagine that national governments are just waiting for Google and IBM to do the research and publish their results. Is it possible/likely that NSA or some other national equivalent is way beyond these results already?

I don't know what the NSA spends on R&D, but I have to imagine they don't have a multi-billion dollar budget for secret R&D facilities and talent. It would seem more likely they are monitoring developments in public and private research and responding opportunistically. I'm just making assumptions here, but also I don't think somebody with clearance could correct me here in public :P

The NSA has the money and a history of massive secret projects. There are more mathematicians at the NSA than in all of academic cryptography. However, my general impression is that most people don't think the NSA has a secret QC project because there hasn't been a noticeable disappearance of the best young experimental QC researchers from the pipeline like there has been for cryptography. It's the promising people quietly leaving that is hard to hide; massive construction projects are comparatively much easier to hide.

A recent NYT opinion piece from the general counsel of the NSA discusses many of these issues, including quantum computing: https://www.nytimes.com/2019/09/10/opinion/nsa-privacy.html

Thanks much. For others, here are the relevant bits I could find (my emphasis):

> ...It is by no means assured that our national security sector will be able to attract on a sufficient scale the scarce engineering, mathematical and scientific talent that would supply the necessary expertise. That challenge will require investment, enlightened strategic management and an innovative approach to luring a different type of expert out of the private sector into government. Meeting this challenge will require a greater reliance in general on the private sector, since government alone does not possess the requisite expertise. A large portion of the intelligence community’s experts on the military capabilities and plans of Russia and China joined government during the Reagan administration; other experts on counterterrorism and new technology burnished their technical skills following the Sept. 11 attacks. Many of those experts are nearing retirement or have already left to join an attractive private sector. With millennials believing that technology in the private sector now allows them to help change the world — previously the idea of a mission had been largely the province of public service — it is not clear that the intelligence community will be able to attract and retain the necessary talent needed to make sense of how our adversaries will make use of the new technology...

> ... the government no longer possesses the lead in complex technology, at least in many areas relevant to national security. Arguably, the most powerful computing and sophisticated algorithm development now occurs not in the Pentagon or the N.S.A. but in university research labs and in the Googles and Amazons of the commercial world. (To be sure, the government still maintains its superiority in important areas ranging from nuclear energy to cryptography.)...

> ... our national security agencies for the first time must amass the talent and systems to understand not simply a military challenge but also challenges across a broad range of technology and global finance issues. The capacity for such understanding currently resides principally in the private sector and our universities, not the federal government.

I'd think the opposite. I would imagine they have orders of magnitude more to spend on crypto research than a private search engine company (which is why I asked the question originally). But really I have no idea. (If they did then why aren't there job ads etc). And you're right, probably nobody who does have an idea would be able to say.

There are a lot of research teams working on this in China/Europe/North America/Australia (present at most big conferences in the field). Not much in Russia from what I have seen (not counting well known Russian scientists that work in other countries).

Much needed perspective in the age of overly dramatic headlines and straight up misinformation.

I posted this on Scott's blog, still awaiting moderation:

"I’m trying to understand the chain of inference from Google’s leaked result of quantum supremacy to theoretical computer-science “hardness” of the computation.

Computing the exact probabilities of a random quantum circuit is proven hard, but computing the exact probability of a random algorithm is also an open problem, so what you really care about is approximation of computing the probability (up to epsilon), or even weaker, just sampling from a probability (also up to epsilon under some metric of comparing distributions).

Their computer implements Random Circuit Sampling, and their cited “theoretical” hardness results are your paper from 2017, of QUATH => HOG, and as far as I understood from your paper, proving that approximation / sampling from a (random) quantum computer/circuit is hard is still an open problem (Am I wrong here? I’m not up to date with everything), and a difficult one. But you did make a compelling argument that even if QUATH was solvable, it will lead to new insights.

Their actual benchmark uses cross-entropy benchmarking, called XEB in their paper, and defined as 2^n · ⟨P(x_i)⟩_i − 1, where ⟨·⟩_i denotes the average over the sampled bitstrings x_i. I could not find any ‘theoretical hardness’ paper at all using this benchmark; some results talk about a different XEB using a log, but they prove these are not strong enough and can be reached classically.

Are there any hardness results for their benchmark? I wonder why they would use that instead of the proven HOG, even for smaller input sizes, I would see more value in a benchmark which has theoretical roots. I feel intuitively like their benchmark is much more similar to the log variation of XEB than to HOG, but didn’t think it through completely.

As for their gates, I understand they are not general random quantum gates; instead they have a variation of iSWAP and a controlled-phase gate, CZ. The hardness results that do exist also don’t address the limited gate set, or how it changes the distribution of random circuits. Are those two gates at least universal, so that we have hope that this could be proved? Their statement was “but reliably generating a target unitary remains an active area of research”, so I assume they aren’t universal. I would love to see some heuristic argument as to why those two in particular are hard, especially given results like the Gottesman–Knill theorem; it seems like some surprising gate sets do have classical simulations. iSWAP is just swap and phase gates, and CZ is a phase gate only on the 11 state. Doesn’t feel like it could be universal to me, but maybe I’m wrong.

I probably missed some things, so I’d love it if you could point to papers filling the gaps between their results and a real theoretical statement. I don’t have the intuition to tell which gaps are important and which are not. I also don’t know which papers/results already exist and I can’t really search as I am not an academic, and don’t have full access to many papers, and you probably know the state of the art results. "
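For what it's worth, the linear cross-entropy fidelity discussed above, F = 2^n · ⟨P(x_i)⟩_i − 1, is simple to compute once you have the ideal output probabilities. A toy sketch (the "ideal" distribution below is a hypothetical Porter-Thomas-style stand-in of my own, not a simulation of Google's circuit):

```python
import random

random.seed(1)
n = 5                      # toy qubit count (the real experiment used 53)
dim = 2 ** n

# Hypothetical stand-in for a random circuit's ideal output distribution:
# exponentially distributed weights, normalized (Porter-Thomas-like shape).
weights = [random.expovariate(1.0) for _ in range(dim)]
total = sum(weights)
ideal_p = [w / total for w in weights]

def linear_xeb(samples, p):
    """F = 2^n * (mean ideal probability of the observed bitstrings) - 1."""
    return dim * sum(p[x] for x in samples) / len(samples) - 1

good = random.choices(range(dim), weights=ideal_p, k=20000)  # faithful sampler
bad = random.choices(range(dim), k=20000)                    # uniform noise

print(linear_xeb(good, ideal_p))  # close to 1 for a faithful sampler
print(linear_xeb(bad, ideal_p))   # close to 0 for pure noise
```

The intuition: a faithful sampler hits the heavy (high-probability) bitstrings more often than noise would, pulling the average ideal probability, and hence F, above zero.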

- Cross-entropy is used for roughly estimating the fidelity (see arXiv:1608.00263), which is independent of the problem they're solving. You're confusing it with the overall distribution of the readout states, which is the hard problem.

- CZ is a perfect entangler. No, iSWAP and CZ do not form a universal set. They typically combine virtual Z rotations (which have a continuous angle parameter) with an additional one-qubit gate, such as an X or Y gate (or their square roots, which they mention in the paper), which gives a universal set. Although they don't make use of continuous Z rotations in that particular experiment, the device is capable of doing so, and is a (noisy) universal quantum computer.
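For readers following the gate discussion, here are the textbook matrices for iSWAP and CZ (standard definitions assumed; the device's calibrated gates will differ slightly), with a quick check of what each does:

```python
# Basis order: |00>, |01>, |10>, |11>. Textbook definitions (assumed).
ISWAP = [[1, 0,  0,  0],
         [0, 0,  1j, 0],
         [0, 1j, 0,  0],
         [0, 0,  0,  1]]

CZ = [[1, 0, 0, 0],
      [0, 1, 0, 0],
      [0, 0, 1, 0],
      [0, 0, 0, -1]]

def apply(gate, state):
    """Matrix-vector product on a 2-qubit statevector."""
    return [sum(gate[i][j] * state[j] for j in range(4)) for i in range(4)]

print(apply(ISWAP, [0, 1, 0, 0]))  # |01> -> i|10>: [0, 0, 1j, 0]

# CZ entangles: applied to the product state (|0>+|1>)(|0>+|1>)/2, the
# resulting amplitudes [0.5, 0.5, 0.5, -0.5] no longer factor into a product.
print(apply(CZ, [0.5, 0.5, 0.5, 0.5]))
```

Neither matrix contains a continuous parameter, which is why the extra single-qubit rotations (the virtual Z gates mentioned above) are needed to get a universal set.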

Practically speaking, how long until the Quantum Jubilee, when (much of) encryption is broken?

Where does this put Rigetti Computing?

Most things that get years of speculation and press never come to fruition

Like flying machines, speculated about from DaVincis time?

Can I use it for my new ruby on rails webapp?

Google achieved this a few days ago, and I'm very much impressed. They indicated the quantum machine could process the data in 3 minutes, while that same data would take the most advanced classical computer 20 years to process successfully. Impressive

I recently told a friend that the perfect quantum computer application would be a machine that can apply all possible operations simultaneously and consistently filter out the ones that result in a valid computer program producing a predefined result.

My cat can behave as a cat would be expected to behave. And if we verify the measurements of her behaviour using a classical computing cluster - to make sure her behaviour really falls within the distribution of expected cat behaviour - that's a very complicated calculation that will take many processor-days. But my cat can just do that stuff in real time. Has my cat achieved Quantum Supremacy, and is there a trophy or something I can pick up somewhere for that?

That sounds like question 12 from the FAQ ;)

I didn't really understand the answer though. The computer is programmable, but the way supremacy is demonstrated is orthogonal to its programmability - it is still demonstrating superiority on just the one problem of simulating itself starting from any initial condition, no?

> it is still demonstrating superiority on just the one problem of simulating itself starting from any initial condition, no?

It's the "any initial condition" part that is hard. You cannot put a cat, or even a small molecule, in any one of its possible initial states, let it evolve, measure the outcome, and get reliable answers. (If you could, that would make it a computer! That's all a computer does, assuming the initial states and evolution are rich enough to, e.g., be Turing complete.)

> The computer is programmable, but the way supremacy is demonstrated is orthogonal to its programmability

The fact that the problem posed by the challenger C is "run an arbitrary quantum circuit" is in fact directly connected to the programmability.

Ah yes, indeed. So it hinges on whether my cat is programmable. Well as it happens my cat has a number N of programmable modes:

1 - cat having its ears tickled

2 - cat watching something move under a sheet that might be a mouse

3 - cat waiting for tin of cat food to be opened

4 - cat wanting to get through a door

etc etc up to 'N'

So a “challenger” generates a random number C between 1 and N, the challenger sends C to me and my cat, and I apply the appropriate 'input' to my cat - tickling her ears or opening a tin of food or whatever - to get her into the correct mode. We then measure her behaviour, fire up our cluster of computers, run the simulation, and wait a few months for the numbers to get crunched to verify that the cat's behaviour was within the expected probability distribution for C.

"Programmable" in this context would mean that you can encode complex computational problems into your cat's behavior. The idea is to distinguish a cat that can only calculate cat behavior (which is of course a very easy problem) from a cat that could eventually be engineered to calculate whatever you want.

If you can solve complex computational problems faster than a cat-sized computer by carefully arranging tins of cat food, it would definitely be fair to say your cat's computational supremacy is a significant achievement.

Google's device is not encoding complex computational problems. It's just being told to arrange its qubits into a random series of gates. Could it do the Fizz Buzz algorithm? Or output the Fibonacci sequence? If not, then in what way is it programmable?

It's programmable in the same way an FPGA is: you specify how the gates are connected.

Note that the "random series of gates" is generated on a classical computer, and then the quantum processor is set to that configuration. It's not that you turn on the quantum processor and use whatever random uninitialized state it happens to be in.

> It's just being told to arrange its qubits into a random series of gates.

It's not told to arrange its qubits into a random series of gates. It is told to arrange them in a specific order that was chosen at random.

It is programmable. But it is as if you had to program in Malbolge[0], except worse, due to the hardware constraints and lack of error correction.

[0] https://en.wikipedia.org/wiki/Malbolge

What's a classical computer, if not silicon arranged into a random series of gates?

I'm not sure how to evaluate the question of whether it could do the Fizz Buzz algorithm. Could it run fizzbuzz.c? No, it doesn't have an OS. Could it perform a sequence of operations isomorphic to "count to 100 by 3s and 5s"? It sounds like Aaronson's answer would be yes, but I think you'll be skeptical (and I am too) about whether that really means anything.

A stone tablet could "perform" a sequence of operations isomorphic to "count to 100 by 3s and 5s". But "count to N by 3s and 5s" for any N with 0 < N < 2^32? That would require a lot of stone.

Who even asked these questions?

I question this. All of this.

Literally people on HN have been asking many of these questions for years whenever QC is discussed.

I believe Feynman originally asked the QC question many many years ago. What an exciting milestone and a great FAQ.

"Always naysaying! Everything I create!"

> It’s like, if you believed that useful air travel was fundamentally impossible

Uh, there are birds. Literally everyone thought useful air travel was possible, and not only possible, but so easy that a Darwinian process was able to produce it, not once, but literally thousands of times, in thousands of ways.


But looking at the actual "experiment", I don't count it as computation in any meaningful sense; it's morally equivalent to the following:

Set up a digital camera and point it at a scene. Take a picture. Now take millions of additional pictures, without moving the camera.

Measure the values at each pixel. See how they correspond to "amplitudes"? That's our computation. <= (This is the part of the article that should raise eyebrows...)

Article (smugly): How about you simulate (render) the scene using an actual computer? Measure the amplitudes of the resulting millions of images. OMG, that took you so long!!! Loser.

Are we being punk'd here?

The digital camera is the quantum computer. The "scene" is the random initial state C. The scene is then translated into a renderable scene for the classical computing version, and rendered with an unbiased physically based renderer, which produces the same result.

I fail to see how any of this is even remotely exciting, much less interesting. A camera is not a computer, no matter how many measurements you make with it, nor how "random" the scene you are taking pictures of is.

And yes, simulating reality takes more cycles than just measuring whatever happened with a camera. Photos are "faster" to get than the equivalently rendered scene—news at 11!

Again: Are we being punk'd here?

I have no comment on the QC claims, but regarding the flight analogy, it seems you're being disingenuous: https://www.xaprb.com/blog/flight-is-impossible/

Curious: Did you Google that just now, in order to comment here?

You wouldn't be any less wrong if he did, I don't really see how that's relevant.

You can find other contemporary counter-examples too, for instance nuclear fusion. We know it's possible, it exists in nature, we managed to do it under very special conditions but it's still unclear whether it's actually possible to build a competitive, safe commercial nuclear fusion reactor.

Yes I did. Did you invent a historical "fact" to comment here?

Perhaps it will make more sense to explain how a D-Wave machine works, as (a) they actually exist and you can use one today for free online, and (b) it's way simpler than a gate model QC is.

Imagine a bunch of magnets. Imagine forcing them into a "frustrated" configuration; maybe you have them all on a grid, and you have servomotors that can rotate them to face any way you like. The servomotors are strong, so counteracting the magnetic forces is easy for them. You design an appropriately frustrated configuration, and then release all of the magnets at once. What configuration do they rotate into?

A quantum annealer is conceptually similar. Each qubit, on a regular, patterned graph, has connections to its neighbours. You can leave these alone (no correlation) or tune them up to +/- 1, corresponding to "must be the same as this other qubit" and "must be different than this other qubit". You can also bias each individual qubit to be a 0 or a 1.

Then, you let it go, and it anneals, and you observe the result. Your goal is to get to the _lowest energy state_ possible: the least possible frustration remaining.

In our magnet example, it would be as few magnets as possible wanting to move - if you poked them with your finger they'd want to go back into their current state. You could imagine that your magnets might not get down to their absolute lowest energy state; maybe it would take too much energy to flip from their starting state to that lower state. In a quantum system, because of tunneling, the system can reach these lower ground states. Rather than being in a fixed position the way our magnets were, qubits are in a quantum superposition, so they can reach a lower energy state without having to climb up that energy hill. Or so we're led to believe by the numbers, anyway; I'm not a physicist.

Now, if you can map some useful computational question onto the original configuration of qubits that is answered by the ending position, you've got yourself a useful quantum computer. This is the hard part! The key is to use optimization algorithms where a lower energy state = a more optimized result. If you can do this, there's a ton of employment waiting for you.

Then, if you want "quantum supremacy", it's matter of providing more optimized answers in less time, particularly as the problem scales up in complexity. There does indeed appear to be a crossover point coming in a decade or so, at least for the small class of real-world problems that the Ising Hamiltonian works for.

> Now, if you can map some useful computational question onto the original configuration of qubits that is answered by the ending position, you've got yourself a useful quantum computer. This is the hard part!

Yes, I agree. But this has not been demonstrated. What's being demonstrated (apparently) is that measuring a quote-unquote "quantum computer" doing whatever it does naturally is easier than simulating said quantum computer classically. Well, yeah. Duh.

That's the same thing as "rendering" a scene with an unbiased renderer vs. setting up that same scene in reality and using a camera. No one in their right mind would point to the camera and say they'd created a next-gen, heretofore impossibly fast unbiased rendering algorithm.

Technically, the digital camera is "computing" the same result—but no one would call it that, and IMO, the same is true of what is being discussed in the FAQ. It's literally NOT COMPUTATION, which brings us back to your line:

> Now, if you can map some useful computational question onto the original configuration of qubits that is answered by the ending position, you've got yourself a useful quantum computer. This is the hard part!

It's not only the hard part, it's the only part that matters. Until then, you have AT BEST a "quantum camera". Potentially useful, perhaps—but it's not a computer, or computation.

Anyway, thanks for responding. Much better than drive-by downvoters probably hoping to get PhDs in this stuff.

Also, my intent is not to belittle Google's engineering effort. In the same way that I wouldn't belittle Sony for making 24mmx36mm backlit CMOS sensors. It's impressive! Good for them. But it's not computation, and it definitely doesn't establish some kind of "quantum computing supremacy" (since no meaningful computation is being done). When they stop handwaving about mapping actual computation problems to the scene they've set up and are measuring, then I'll get excited. Maybe it's doable, maybe not. But a quantum camera, AT BEST, is one step along the path...

I think your camera example is a false equivalence that makes this seem as if it's not a computation. The camera is not running the same algorithm as the renderer and so you're comparing different things.

The experiment used a classical computer to randomly generate a circuit C, told the quantum computer to execute it, and recorded the result. Then they repeated this but executed each circuit in the most optimal way a classical computer can. Finally they compared the distributions to verify that the quantum computer's results matched the correct classically computed results.

This proves that the quantum computer is able to generate that distribution faster than a classical computer and isn't just doing some other spurious process that happens to be faster.
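The verification loop just described can be sketched at toy scale, where a classical machine can still play both roles. A hedged sketch of the linear cross-entropy benchmark (a Haar-random state stands in for the random circuit's output; the real experiment builds it from random one- and two-qubit gates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                      # qubits; small enough to simulate exactly
dim = 2 ** n

# Stand-in for a random circuit's output: a Haar-random state vector
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
p = np.abs(psi) ** 2        # ideal output distribution over bitstrings

# "Quantum computer": samples bitstrings from the ideal distribution.
# "Spurious process": samples uniformly, as pure noise would.
good = rng.choice(dim, size=5000, p=p)
noise = rng.integers(dim, size=5000)

# Linear cross-entropy benchmark fidelity: F = 2^n * mean(p(sample)) - 1.
# F is near 1 for a faithful sampler and near 0 for uniform noise.
def xeb(samples):
    return dim * p[samples].mean() - 1

print(xeb(good))    # close to 1
print(xeb(noise))   # close to 0
```

The point of the statistic is exactly the one above: a device that merely does "some other spurious process that happens to be faster" scores near zero.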

Yes, this random distribution isn't very useful and so this result is only really interesting as an experimental verification of theory. However, it is a computation that benefits from quantum speedup in real life!

Hopefully soon, algorithms will be found to generate more useful distributions (sampling seems, for the time being, to be the only practically doable application of this type of QC). For example, Aaronson mentions that generating verifiably random bits is not much more difficult than the noise generated in this experiment and could have an impact in a variety of cryptographic applications.

> The camera is not running the same algorithm as the renderer and so you're comparing different things.

[rewriting your words] The experiment used a classical computer to randomly generate a scene C, set that scene up in real life, and recorded the result. Then they repeated this but rendered the scene in the most optimal way a classical computer can. Finally they compared the results to verify that the scene in real life matched the correct classically computed results.

The scene + digital camera has the exact same role (and proof value) in my hypothetical experiment as the quantum computer + measurement device does in the Google experiment.

It's not the camera that "contains the circuit", it's the scene and the camera together that computes the same values (exactly, as it turns out) as the classically computed rendering algorithm of the same phenomena.

Call it "camera computing", write a paper in Nature, win Turing award. The hard part with camera computing is the same: How to map a computation onto the generated scene so that the measurement device (camera) gets a meaningful result faster than it can be simulated in the computer.


Look, I've worked on things for a long time only to discover in the end that there's nothing there. It sucks, I get it. Move on, try something else. There's nothing here.
