Is this true? I thought that in the 40s-50s there was some argument over whether practical computers could really be built given that vacuum tubes were so unreliable. Von Neumann gave a series of lectures in 1952 (eventually transcribed into an article) showing how it could be done: http://arep.med.harvard.edu/gmc/Von_Neumann_1956ro.pdf , and the argument became more or less moot once transistors arrived.
There's perhaps an analogy here to be made with quantum error correction, but that seems to be a lot harder...
By 1950 then, the only question is whether we should, not whether we can.
I suppose double entry would be your error correction. That also benefits from having a calculating machine.
An acquaintance flew Concorde a lot in the '80s; the reason: "I could get to Heathrow in the morning, fly to New York, do a meeting, fly back and get home the next morning". Another friend is an exec at a bank now and goes on month-long odysseys to the USA, Asia and Australia. The contrast in perspective and commitment is striking; I don't think that Concorde would matter at all to these folks.
The more comfortable you are in the flight, the less the actual number of hours flown bothers you.
No amount of money makes getting from your front door to the airport anything but a massive pain, potentially taking over an hour if you have some bad luck.
Although military investment has driven many different technologies over the years, ‘the military’ is actually a complex mix of the Armed forces themselves, government acquisition organisations, and prime contractors and their supply chains. The resultant acquisition processes of many countries are glacially slow, with sometimes decadal timescales from requirements definition to delivery of full operational capability. Military acquisition is not agile, except when urgent operational requirements force rapid tech change by taking acquisition shortcuts.
Where have I heard this before...
I'd be really interested to hear what Gil Kalai has to say on this particular subject; he's generally pretty fair-minded while being a QC skeptic. He has a response up, but I haven't read through it yet.
Except that you'd only be able to pay with physical cash.
If you're willing to admit handwaving and future-tech plans, then you can be excited now. It's estimated that a few million physical qubits (corresponding to a few thousand logical qubits) will be necessary to crack an RSA key. It's at least a decade away, maybe several, but few experts believe there are hard barriers to the number of qubits or the minimal cost.
Both of them are very flexible, but also definitely not independent.
To extend the analogy, quantum supremacy is like fusion as a neutron source. There are commercial fusion devices used as neutron sources.
Perhaps this was just a poor example, but we absolutely have a path to fusion power generation. ITER is under construction now, and is expected to be capable of 10x power returns. DEMO should have 25x.
 https://media4.s-nbcnews.com/j/newscms/2017_52/2273651/17122... (worth a click)
> ITER Project was initiated in 1988.
> The expected cost of ITER has risen from US$5 billion to US$20 billion, and the timeline for operation at full power was moved from the original estimate of 2016 to 2027.
> A technical concern is that the 14 MeV neutrons produced by the fusion reactions will damage the materials from which the reactor is built
... definitely not clear to me that this is a "clear path forward" to fusion power, but I was encouraged to learn that a project of this sort was underway.
> The expected cost of ITER has risen from US$5 billion to US$20 billion
So a factor of 4 in 30 years. That's approximately inflation, no?
"ITER will not capture the energy it produces as electricity, but—as first of all fusion experiments in history to produce net energy gain—it will prepare the way for the machine that can."
Let me repeat, the project will not produce any usable electricity. It's still an experiment.
And the start of the experiment is at the moment planned for 2025. The ITER project started in 2007. That's how much the preparations "just" for the experiment take, even if the previous experiments were done for decades.
Also from their FAQ:
"one of the missions for the later stages of ITER operation is to demonstrate the feasibility of one or more concepts of tritium production through the Test Blanket Module (TBM) program."
Namely, it's an experiment that "in the later stages" should manage to give results that would allow the development of the technology for tritium breeding. Without tritium breeding fusion can't be used commercially.
The path for experiments is known, but it's still far from confirmed that the desired results are achievable, as we'll need a lot of new developments which we don't have at the moment for that.
Also DEMO can be fully developed only once ITER succeeds, it needs the results from ITER.
It's simply very hard, and achievable as an experiment. The still-open question is whether it's really commercially viable, in the sense of whether the "hard" stuff can become manageable enough to be useful.
"Thing is ridiculous, that could never work because of reasons A, B and C."
"What are you talking about, here is a project that does the thing, it functions perfectly."
Nothing like reality proving someone's baseless naysaying wrong immediately.
He has exceeded my expectations with this post, which cuts through all the hype to communicate exactly what the results of this experiment mean for the field. It's worth reading and sharing.
"Enormity" implies moral reprobation, so it's a poor way of describing the significance of a computational discovery.
Enormous : enormity :: huge : hugeness?
Most discerning usage would be that both mean "very large scale", but that enormity preserves the meaning of bad at a very large scale.
That's become something of a losing battle as the synonymous usage has become an enormity.
Enormous : enormousness :: huge : hugeness.
Enormity = Immense scale of evil (e.g., the "enormity of the holocaust")
"Enormousness" isn't a word in common usage that I'm aware of.
If you Google "enormity," the dictionary definitions it displays before the results are:
1. the great or extreme scale, seriousness, or extent of something perceived as bad or morally wrong.
"a thorough search disclosed the full enormity of the crime"
a grave crime or sin.
"the enormities of the regime"
Merriam-Webster claims this is not the exclusive usage, and that enormity can mean "immensity" without normative implications when the size is unexpected. But the very example it cites, from Steinbeck, involves the "enormity" of a situation in which a fire was started.
That said, I agree that "enormousness" is an awkward word, which I do not use. I'm left to ponder the enormity of my own pedantry.
I don't know if I've seen "enormousness" before this thread.
When I visited Iceland, I remember a sign in English that said a cliff was insafe [sic]. `in` being a Latin morpheme, and safe being Germanic.
But then, it seems there are quite a few un+latin (unreal, unbalanced, unadulterated, uncertain etc), even if I can't think of in+Germanic.
Like most things in linguistics, the 'rules' are more a rule of thumb than the mathematical sort.
Germanic prefixes and suffixes seem to stick to pre-inkhorn roots better.
What exactly is a qubit? I'm not asking what does it mean, because I know there's superpositions and all that jazz, but as in...like, in an electronic circuit, what is a qubit? Is it made out of logic gates? Which ones?
If we can make one qubit, can't we just make a bunch of them by copy and pasting circuits similar to how we used vacuum tubes in the 60s and 70s? How come our current limit is only around 54 or so?
Are qubits, and quantum computers by extension, not even electronic circuits? If so...what the hell are they?
There are several reasons it doesn't scale easily to more qubits, but you can imagine that you don't want the chip to be large (must be cooled to 25mK!) but the qubits should be spaced quite far apart so they don't influence each other. Also, it's not a very simple circuit, so the layout of the transmission lines ("wire on a chip") becomes difficult to manage. The last problem also scales badly with the number of qubits (the middle qubit becomes progressively harder to reach).
There is a paragraph in the (leaked) paper that describes their chip:
> In a superconducting circuit, conduction electrons condense into a macroscopic quantum state, such that currents and voltages behave quantum mechanically [2, 30]. Our processor uses transmon qubits, which can be thought of as nonlinear superconducting resonators at 5 to 7 GHz. The qubit is encoded as the two lowest quantum eigenstates of the resonant circuit. Each transmon has two controls: a microwave drive to excite the qubit, and a magnetic flux control to tune the frequency. Each qubit is connected to a linear resonator used to read out the qubit state.
If you want a physical picture, check out : https://arxiv.org/abs/cond-mat/0703002
edit: this source is more appropriate and openly accessible
> > In a superconducting circuit, conduction electrons condense into a macroscopic quantum state, such that currents and voltages behave quantum mechanically [2, 30]. Our processor uses transmon qubits, which can be thought of as nonlinear superconducting resonators at 5 to 7 GHz. The qubit is encoded as the two lowest quantum eigenstates of the resonant circuit. Each transmon has two controls: a microwave drive to excite the qubit, and a magnetic flux control to tune the frequency. Each qubit is connected to a linear resonator used to read out the qubit state.
I understand most of the words in isolation – and I know that they are valid, even if combining them in a useful manner to understand exactly what they are describing is eluding me.
But if there was ever a paragraph that sounded like pure technobabble, this is it. Replace the technobabble found in the star trek matter transporter with quantum lingo, and it would sound very similar.
In a superconducting circuit
A circuit is a loop of something. Probably a solid material, like a metal or carbon, though it could be something more exotic. A superconducting circuit means the electrons in that material move without any resistance. This tells us the circuit is probably very cold--superconductors tend to break down at warm temperatures, like the ones in your house.
Conductors have electrons in them. Some are "stuck" to atoms, others get to move around. Conduction electrons are the ones that move.
condense into a macroscopic quantum state
Macroscopic means "big", and for QM, "big" means, like, more than a handful of atoms or particles. At least as far as QM is concerned, everything--rocks, electrons, photons, people, etc., has a quantum state, but we use the phrase "quantum state" to mean a state that's, like, WEIRDLY QUANTUM. For instance, a pencil sitting on your desk is normal. A pencil that's like, half on your desk and half on mine is "quantum". Condensing means the electrons are going to change from doing normal individual electron things into acting like some sort of Big But Weirdly Quantum system, likely as a group. Like a crowd becoming a flash mob, they might do some sort of synchronized dance, only except the dance involves, say, every dancer doing two or three or ten dance moves at the same time.
such that currents and voltages behave quantum mechanically [2, 30].
Specifically, we're gonna be able to see quantum effects like superposition in Big Things like "current" and "voltage". The circuit might be in a combination of 3 volts and 5 volts at the same time. Also some of those voltages might be partly real and partly imaginary. Long story.
Our processor uses transmon
What the fuck is a transmon? I had to look this one up; it's a way of making these qubits less sensitive to voltage fluctuations.
Qubits are quantum bits. A bit can be either 0 or 1. A qubit can be 0 or 1 or (and this is the quantum part) any state in between. Let's call the 0 state |0>, and the 1 state |1>. A qubit can be |1>, but it could also be (1/sqrt(2) |0>) + (1/sqrt(2) |1>). We call that a "cat" state, incidentally, because it's "half 1, half 0"--like Schroedinger's Cat, half alive and half dead. Again, the coefficients here are, in general, complex numbers, but we're gonna gloss over that.
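If it helps, here's a tiny sketch (my own, purely illustrative) of the state-vector picture in numpy:

```python
# A qubit as a 2-component complex vector (illustrative only).
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # |0>
ket1 = np.array([0, 1], dtype=complex)   # |1>

# The "cat" state (1/sqrt(2))|0> + (1/sqrt(2))|1>
cat = (ket0 + ket1) / np.sqrt(2)

# Probabilities are squared magnitudes of the (complex) amplitudes,
# and must sum to 1.
probs = np.abs(cat) ** 2
print(probs)        # ≈ [0.5 0.5]
```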
which can be thought of as nonlinear
Nonlinear means they don't respond linearly to some input. Ever had someone do a series of small, mildly annoying things, and at some point you snapped and yelled at them? That's called "going nonlinear".
Oscillators are things that vibrate, like strings. Resonators have preferred frequencies to vibrate at. I don't exactly know what this means in this context, though. I'm guessing the circuit has some preferred frequencies it really likes to oscillate at.
at 5 to 7 GHz.
Voltages or currents or whatever are gonna go back and forth 5-7 billion times a second. That's about the same frequency as wifi signals, or microwaves.
The qubit is encoded
A qubit is an abstract thing on a whiteboard. There are lots of ways we could actually make a thing that looks like a qubit. "Encoded", here, means "turned into an actual machine you can build in a lab".
as the two lowest quantum eigenstates of the resonant circuit.
An eigenstate, loosely speaking, is a state that has nothing in common with any other eigenstate. For instance, if we wanted to measure a particle's position on a line, we could take x=0 as one eigenstate, x=1 as another, x=2.5 as yet another, etc etc. An infinite number of eigenstates. Quantum systems can be in any (well, normalized) sum of eigenstates. My cat loves being inside and outside at the same time, so they're always trying to occupy 0.2|x=0> + 0.6|x=4> + 0.2|x=5>.
An operator is a thing you can do to a quantum state. Think of operators like functions on values, if you're a programmer, or like matrices that can be applied to state vectors, if you know linear algebra. For instance, I might have a measurement operator, which I use to look at my cat. There's also a special operator called the Hamiltonian, which (loosely) tells you what a state will look like after an infinitely small step in time.
Each operator has associated eigenstates, and those eigenstates have a magic property: if you apply that operator to one of its eigenstates, you get back the exact same state, times some complex number, which we call an eigenvalue. This means eigenstates of the Hamiltonian are, in a sense, stable in time. When we talk about the eigenstates of a system, we usually mean the eigenstates of the Hamiltonian. They could also be talking about measurement eigenstates--I'm not sure.
For the Hamiltonian, eigenvalues are, for Really Fucking Cool Reasons, energies. When we talk about "the two lowest quantum eigenstates", we mean the two states with the lowest energy. So maybe the circuit's eigenstates are, I dunno, 5 GHz, 6 GHz, 7 GHz, etc. We'd take 5 and 6 as our |0> and |1> states.
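For the curious, here's a toy version of "take the two lowest eigenstates" in numpy, with a made-up random Hermitian matrix standing in for a Hamiltonian (nothing like a real transmon's):

```python
# Toy "two lowest eigenstates": diagonalize a random Hermitian matrix
# (a stand-in Hamiltonian, NOT a real transmon model) and keep the two
# lowest-energy eigenstates as |0> and |1>.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2              # Hermitian => real eigenvalues

energies, states = np.linalg.eigh(H)  # eigenvalues come back ascending
ket0, ket1 = states[:, 0], states[:, 1]

# Sanity check: H applied to an eigenstate just rescales it by its energy.
assert np.allclose(H @ ket0, energies[0] * ket0)
print(energies[:2])   # the two lowest "energies"
```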
Each transmon has two controls
A control is a thing we can use to change the transmon.
a microwave drive
Something like the microwave in your kitchen, but very small, and probably expensive.
to excite the qubit
This probably means changing the qubit from |0> to |1>. Microwaves carry energy, right? That's how they heat food. If they microwave the circuit at the right frequency, that microwave energy probably helps it jump from a lower frequency/energy to a higher one.
and a magnetic flux control
This feels like something specific to transmons. Flux has to do with the density of stuff moving through a surface. Magnetic flux probably has to do with how strong and close field lines are in some part of the transmon machinery.
to tune the frequency.
How fast the circuit wobbles depends on a magnetic field, I guess?
Each qubit is connected to a linear resonator
Huh. So we've got nonlinear resonators (the qubits) connected to linear resonators (some sort of measurement device?)
used to read out the qubit state
We need a way to actually look at the qubits, and I guess the linear resonator does that. I assume that the linear resonator is isolated from the qubit during computation, and once the computation is over, it gets connected somehow, and vibrates at the same frequency as the qubit. That process probably "spreads out" the quantum state of the system, pushing it REAL CLOSE to an actual eigenstate of the measurement system, which looks like a probabilistic measurement of the actual qubit state.
Like... my cat could be 3/4 inside and 1/4 outside, so long as the room is really dark. If I turn on the light, suddenly my cat is coupled to a MUCH BIGGER system--the room, and that "quantum" state gets diffused into that larger system, in what looks like a measurement like "cat definitely inside". I don't know a simple way to explain decoherence, haha, but if you like math, try Percival's "Quantum State Diffusion".
Hope this helps, and I also hope I got at least some of this right. Maybe someone with a better/more recent command of QM can step in here.
> A qubit can be |1>, but it could also be (1/sqrt(2) |0>) + (1/sqrt(2) |1>). We call that a "cat" state, incidentally, because it's "half 1, half 0"--like Schroedinger's Cat, half alive and half dead. Again, the coefficients here are, in general, complex numbers, but we're gonna gloss over that.
In case anyone is interested in not glossing over this part, this  lecture by Scott Aaronson is an excellent introduction to the crazy world of complex probability amplitudes. It doesn't assume much more than some basic linear algebra, and does a good job of developing at least a little bit of an intuition for some of the concepts in Aphyr's comment.
You are correct about the coupling of the qubits, I was simplifying too strongly there.
I do stand by my scaling point (center qubits harder to reach) but I'm open for counter arguments.
Classical circuitry is an issue, but not as much as you think. What happened is that Martinis' group and others moved forward with a quick & dirty design which worked well for their device but can't be scaled (they basically didn't have the expertise the silicon people had). Nevertheless, it's not a fundamental problem; the circuitry for silicon-based spin qubits never had this problem, for example, and xmons won't either. They just keep iterating on the design as the number of qubits increases; it's the least of their worries regarding scalability. There are far bigger problems regarding scalability.
Do you have a paper I can look into that goes into the chip architectures in more detail (not specifically for this new device)? Otherwise I'll await the science / nature paper of this demonstration.
The classical circuit is mostly outside the cryostat, I assume, since it's GHz frequencies and LNAs are available. Do you know if the microwave readout signal is frequency-multiplexed to reduce cables?
Microwave signals are typically used for control rather than readout. I don't know if this experiment does it or not, and I am not an experimentalist. Multiplexing is more typically required for reducing timing errors of simultaneously driven signals (for better synchronization), and it really depends on the device and the mode of operation, plus whether the experiment needs it or not. The same experimental group sometimes does it in one experiment and not in another, despite using the same device.
Measuring a qubit in a basis collapses it to one of the basis vectors (e.g. Schrödinger's cat must be alive or dead once we open the box) with probability equal to the squared magnitude of its inner product with that basis vector. This is why we need a Hilbert space and not just any old vector space.
Finally, to answer your question about gates, a quantum gate is basically a unitary matrix, i.e. a matrix that preserves the norm of its inputs. You can feed qubits into these matrices by themselves or, more often, many at once, by using something called the tensor product of the qubits - this is where the math gets slightly more involved.
The long and short of it is that we can induce correlation patterns between qubits using these gates (aka quantum entanglement) and orchestrate circuits of interference patterns where the wrong answers cancel each other out and the right answer gets reinforced so that we measure it at the end - unfortunately, this is where my knowledge breaks down as a beginner. My apologies if I accidentally handwaved anything important but hopefully you get the gist.
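To make the gates-as-unitaries picture a bit more concrete, here's a hedged little numpy sketch using the Hadamard gate (the standard single-qubit example); applying it twice shows the interference idea, with the amplitudes on |1> cancelling out:

```python
# Hadamard gate as a unitary matrix; applying it twice demonstrates
# interference (the |1> amplitudes cancel, giving |0> back).
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
assert np.allclose(H.conj().T @ H, np.eye(2))   # unitary: norm-preserving

ket0 = np.array([1, 0], dtype=complex)
plus = H @ ket0        # equal superposition: measure 0 or 1 with prob 1/2
back = H @ plus        # interference brings us back to |0>

print(np.abs(plus) ** 2)   # ≈ [0.5 0.5]
print(np.abs(back) ** 2)   # ≈ [1. 0.]

# Two qubits live in the tensor (Kronecker) product: a 4-dim state.
two = np.kron(plus, ket0)
print(two.shape)           # (4,)
```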
And going beyond that, as Peres puts it, "Quantum phenomena do not occur in a Hilbert space. They occur in a laboratory."
I recently had a conversation about this in another thread. It seems to me, and nobody tried to convince me otherwise, that the Cat would be the Observer. Therefore it would be dead, NOT dead AND alive, as soon as it observes the poisonous gas in its box.
Side note, it's not that the cat is observing poisonous gas, but rather, that a Geiger counter is set to detect whether a radioactive atom decays or not and triggers the release of some poisonous gas if so. So, classically, Schrödinger's cat would be either alive or dead 50% of the time, not 100% dead. There are plenty of alternative ways to reconcile this classical view with quantum mechanics. Perhaps the simplest and most well-known is the many-worlds interpretation , which states that both events occur, just in different timelines, and we don't know what timeline we ended up in until we open the box. (Of course, it is ridiculous to speculate as to which timeline "we" end up in before the experiment is carried out, because the people in both timelines would still be "us" - this can get awkward to think about.)
It's an entirely new type of computing apparatus, using the fundamental state of particles. No electron circuits, those won't work.
And it's expensive because of the above. These things are massive, have to be kept at cryogenic temperatures, and isolation gets harder the more particles you have.
>> in an electronic circuit, what is a qubit?
It's not an electronic circuit, it literally is an electron or a photon. And it's using that particle's properties to do "superposition and all that jazz".
You can't put too many next to each other, because they start interacting; that's what electrons do when they get close.
The main problem is achieving full control of these systems, which is extremely hard, because there are certain things (some random/stochastic) that you can't control at all and you have to fight+race against their influence:
- qubits are tiny, and the energy splitting between these two states is typically minuscule: this means even a small vibration from a sneeze miles away can make the qubit flip.
- qubits do not live in vacuum; they are typically hosted in solid-state systems and are coupled to their hosting environment, which has its own moving parts (two-level fluctuators which lead to charge noise, phonons which also couple to electrons, typically via spin-orbit coupling, spinful defects in the material which have their own dynamics, etc.) that you can't really control. It's extremely difficult to achieve full control of a qubit in the presence of things that you can't control and that are random in nature.
- if the qubit is the lowest two levels of a system with higher energy levels, one also needs to worry about leakage errors into those higher states
- there are ways of suppressing the influence of such unwanted interactions (dynamically corrected gates + quantum error correction codes), given that their strength is below certain thresholds. Going below those thresholds is again an enormous engineering/materials-science problem (extremely low temperatures, isolation from vibrations, low/high-pass filters for the classical circuitry which is used to control/drive the qubits via electric/magnetic fields, design of the device itself which typically hosts two-level fluctuators, etc.). This problem generally becomes harder as you increase the number of qubits.
- to do anything non-trivial, you need more than one qubit and controllable couplings between them (so you can't space them too far apart, which would make it impossible to couple them). This doesn't work perfectly in practice; you can't completely control or turn off the couplings (a problem called cross-talk), which again leads to errors. So it doesn't work quite like modular classical circuit elements you can "copy & paste", because the abstractions over the low-level, nitty-gritty physics of the underlying material fail for all these qubits.
Okay, now what can we do with that?
"crack encryption by simulating a state!" yeah but what? is that something I should be concerned about now? "hahaha no no no silly normie we'd need two thousand qubits for that, this machine only has 53!" oooookay, and you did that number in your head, how??? "we just solved the first unsolvable problem that a mere bit bound supercomputer couldn't solve, look at this math formula!" but that didn't explain "we are celebrating, are you not celebrating"
There just seems to be a lack of non-introductory but non-PhD level information. Where is the "explain it like I've been accepted into college at all".
Now imagine 53 such boxes, interconnected by quantum gates. The 53 qubits combined are in all of 2^53 states at once. The gates can be set up such that some combinations like "cat 1 alive", "cat 2 dead", etc. are much more likely results than others, after the boxes are opened. And all this computation is done in one step, whereas the classical computer must do 2^53 steps to get the same result.
To have 53 cats all undisturbed in these dead/alive states so that computation is done without errors is very technically challenging :)
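Back-of-envelope on why 53 is already scary for classical simulation (my numbers, assuming one complex128 per amplitude):

```python
# How much classical memory a full 53-qubit state vector would take,
# assuming one complex128 (16 bytes) per amplitude.
n = 53
dims = 2 ** n                  # number of amplitudes
bytes_needed = dims * 16

print(dims)                    # 9007199254740992
print(bytes_needed / 2 ** 50)  # in PiB: 128.0
```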
“A new ontological category”.
What’s the problem?
this has been the best thing I've seen so far
If you have a collection of n qubits, the state of those corresponds to a unit vector in a 2^n dimensional space over the field of complex numbers.
You can do certain linear operations (iirc, only unitary ones. I wouldn’t claim all unitary ones either) to change the state. You can also do a measurement, which has a random outcome, following the Born rule.
So, it isn’t really just “each of them has more than 2 possible states”. That can be the case with something classical, and has been done before (there have been ternary computers. Binary computers won out. Ternary computers (or quaternary, etc. etc.) wouldn’t be particularly special.)
You can’t just think of each of the qubits as having a state always entirely independent of the rest of the qubits.
A bit is like a boolean type; it has the values true and false. Or you treat those values as 0 or 1, then gather a bunch of bits to build useful numbers.
A qubit is like a pair<number, number> such that these numbers MUST satisfy the following constraints:
|pair.left|^2 + |pair.right|^2 = 1
pair.left and pair.right can be any complex number
Remember that classical computers use a clock to flip bits over time.
A similar quantum computer would manipulate qubits instead.
A bit can take 2 distinct values, i.e. it stores one bit of information.
A qubit has the storage capacity of 2 complex numbers, which corresponds to 4 floats, which is at least 16*8 bits of information if we are conservative about our assumptions.
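A quick sketch of that constraint with the absolute values written out explicitly (the amplitudes are complex, so the condition is really |left|^2 + |right|^2 = 1; the example values are arbitrary):

```python
# Sketch of the qubit-as-pair idea: complex amplitudes whose squared
# magnitudes sum to 1. The values here are arbitrary examples.
left = complex(0.6, 0.0)    # amplitude of |0>
right = complex(0.0, 0.8)   # amplitude of |1> (purely imaginary is fine)

norm = abs(left) ** 2 + abs(right) ** 2
assert abs(norm - 1.0) < 1e-9   # valid qubit: |left|^2 + |right|^2 = 1

# Probabilities of measuring 0 or 1:
p0, p1 = abs(left) ** 2, abs(right) ** 2
print(p0, p1)   # ≈ 0.36 0.64
```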
> Is it made out of logic gates?
Kind of. Most of the current logic gates are built with semiconductors. It means by applying different voltages/currents/flux etc to different parts of a solid material, we can alter what we measure in some other part of the same material.
A quantum logic gate uses the same principle, but in order to achieve the desired speed and storage advantages it uses an object with a measurable property that at least approximately behaves like a quantum mechanical object. Common semiconductors are too densely packed with atoms to make good quantum materials. The atoms touch each other and are almost always exposed to air, and their operating temperatures are undesirably high.
A really dark (literally without a single photon) vacuum chamber that holds a really small amount of floating matter in the middle, frozen with lasers down to 0.000...1 Kelvin, would make a good example of a stable but expensive qubit. We can measure this qubit by destroying its state, i.e. by applying a magnetic field and measuring the emitted photon's location, polarization or frequency. The problem is, copy-pasting this device to build a circuit is really hard due to the logistics and auxiliary machinery required to keep all the state stable.
> How come our current limit is only around 54 or so?
That's not a fundamental limit but an engineering one, due to the issues I mentioned above. The bigger the device gets, the harder it is to maintain its stable state. The method currently used for reaching this limit really looks like what was used in the 60s. History is repeating itself with a small twist: the running temperature is extremely low this time. Not liquid-nitrogen low, but compressing atoms by sniping them from a distance with lasers from multiple directions low.
There is also one more problem that is unique to quantum computers. You have to measure the same qubit multiple times to be able to read those complex numbers, since their values are determined statistically. You either represent a qubit with multiple qubit-like devices, or you use the same device and repeat the measurement. Each approach comes with its own drawbacks.
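A toy illustration of the repeated-measurement point: a single shot gives you a 0 or a 1, and only averaging many shots recovers the underlying probability (never the amplitude's phase). The numbers here are made up:

```python
# Estimating a measurement probability by repetition: each shot yields
# 0 or 1; only the average over many shots reveals p1 = |amplitude|^2.
import numpy as np

rng = np.random.default_rng(42)
amp1 = np.sqrt(0.7)         # made-up amplitude of |1>
p1 = amp1 ** 2              # Born rule: probability of reading 1

shots = rng.random(100_000) < p1   # simulate 100k destructive readouts
print(shots.mean())                # ≈ 0.7
```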
> Are qubits, and quantum computers by extension, not even electronic circuits?
Even today, non quantum circuits are sometimes non electronic in some of their parts. Fiber optics, supersonic emitters/receivers and photoelectric sensors are good examples.
To this day, it is not clear whether the first consumer quantum CPU will be entirely electronic or not.
This whole explanation makes qubits sound just like a combo-pack of bits, or like something you'd find in an analog computer. They're much more powerful than that because they can be put into superpositions and entangled together while you continue to do calculations with them. Using superpositions and entanglement across qubits can let you solve some problems with lower-complexity algorithms than you could with classical computers.
It's like saying a car is defined by four tires being in close proximity, while skipping the fact that the usefulness of a car comes from the tires being connected together and steered which lets you do things that no amount of unconnected tires could do.
The fourth float would seem to be almost totally determined by the first three. If I'm visualizing correctly, there are at most 2 values it could possibly be.
A global phase is unobservable, so (a + bi) amplitudes are physically indistinguishable from e^(iθ)(a + bi) ones. Deciding the value of b immediately reduces the number of possible values for a to 2, namely a and -a. Luckily, the real number line is continuous. We can accept such sacrifices without too much precision loss.
Also I should note that my calculations assumed a pair of maximally entangled particles per quantum circuit element since that's the most straightforward way to harvest quantum information with minimum number of objects.
This kinda sounds like translinear circuits, except a couple orders of magnitude more boutique.
> Quantum physics forbids copying a qubit to create another, but you can initialize them en masse to be in a superposition state. Those things are too tiny to be easily manipulated so 53 is quite an accomplishment.
Quantum physics forbids copying the value of a qubit, but the poster was asking why we couldn't just make more of the device that implements a qubit. The big issue is that you want the qubits to be entangled together and it's hard to prevent decoherence as you make a larger device with more qubits.
This is a very good question! And as far as I can tell, no one has actually answered it yet.
In classical physics, which suffices to explain circuits made of vacuum tubes, the state of a system is fully captured by the states of its parts. Like, if you want to know the state of three bits, I just have to tell you what the first bit is (0 or 1), and the second bit, and the third one. Basically everything we interact with has this property: if you fully describe the state of each part of a thing, you have described the state of the whole thing.
But quantum mechanics is... weirder than this. In quantum mechanics, to describe the state of a system you have to give one complex number per classical state, such that the sum of the squares of the absolute values of the complex numbers adds up to 1. These complex numbers roughly correspond to the "probability" that the system is in that state (but not quite, it's more complicated than that).
So in quantum mechanics, to describe the state of three bits, you have to give eight complex numbers n1,n2,...,n8, one for each of the classical states of the bits 000, 001, 010, 011, 100, 101, 110, 111, where the sum of the squares of the absolute values of n1,...,n8 adds up to 1. That's a lot more information than 3 bits. (Imagine if you had 54 bits... you'd need 2^54 ~= 10^17 complex numbers to describe them.)
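To make the counting concrete, here's a tiny sketch (my own, in plain Python) of how the description size blows up:

```python
import itertools

def classical_description_size(n):
    # n classical bits: one value (0 or 1) per bit
    return n

def quantum_description_size(n):
    # n qubits: one complex amplitude per classical basis state
    return 2 ** n

# 3 qubits need 8 amplitudes, one per basis state 000..111
basis_states = [''.join(bits) for bits in itertools.product('01', repeat=3)]
print(basis_states)                   # ['000', '001', ..., '111']
print(quantum_description_size(3))    # 8
print(quantum_description_size(54))   # 18014398509481984, i.e. ~1.8 * 10^16
```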
Technically, everything in the whole world, including you, is described by the laws of quantum mechanics. So why don't we see weird quantum effects all of the time? Quantum systems are very fragile: whenever they interact with the outside world (say a photon from the air bounces off of something in the system), the system "collapses", and then behaves as classical physics would predict. (Note that this is in accordance with the quantum prediction. The system goes from "only describable using difficult quantum mechanics" to "describable using quantum mechanics, but it'll just say the same thing as classical physics, and classical physics is simpler so you should just use that".)
So here's what a qubit is: it's just a regular bit that has been so insulated from the outside world that classical physics doesn't suffice to describe it. You won't find qubits on a regular circuit board, though, because they'd interact with the circuit board, and then you're done.
And this is why making a 54-qubit quantum computer is so hard. You need to keep all of the qubits isolated, because if any of them interact with the outside world (think the air in the room, or a single photon, or the substrate that the qubits are on), then the whole system "collapses".
(I'm not an expert in this area, so the following may be incomplete, or limited to only certain kinds of quantum computation, or worse.)
The kinds of computation that qubits can beat a classical computer on require that the qubits be entangled. If the qubits are not entangled, you can't do better than regular bits.
Briefly, if two (or more) quantum systems are entangled, and you make certain measurements that have a random outcome on one of the systems, and then measure the same property on the other system(s), there will be correlations that you would not get if the systems were not entangled. The entangled systems act as if, whenever you measure one and it randomly chooses a value for the property you measured, that result is somehow communicated to the other entangled systems, and they make sure that when they are measured they will give results with the appropriate correlation.
You might think that this could be explained if the systems had some internal variables that were set when they became entangled that determined what "random" values they would pick when later measured, but there are experiments that have shown that this is not so. The systems are truly making their random choice at the time of measurement as far as we can determine.
This happens even if, after you entangle the systems, you separate them by a great distance--so far that there is no time for any communication between the systems between the two measurements (or, rather, no time for any communication limited by the speed of light--I believe there have been experiments showing that IF there is communication, it is at more than 10000 times the speed of light). (This communication, or whatever it is, cannot be used to send messages faster than light. All it can do is make the correlations work out for entangled systems.)
Anyway, the thing about entangled systems is that as soon as you make a measurement of the entangled property, you lose the entanglement. Your particle that had entangled spin, say, with another particle and that was 50/50 whether it was spin up or spin down becomes, once you actually measure spin, a non-entangled particle whose spin is a concrete value, either up or down. Measure it again, and you get the same value.
When I say "you make a measurement", I don't specifically mean you, or any other human, or any human instruments. For purposes of quantum mechanics, a measurement is anything that makes the system reveal a value. So if you have a particle with entangled spin with some other particle, and some random passing particle happens to interact with yours in a way that depends on the spin of your particle--that's a measurement and you've lost your entanglement. (Your particle might now be entangled with that random passing particle, but it is no longer entangled with the particle you intended it to be entangled with).
If you are trying to do a quantum computation on 50 qubits, you have to get them all entangled, and then you have to keep them from interacting with anything that might inadvertently do a measurement long enough for them to do their quantum computation, after which you can measure the result (which finally ends the entanglement).
This turns out to be hard, because there are a lot of things in the universe that want to effectively do a measurement. Any random particle bumping into one of yours with too much energy can do it. Some random passing electromagnetic wave can do it. The more things you need to entangle and keep entangled, the harder this is, and 50ish is the current limit.
An observation about quantum behavior is that there are only certain properties that you can measure for the really, really small. These include things like mass (total energy), charge (intrinsic amount of electromagnetic strength), and spin (willingness to change direction in the presence of an electromagnetic field). It turns out that when you measure these properties, the measurements behave in non-intuitive ways.
The act of measuring the spin of a particle (which could be in any direction) is really the act of asking, "is this particle aligned with my detector?" The result will always be either aligned up or aligned down. It will be randomly about 50/50 up and down, also. This is not that surprising because the spin must align one direction or the other. The crazy part comes with the fact that you can entangle two particles.
Entangled particles can be sent off through different detectors and one thing will always be true: while any particular outcome is random, the detectors will always generate opposite results. The temptation is to say, "well, they were generated from the same source, so they just have opposite starting positions." Long story short, this has been proven not possible. Instead, there is some fundamental behavior in quantum mechanics that says that there are certain types of activities with entangled particles that have a correlation that holds as long as the entangled particles are not disrupted. In this case, particles sent to separate detectors will always have opposite results.
A quantum computer uses these correlation truths about measurements to do computations. A qubit is the concept of a quantum bit: an entity that represents one unit of entanglement. Just as with classical bits, there are many different ways to implement a qubit: you can entangle photons, electrons, and whole atoms. Each of these systems requires a specific implementation, just like electronic or mechanical computers.
Remember that one detail about "if they are not disrupted"? Yeah, it turns out that it takes a hell of a lot to create an environment that doesn't destroy the coherence of the entanglement. You have to design something that allows you to set up the particles in a starting state, hold those particles in an entangled state with no disruptions, and have a detector to determine the final state. Quantum mechanically, these are generally opposing goals.
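As a sketch of the "always opposite results" correlation described above (my own toy statevector simulation in NumPy, not a model of any real device): prepare two qubits in the singlet state and measure both. Each individual result is 50/50 random, yet the pair always disagrees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Singlet state of two qubits: (|01> - |10>) / sqrt(2).
# Amplitudes indexed by the basis states 00, 01, 10, 11.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def measure_both(state):
    """Measure both qubits in the computational basis."""
    probs = np.abs(state) ** 2
    outcome = rng.choice(4, p=probs)
    return outcome >> 1, outcome & 1   # (first qubit, second qubit)

results = [measure_both(singlet) for _ in range(1000)]
# Each qubit is individually ~50/50, but the two always give opposite values.
assert all(a != b for a, b in results)
```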
>Under some people’s definitions of “quantum computational supremacy,” they do! The key difference with Google’s effort is that they have a fully programmable device—one that you can program with an arbitrary sequence of nearest-neighbor 2-qubit gates, just by sending the appropriate signals from your classical computer.
>In other words, it’s no longer open to the QC skeptics to sneer that, sure, there are quantum systems that are hard to simulate classically, but that’s just because nature is hard to simulate, and you don’t get to arbitrarily redefine whatever random chemical you find in the wild to be a “computer for simulating itself.” Under any sane definition, the superconducting devices that Google, IBM, and others are now building are indeed “computers.”
This is the core of it to me. It's a question of 'some people’s definitions of "quantum computational supremacy."' Many people say that the definition here is a crappy one, and sure this shows a form of it, but not the kind to justify the hype. Sure, it's fully programmable, but not so programmable as to do anything that anyone cares about (even a teeny tiny few-bit version of something people care about) better than we can otherwise.
To appeal back to his analogy to the Wright brothers, it's like they carved a frisbee from a stick while working towards airplanes. It's amazing that they carved a log into a neat shape you can throw further than another log, and the hype train is saying that it's fully carve-able, so it counts as "airplane supremacy," but that's a crappy definition and not what we're waiting for.
This proves the underlying principle of a quantum speedup is a physical reality. It might be something people already take for granted if they're interested in factoring, but it's a crucial prerequisite that was not yet irrefutable.
I'm pretty sure OP is talking about an actual, physical, exists-in-reality usage of a quantum computer to do anything meaningful, not theory. There's lots of theory.
Here is Gil Kalai's post from yesterday: "Quantum computers: amazing progress (Google & IBM), and extraordinary but probably false supremacy claims (Google)." https://gilkalai.wordpress.com/2019/09/23/quantum-computers-...
> Scott is correct that inability to achieve quantum supremacy is quite central to my argument (since 2014), so naturally I don’t expect that the recent claims by the Google team will stand. Of course, if these claims (or any other quantum supremacy claim) are correct then this would defeat my theory. It goes without saying that the claims are so fantastic that also responsible believers in quantum computers should examine these specific claims (like an alleged NP ≠ P proof) carefully and skeptically. Also, it would be nice to hear some details about the precise claims and methodology of the Google team.
I saw this happen with string theory first hand, and my experience with string theory was a major factor that led to changing paths to pure mathematics and computer science.
As someone who has spent a lot of time researching QC, I do not believe it will ever be possible to build a practical quantum computer. We have been over this so many times on this site. Here is a good link from a serious professional who takes the same position 
In fact, my personal opinion is that quantum computers are, functionally, a hoax, the main purpose of which is to generate hype, secure research grants, ensure career stability for academics, and give science reporters something to write about to get clicks while deceiving the public.
You're implying that Scott Aaronson is being deceptive.
I think that needs better evidence. The story you linked to is two years old and it's about QC skeptic Gil Kalai.
Scott addresses him directly in his post:
> If quantum supremacy was achieved, what would it mean for the QC skeptics?
> I wouldn’t want to be them right now! They could of course retreat to the position that of course quantum supremacy is possible (who ever claimed that it wasn’t??), that the real issue has always been quantum error-correction. And indeed, some of them have consistently maintained that position all along. But others, including my good friend Gil Kalai, are on record, right here on this blog predicting that even quantum supremacy can never be achieved for fundamental reasons. I won’t let them wiggle out of it now.
There is no way to argue against this kind of goal post moving style of debate because even if another 20 years go by and QC still don't exist on a practical level, the goal posts will just keep getting moved.
I am extremely confident that this will continue until the public gets bored of hearing about it.
IMO, this is a reasonable position to take. However, we have to accept that even if a universal QC is possible, "how long" could conceivably be centuries.
Our perspective with regard to technology has been significantly biased by our experience with Moore's Law/Dennard Scaling. The speed of progress in ICs from 1960 to 2010 (and especially 1980-2000) is an outlier in the history of science and technology, yet we base expectations in everything from AI to QC on it.
Yet I have no trouble believing in this quantum supremacy at the current scale.
The way I see it, a quantum computer is "running" an algorithm like Toshiba's "simulated bifurcation", at a frequency much higher than current computers.
If I had to guess, it's like running classically at the Bohr frequency of an electron, ~10^16 Hz. So it's kind of like trying every combination with some heuristics. Currently we are tapping into these six additional orders of magnitude of computation, which allows a brute-force search to appear instant. But once we have used up this 10^6 speed factor, the quantum computer will miss a lot of the solutions and you will have to increase the sampling time exponentially to find your solution. We are currently in the exponential part of the S curve.
You've got to understand that these machines are expensive to build, so putting a cap on the performance won't help you raise funds. But these machines are beautiful. They are marvels of engineering. There are plenty of things to discover. They need to get built.
They will bring plenty of useful techniques in how to manipulate and build very small things, and more importantly in how to compute without releasing so much heat. But those advances will probably get locked into a few private companies by monopolizing the few big names that can lend the credibility required to raise funds.
So technically Google cannot claim to reach quantum supremacy with only 54 qubits as described in the paper.
Q: Why can the D-wave not be used to illustrate “quantum supremacy” in a similar way?
(As I understand it, the D-Wave can sample from the solutions to "Ising-model-like" problems, which I assume would be extremely difficult for a classical computer to do (but probably possible to verify).)
As for the Ising model, D-Wave didn't outperform the classical algorithm once that algorithm was tuned (it was a new problem, so the existing classical algorithm wasn't the best possible), see https://arxiv.org/abs/1401.1084; there are also reasons to suspect that the Ising model will not provide any quantum speedup, see https://arxiv.org/abs/1411.5693.
To answer your question specifically: D-Wave does not allow the same level of control over qubits.
Quantum supremacy in any reasonable sense of the word supremacy is a long way off.
> [quantum computing supremacy]
term refers to the use of a quantum computer to solve some well-defined set of problems that would take orders of magnitude longer to solve with any currently known algorithms running on existing classical computers
.. and continues to explain why this setup does exactly that.
But comparing it to manned flight or the first nuclear reactor... In those two cases there was a clear path to something very useful. In my mind this experiment changes very little as to how probable it is that we will soon have useful quantum computers, or even that we will ever have them. I guess that’s another question for Scott’s list:
Q: If what we care about is computing solutions to difficult real world problems, in what way is this a meaningful milestone?
The first flight was two people in a plane, and it was over in seconds. I could have ridden my horse more quickly over that distance! How is this a meaningful milestone?!
Well.. it demonstrates something that cannot be done on a horse (classical computer) but that can be done with a plane (quantum computer).
Is the application useful today? Probably not. But that shouldn't discourage anyone.
Not a lot of people directly own & operate their own airplane, let alone a nuclear reactor, and yet both have had a significant impact on modern life. As for this experiment & QC, only time will tell of course.
I completely agree with this statement. This is mainly about the Extended Church-Turing Thesis.
That is still clearly quantum supremacy, even if Shor's algorithm is not yet implementable.
(Seriously though, why the hell did those VCRs have such a large green board??)
This seems more comparable to the moon landing or LHC Higgs-Boson discovery. Challenging to pull off once, and the next steps seem to increase exponentially in man-power and cost.
Does it only do certain kinds of simulations well, i.e. simulations of quantum systems, or can that be generalized? Can it be applied to economic, environmental, traffic, etc as well?
Our current understanding is that quantum computers won't offer a speedup for the simulation of nonquantum systems. The only simulations they'll be faster for are systems for which quantum effects are important.
Of course it's possible that someone will discover an algorithm that gives quantum computers an exponential speedup in the simulation of any system. But I think that's pretty unlikely, because it would imply that quantum computers were exponentially faster than classical computers for every problem, because you could just use your quantum computer to simulate a classical one.
The speedup from Grover's algorithm is essentially universal; it's just not an exponential speedup. So, for example, in your non-quantum simulation task, say you want to search for an input to a cellular automaton that makes it spell your name after 5000 timesteps. You can use Grover's algorithm to find that input with sqrt() the number of simulation runs that a classical machine would need (and perhaps fewer, since there are likely multiple solutions).
To me that's one of the most interesting things about quantum computation: There is a very broad class of things where it offers a modest speedup (sqrt) and a narrow (but useful!) class of things where it offers an exponential speedup. But if you imagine almost any kind of 'superpowered quantum computation'-- something that is a little more powerful than the laws of physics as we know them allow-- you almost always get an exponential speedup for everything (or even more absurd results, like every problem being solvable in constant time). QC has this interesting property of being just strictly more powerful than classical computing, but without being magic-instant-computing.
[Which is also why many of the common incorrect descriptions of quantum computing are sad, stuff like "testing all values in parallel"-- if it worked like that description it would be magic instant computing.]
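To make the sqrt() speedup concrete, here is a toy statevector simulation of Grover's algorithm (my own sketch in NumPy; the problem size and marked index are arbitrary, not from any real experiment):

```python
import numpy as np

def grover_search(n_bits, marked):
    """Toy statevector simulation of Grover's algorithm."""
    N = 2 ** n_bits
    state = np.ones(N) / np.sqrt(N)               # uniform superposition
    iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                        # oracle: flip marked amplitude
        state = 2 * state.mean() - state           # diffusion: inversion about the mean
    return iterations, int(np.argmax(np.abs(state) ** 2))

iters, found = grover_search(10, marked=123)       # N = 1024 items
print(iters, found)  # ~25 oracle calls, versus ~512 classical probes on average
```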
I'd view this as using fewer simulations, rather than doing the simulations faster.
Incidentally, I happen to be the author of some software for finding CA predecessors: https://github.com/OscarCunningham/logic-life-search/. Good luck getting it to do 5000 or even sqrt(5000) generations though! It's SAT solver based, so as soon as someone does invent a quantum SAT solver it'll plug right in.
> [Which is also why many of the common incorrect descriptions of quantum computing are sad, stuff like "testing all values in parallel"-- if it worked like that description it would be magic instant computing.]
To be fair, many quantum algorithms do begin by preparing a superposition of all possible inputs and then applying the unitary corresponding to some classical function. The difficulty arises when you have to extract information from the resulting output superposition.
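For example (a toy NumPy sketch of my own), the standard first step builds an equal superposition of every input with Hadamard gates; measuring at that point would just yield one uniformly random bit string, which is why the hard part is what comes after:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate

def hadamard_all(n):
    """Apply H to each of n qubits, starting from |00...0>."""
    op = H
    for _ in range(n - 1):
        op = np.kron(op, H)
    state = np.zeros(2 ** n)
    state[0] = 1.0                             # |00...0>
    return op @ state

psi = hadamard_all(3)
print(psi)  # all 8 amplitudes equal 1/sqrt(8): every input is "present"
# Measuring now just picks one uniformly random 3-bit string; the "parallel"
# branches exist, but extracting anything useful from them is the hard part.
```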
> ...It is by no means assured that our national security sector will be able to attract on a sufficient scale the scarce engineering, mathematical and scientific talent that would supply the necessary expertise. That challenge will require investment, enlightened strategic management and an innovative approach to luring a different type of expert out of the private sector into government. Meeting this challenge will require a greater reliance in general on the private sector, since government alone does not possess the requisite expertise. A large portion of the intelligence community’s experts on the military capabilities and plans of Russia and China joined government during the Reagan administration; other experts on counterterrorism and new technology burnished their technical skills following the Sept. 11 attacks. Many of those experts are nearing retirement or have already left to join an attractive private sector. With millennials believing that technology in the private sector now allows them to help change the world — previously the idea of a mission had been largely the province of public service — it is not clear that the intelligence community will be able to attract and retain the necessary talent needed to make sense of how our adversaries will make use of the new technology...
> ... the government no longer possesses the lead in complex technology, at least in many areas relevant to national security. Arguably, the most powerful computing and sophisticated algorithm development now occurs not in the Pentagon or the N.S.A. but in university research labs and in the Googles and Amazons of the commercial world. (To be sure, the government still maintains its superiority in important areas ranging from nuclear energy to cryptography.)...
> ... our national security agencies for the first time must amass the talent and systems to understand not simply a military challenge but also challenges across a broad range of technology and global finance issues. The capacity for such understanding currently resides principally in the private sector and our universities, not the federal government.
"I’m trying to understand the chain of inference from Google’s leaked result of quantum supremacy to theoretical computer-science “hardness” of the computation.
Computing the exact probabilities of a random quantum circuit is proven hard, but computing the exact probability of a random algorithm is also an open problem, so what you really care about is approximation of computing the probability (up to epsilon), or even weaker, just sampling from a probability (also up to epsilon under some metric of comparing distributions).
Their computer implements Random Circuit Sampling, and their cited “theoretical” hardness results are your paper from 2017, of QUATH => HOG, and as far as I understood from your paper, proving that approximation / sampling from a (random) quantum computer/circuit is hard is still an open problem (Am I wrong here? I’m not up to date with everything), and a difficult one. But you did make a compelling argument that even if QUATH was solvable, it will lead to new insights.
Their actual benchmark uses cross-entropy benchmarking, called XEB in their paper, and defined as 2^n · ⟨P(x_i)⟩_i − 1, averaging over the observed samples x_i. I could not find any ‘theoretical hardness’ paper at all using this benchmark; some results talk about a different XEB using log, but they prove these are not strong enough and can be reached classically.
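For concreteness, here's a toy sketch of how that linear XEB quantity behaves (my own, in NumPy, with a made-up Porter-Thomas-like "ideal" distribution standing in for a real circuit's output): a sampler that matches the ideal distribution scores F ≈ 1, while a pure-noise (uniform) sampler scores F ≈ 0.

```python
import numpy as np

rng = np.random.default_rng(1)

def xeb_fidelity(p_ideal, samples):
    """Linear cross-entropy benchmark: F = 2^n * <P(x_i)>_i - 1."""
    n_states = len(p_ideal)
    return n_states * p_ideal[samples].mean() - 1

# Fake "ideal" output distribution for a 12-qubit random circuit:
# squared magnitudes of Haar-ish random amplitudes (Porter-Thomas-like).
amps = rng.normal(size=2**12) + 1j * rng.normal(size=2**12)
p_ideal = np.abs(amps) ** 2
p_ideal /= p_ideal.sum()

good = rng.choice(2**12, size=200_000, p=p_ideal)   # sampler matches the ideal
bad = rng.integers(0, 2**12, size=200_000)          # uniform-noise sampler

print(xeb_fidelity(p_ideal, good))  # close to 1
print(xeb_fidelity(p_ideal, bad))   # close to 0
```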
Are there any hardness results for their benchmark? I wonder why they would use that instead of the proven HOG, even for smaller input sizes; I would see more value in a benchmark which has theoretical roots. I feel intuitively like their benchmark is much more similar to the log variation of XEB than to HOG, but I haven't thought it through completely.
As for their gates, I understand they are not general random quantum gates; instead they have a variation of iSWAP and the controlled-phase gate CZ. The hardness results that do exist also don’t address the limited gate set, or how it changes the distribution of random circuits. Are those two gates at least universal, so that we have hope that this could be proved? Their statement was “but reliably generating a target unitary remains an active area of research”, so I assume they aren’t universal. I would love to see some heuristic argument as to why those two in particular are hard, especially given results like the Gottesman–Knill theorem; it seems like some surprising gates do have classical simulation. iSWAP is just swap and phase gates, and CZ is a phase gate only on the |11> state. Doesn’t feel like it could be universal to me, but maybe I’m wrong.
I probably missed some things, so I’d love it if you could point to papers filling the gaps between their results and a real theoretical statement. I don’t have the intuition to tell which gaps are important and which are not. I also don’t know which papers/results already exist and I can’t really search as I am not an academic, and don’t have full access to many papers, and you probably know the state of the art results.
- CZ is a perfect entangler. No, iSWAP and CZ do not form a universal set. They typically combine virtual Z rotations (which have a continuous angle parameter) with an additional one-qubit gate, such as an X or Y gate (or their square roots, which they mention in the paper), which gives a universal set. Although they don't make use of continuous Z rotations in that particular experiment, the device is capable of doing so, and is a (noisy) universal quantum computer.
It's the "any initial condition" part that is hard. You cannot put a cat, or even a small molecule, into any one of its possible initial states, let it evolve, measure the outcome, and get reliable answers. (If you could, that would make it a computer! That's all a computer does, assuming the initial states and evolution are rich enough to, e.g., be Turing complete.)
> The computer is programmable, but the way supremacy is demonstrated is orthogonal to its programmability
The fact that the problem posed by the challenger C is "run an arbitrary quantum circuit" is in fact directly connected to the programmability.
1 - cat having its ears tickled
2 - cat watching something move under a sheet that might be a mouse
3 - cat waiting for tin of cat food to be opened
4 - cat wanting to get through a door
etc etc up to 'N'
So a “challenger” generates a random number C between 1 and N, and the challenger then sends C to me and my cat, and I apply the appropriate 'input' to my cat, tickling her ears or opening a tin of food or whatever, to get her into the correct mode. We then measure her behaviour, fire up our cluster of computers, run the simulation, and wait for a few months for the numbers to get crunched to verify that the cat's behaviour was within the expected probability distribution for C.
If you can solve complex computational problems faster than a cat-sized computer by carefully arranging tins of cat food, it would definitely be fair to say your cat computational supremacy is a significant achievement.
Note that the "random series of gates" is generated on a classical computer, and then the quantum processor is set to that configuration. It's not that you turn on the quantum processor and whatever random uninitialized state is it in.
It's not told to arrange its qubits into a random series of gates. It is told to arrange them in a specific order that was chosen at random.
It is programmable. But it is as if you had to program in Malbolge, except worse, due to the hardware constraints and lack of error correction.
I'm not sure how to evaluate the question of whether it could do the Fizz Buzz algorithm. Could it run fizzbuzz.c? No, it doesn't have an OS. Could it perform a sequence of operations isomorphic to "count to 100 by 3s and 5s"? It sounds like Aaronson's answer would be yes, but I think you'll be skeptical (and I am too) about whether that really means anything.
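For reference, the classical computation in question ("count to 100 by 3s and 5s") is trivial on any conventional machine; a minimal sketch:

```python
def fizzbuzz(n):
    """The classical computation under discussion: count by 3s and 5s."""
    out = []
    for i in range(1, n + 1):
        s = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
        out.append(s or str(i))
    return out

print(fizzbuzz(15))  # ends with "FizzBuzz" at 15
```

The question is whether a sequence of quantum gates "isomorphic" to this loop counts as running it, which is exactly the definitional dispute above.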
I question this. All of this.
"Always naysaying! Everything I create!"
Uh, there are birds. Literally everyone thought useful air travel was possible, and not only possible, but so easy that a Darwinian process was able to produce it, not once, but literally thousands of times, in thousands of ways.
But looking at the actual "experiment", I don't count it as computation in any meaningful sense; it's morally equivalent to the following:
Set up a digital camera and point it at a scene. Take a picture. Now take millions of additional pictures, without moving the camera.
Measure the values at each pixel. See how they correspond to "amplitudes"? That's our computation. <= (This is the part of the article that should raise eyebrows...)
Article (smugly): How about you simulate (render) the scene using an actual computer? Measure the amplitudes of the resulting millions of images. OMG, that took you so long!!! Loser.
Are we being punk'd here?
The digital camera is the quantum computer. The "scene" is the random initial state C. The scene is then translated into a renderable scene for the classical computing version, and rendered with an unbiased physically based renderer, which produces the same result.
I fail to see how any of this is even remotely exciting, much less interesting. A camera is not a computer, no matter how many measurements you make with it, nor how "random" the scene you are taking pictures of is.
And yes, simulating reality takes more cycles than just measuring whatever happened with a camera. Photos are "faster" to get than the equivalently rendered scene—news at 11!
Again: Are we being punk'd here?
You can find other contemporary counter-examples too, for instance nuclear fusion. We know it's possible, it exists in nature, we managed to do it under very special conditions but it's still unclear whether it's actually possible to build a competitive, safe commercial nuclear fusion reactor.
Imagine a bunch of magnets. Imagine forcing them into a "frustrated" configuration; maybe you have them all on a grid, and you have servomotors that can rotate them to face any way you like. The servomotors are strong, so counteracting the magnetic forces is easy for them. You design an appropriately frustrated configuration, and then release all of the magnets at once. What configuration do they rotate into?
A quantum annealer is conceptually similar. Each qubit, on a regular, patterned graph, has connections to its neighbours. You can leave these alone (no correlation) or tune them up to +/- 1, corresponding to "must be the same as this other qubit" and "must be different than this other qubit". You can also bias each individual qubit to be a 0 or a 1.
Then, you let it go, and it anneals, and you observe the result. Your goal is to get to the _lowest energy state_ possible: the least possible frustration remaining.
In our magnet example, it would be as few magnets as possible wanting to move - if you poked them with your finger they'd want to go back into their current state. You could imagine that your magnets might not get down to their absolute lowest energy state; maybe it would take too much energy to flip from their starting state to that lower state. In a quantum system, because of tunneling, the system can reach these lower ground states. Rather than being in a fixed position the way our magnets were, qubits are in a quantum superposition, so they can reach a lower energy state without having to climb up that energy hill. Or so we're led to believe by the numbers, anyway; I'm not a physicist.
Now, if you can map some useful computational question onto the original configuration of qubits that is answered by the ending position, you've got yourself a useful quantum computer. This is the hard part! The key is to use optimization algorithms where a lower energy state = a more optimized result. If you can do this, there's a ton of employment waiting for you.
Then, if you want "quantum supremacy", it's a matter of providing more optimized answers in less time, particularly as the problem scales up in complexity. There does indeed appear to be a crossover point coming in a decade or so, at least for the small class of real-world problems that the Ising Hamiltonian works for.
Yes, I agree. But this has not been demonstrated. What's being demonstrated (apparently) is that measuring a quote-unquote "quantum computer" doing whatever it does naturally is easier than simulating said quantum computer classically. Well, yeah. Duh.
That's the same thing as "rendering" a scene with an unbiased renderer vs. setting up that same scene in reality and using a camera. No one in their right mind would point to the camera and say they'd created a next-gen, heretofore impossibly fast unbiased rendering algorithm.
Technically, the digital camera is "computing" the same result—but no one would call it that, and IMO, the same is true of what is being discussed in the FAQ. It's literally NOT COMPUTATION, which brings us back to your line:
> Now, if you can map some useful computational question onto the original configuration of qubits that is answered by the ending position, you've got yourself a useful quantum computer. This is the hard part!
It's not only the hard part, it's the only part that matters. Until then, you have AT BEST a "quantum camera". Potentially useful, perhaps—but it's not a computer, or computation.
Anyway, thanks for responding. Much better than drive-by downvoters probably hoping to get PhDs in this stuff.
Also, my intent is not to belittle Google's engineering effort. In the same way that I wouldn't belittle Sony for making 24mmx36mm backlit CMOS sensors. It's impressive! Good for them. But it's not computation, and it definitely doesn't establish some kind of "quantum computing supremacy" (since no meaningful computation is being done). When they stop handwaving about mapping actual computation problems to the scene they've set up and are measuring, then I'll get excited. Maybe it's doable, maybe not. But a quantum camera, AT BEST, is one step along the path...
The experiment used a classical computer to randomly generate a circuit C, told the quantum computer to execute it, and recorded the result. Then they repeated this, but simulated each circuit in the most efficient way a classical computer can. Finally they compared the distributions to verify that the quantum computer's results matched the correct classically computed results.
This proves that the quantum computer is able to generate that distribution faster than a classical computer and isn't just doing some other spurious process that happens to be faster.
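For reference, the distribution comparison in that last step is the linear cross-entropy benchmark (XEB) used in the Google paper: score each observed bitstring by the probability the ideal classical simulation assigns it. A simplified sketch (the real pipeline spends its classical compute getting ideal_probs for large circuits):

```python
def linear_xeb_fidelity(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark score.

    samples:     bitstrings observed from the device
    ideal_probs: dict {bitstring: probability} from classical simulation
    Scores near 1 indicate sampling from the ideal circuit distribution;
    near 0 indicates uniform noise (the device learned nothing).
    """
    mean_p = sum(ideal_probs[s] for s in samples) / len(samples)
    return (2 ** n_qubits) * mean_p - 1
```

If the ideal distribution is uniform, any sample scores exactly 0, which is why structured (Porter-Thomas-like) output distributions are what make the check meaningful.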
Yes, this random distribution isn't very useful and so this result is only really interesting as an experimental verification of theory. However, it is a computation that benefits from quantum speedup in real life!
Hopefully, algorithms will soon be found to generate more useful distributions (since, for the time being, sampling seems to be the only practically doable application of this type of QC). For example, Aaronson mentions that generating verifiably random bits is not much more difficult than the sampling done in this experiment, and could have an impact in a variety of cryptographic applications.
[rewriting your words] The experiment used a classical computer to randomly generate a scene C, set that scene up in real life, and recorded the result. Then they repeated this, but rendered the scene in the most efficient way a classical computer can. Finally they compared the results to verify that the scene in real life matched the correct classically computed results.
The scene + digital camera has the exact same role (and proof value) in my hypothetical experiment as the quantum computer + measurement device does in the Google experiment.
It's not the camera that "contains the circuit"; it's the scene and the camera together that compute the same values (exactly, as it turns out) as a classical rendering algorithm simulating the same phenomena.
Call it "camera computing", write a paper in Nature, win a Turing Award. The hard part with camera computing is the same: how to map a computation onto the generated scene so that the measurement device (the camera) gets a meaningful result faster than it can be simulated on a computer.
Look, I've worked on things for a long time only to discover in the end that there's nothing there. It sucks, I get it. Move on, try something else. There's nothing here.