IBM's critique https://news.ycombinator.com/item?id=21333105
Scott Aaronson's blog https://news.ycombinator.com/item?id=21335907
Note that the "extended Church-Turing thesis" says something very different from the actual "Church-Turing thesis" (and is also due to neither Church nor Turing).
The original Church-Turing thesis is not challenged at all by these experiments. Indeed, it has been shown that quantum Turing machines are not computationally more powerful than deterministic Turing machines - as in: any program that can be computed by a quantum Turing machine can in principle also be computed by a deterministic Turing machine (it just might take a little/a lot longer).
The extended Church-Turing thesis is concerned with the performance of a computation and says something about the "efficiency" of the computation.
The original thesis is concerned with what's possible, while the extended thesis is concerned with what's feasible.
In the preprint, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity.
Basically what IBM is claiming is that Google's circuit doesn't really do anything useful; it is just meant to be very complicated to simulate on traditional computing systems.
And even in this idealized position, Google's quantum computer doesn't transform an unsolvable problem into a solvable one.
According to IBM, "Quantum Supremacy" means accomplishing something useful to society that simply wouldn't be possible otherwise. Google's article doesn't show any sign of that, IBM claims.
Reading the IBM article, they are fully aware of what "quantum supremacy" means in a technical sense, and they are urging the media not to use that term, since it will be misunderstood by the public. Their claim that Google has failed to achieve supremacy rests solely on their claim that they can simulate the circuit far faster (and scale the simulation linearly) using better classical algorithms.
That's a strong claim, and I'm interested in seeing what Google responds with.
Disclosure: I work at Google, but hahaha, no, I'm not cool enough to work on this.
Another thing: this is actually Martinis' decades-long work. I know Google started raining money down on his lab a couple of years back, helping with the classical aspects, design, etc., and the media loves reporting it as Google's quantum computer, but the actual quantum computer, the nitty-gritty physics, isn't Google's work. Martinis already had a working setup with ~10 qubits when Google started supporting him ~5 years ago.
This IBM "rebuttal" sounds a bit like cheating on multiple aspects, and the timing of the announcement is interesting.
Note that they don't tell you how the memory requirements grow with the number of qubits either (which is also exponential). I expect the response will be new toy computation proposals which will also be prohibitively expensive in classical memory (not just classical CPU time) on current supercomputers. If the experimentalists can roll out more qubits faster, though (less likely), the "concern" will be addressed as well.
Here's a simple, extended version.
A quantum gate is, mathematically speaking, a matrix. For a given physical system with a fixed number of qubits, obtaining that matrix on a classical computer takes (on average) a fixed amount of time, let's say T seconds. A quantum "circuit" is a sequence of quantum gates, applied consecutively in time, and you simply multiply them all to get their overall effect.
So if your circuit is made of 10 gates, the total CPU time is 10T plus the time for 9 matrix multiplications. If it is 20 gates, then it is 20T plus the time for 19 matrix multiplications.
Since multiplying two matrices of the same dimension also takes a fixed amount of time, on average, the simulation time grows linearly with circuit depth.
Quantum supremacy is related to how T grows as you increase the number of qubits n: the matrix is 2^n by 2^n, so the growth is exponential.
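To make the gate-multiplication picture concrete, here's a minimal pure-Python sketch (my own toy illustration, not a real simulator): the cost of simulating a circuit grows linearly with the number of gates, while the matrices themselves are 2^n-dimensional, which is where the exponential cost hides.

```python
# Toy statevector simulation: each gate is a 2^n x 2^n matrix, and a depth-d
# circuit costs d matrix applications, i.e. the work grows linearly with
# depth but exponentially with the number of qubits.
s = 2 ** -0.5
H = [[s, s], [s, -s]]      # single-qubit Hadamard (n = 1, so 2x2)

def apply_gate(U, v):
    """One gate application: a dense matrix-vector product."""
    return [sum(U[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

state = [1.0, 0.0]         # |0>
circuit = [H, H]           # a depth-2 circuit
for gate in circuit:       # one product per gate: linear in depth
    state = apply_gate(gate, state)

print(state)               # H.H = I, so we are back at |0> (up to float error)
```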
Even the brute-force "simulation" of a quantum computer is just U_N···U_2·U_1, where the U's are unitary matrices. The hard part is obtaining those unitaries (whose dimensions grow exponentially with the number of qubits). For a fixed number of qubits, though, once you have N unitaries, you do N matrix multiplications. If you double N, it'll take twice as long on a classical computer and roughly twice as long on a quantum computer (different gates take different amounts of time to implement). But on an actual quantum computer there are tricks you can do (if the Hamiltonian allows) which may let you get away with fewer unitaries.
Circuit depth is still important: 1) for modelling the noise in the device and extracting gate fidelities - that's basically how randomized benchmarking works, and although they're doing something else for their fidelity estimates here, those estimates are still a function of circuit depth; 2) for doing anything meaningful with a given set of basic building-block gates.
> we ran random hard circuits with 53 qubits and increasing depth, until reaching the point where classical simulation became infeasible
Or am I misunderstanding something? (ELI5 plz, I'm not well-versed in quantum computing).
It looks like Martinis' group thought a "brute-force" simulation of 54 qubits was impossible, and that this approximate "clever trick" was the only way to go at that number of qubits, but IBM says that with some different tricks, 54 qubits is still doable (I'm just guessing what they were thinking; this is the only plausible explanation I can think of).
Overall, a discussion which has nothing to do with quantum supremacy really...
Whether it is a factor of a million or a thousand, though, the gap between a quantum computer and a classical one will increase exponentially as the number of qubits is increased. This is a fact, assuming quantum mechanics is correct.
Actually, physicists have been dealing with this painful fact for quite a long time: it is also the reason why many-body physics is so hard computationally, and why we spent almost a century developing approximate methods (density functional theory, quantum Monte Carlo, etc.) to calculate even the simplest idealized situations with hundreds or thousands of atoms.
The whole idea of quantum computation is to turn this difficulty upside down and use it to our advantage.
I agree, but then there is no need to prove quantum supremacy after all. This entire business is about whether quantum mechanics is correct or not.
There is still a need, but it is for an entirely different reason: not everyone (people with money and funding agencies, in particular) is a physicist.
Quantum computing is analogous: tests of quantum mechanics in the "strong computational regime" are scant. You seem knowledgeable, but your comments on the current claim of quantum supremacy are akin to, say, when the claimed discovery of gravitational waves was disputed, replying "gravitational waves will be discovered, this is a fact assuming general relativity is correct, general relativity is 100 years old, no physicist doubts the theory" etc. All true, but rather pointless.
General relativity has never been tested to anywhere near that level of precision, nor that many times.
And in fact, we still have strong reasons to doubt general relativity because there may or may not be deviations from it observed in galaxies and large scale universe.
General relativity may be correct at that scale (with the ad-hoc addition of a cosmological constant), but to be consistent with those observations one requires the existence of black holes, dark energy, and dark matter - things we have never directly observed and don't know for sure exist (although they are our best explanation at this moment).
We don't really understand how gravity behaves at very small scales, at extremely large scales, or in the presence of very high energy densities. One thing we know for sure is that general relativity is not the ultimate theory of gravity; it fails spectacularly at very small scales.
We would like to stress-test all aspects of general relativity to 1 in a billion precision as well, but we can't.
This is basically because gravity is very weak, so you can't design all sorts of controlled experiments to test it. The best you can do is make observations in the vicinity of conveniently massive things like the Earth, the Sun, or a black hole, which you have no control over. You can't make two black holes, pit them against each other, and see what happens in the lab. A situation very different from quantum theory.
Physicists did expect to observe gravitational waves, and it wasn't a shock to anyone. The thing that makes it a very big deal for physicists is that we now have a whole new way of probing things we couldn't probe before, in particular things we don't yet understand, including the violations of general relativity that we do expect to see.
We don't expect to see deviations in quantum theory (unless you bring a black hole nearby your quantum computer).
In the Hipparchian and Ptolemaic systems of astronomy, the epicycle (from Ancient Greek: ἐπίκυκλος, literally upon the circle, meaning circle moving on another circle) was a geometric model used to explain the variations in speed and direction of the apparent motion of the Moon, Sun, and planets. In particular it explained the apparent retrograde motion of the five planets known at the time. Secondarily, it also explained changes in the apparent distances of the planets from the Earth.
Epicycles worked very well and were highly accurate, because, as Fourier analysis later showed, any smooth curve can be approximated to arbitrary accuracy with a sufficient number of epicycles. However, they fell out of favour with the discovery that planetary motions were largely elliptical from a heliocentric frame of reference, which led to the discovery that gravity obeying a simple inverse square law could better explain all planetary motions.
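The Fourier claim is easy to check numerically. A toy sketch (my own, with an assumed sawtooth test curve): each extra sine term plays the role of one more epicycle, and the approximation error at a fixed point shrinks as terms are added.

```python
import math

def sawtooth_partial_sum(x, n_terms):
    """Fourier partial sum for f(x) = x on (-pi, pi):
    S_N(x) = 2 * sum_{k=1..N} (-1)^(k+1) * sin(k*x) / k.
    Each sine term is one 'epicycle' riding on the previous ones."""
    return 2 * sum((-1) ** (k + 1) * math.sin(k * x) / k
                   for k in range(1, n_terms + 1))

x = 1.0  # evaluate away from the discontinuity at +/- pi
err_5  = abs(sawtooth_partial_sum(x, 5) - x)
err_50 = abs(sawtooth_partial_sum(x, 50) - x)
print(err_5, err_50)  # more epicycles -> smaller error
```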
So the epicycles worked very well to explain and predict observations, but for reasons irrelevant to what really caused the motion of the planets. It's still possible that quantum mechanics will fall in the same way.
Note I don't have a horse in this race. I have no opinion on whether quantum mechanics is right or wrong.
Sorry, I think you're doing a sleight of hand here. Quantum computing relies not just on QM; it relies on the Copenhagen interpretation of it - superposition being a physical reality, not just a statistical description. That interpretation is tested by the Bell experiments, and granted, there have been a bunch of them which do look like they confirm Copenhagen.
>Even classical computers rely on it.
All of that confirms QM, not the Copenhagen interpretation.
Wrt. Google's supremacy demonstration - it would work the same under a statistical-ensemble interpretation too, thus not actually demonstrating anything about quantum computing.
If they don't give the same results, then they aren't just interpretations: you'd have two competing theories, and one of them will be wrong, since it can be ruled out experimentally.
This is also why the majority of physicists don't care much about such philosophical aspects. You can argue that they should, and there are a few people working on the foundations of quantum mechanics, but most physicists (including me) see it as semantics and choose to spend their time on practical physics. At least that's what my field (condensed matter physics) is about, which also encompasses the realization of these quantum computers. You can't change the conductivity of a material, or the measured charge state of a transmon qubit, by using a different interpretation.
Sorry, no. Bell experiments do produce different measurement predictions under different interpretations, thus ruling some of them out. As it stands now, they seem to confirm Copenhagen for pretty much everyone.
"Ensemble interpretations of quantum theory contend that the wave function describes an ensemble of identically prepared systems. They are thus in contrast to “orthodox” or “Copenhagen” interpretations, in which the wave function provides as complete a description as is possible of an individual system."
Bell experiments seem, to almost everyone, to rule it out. I myself, unfortunately (as much as I really want to jump wholeheartedly on the magical bandwagon of superposition and quantum computing), see gigantic holes in those experiments, holes which let all those other, mutually compatible interpretations (local realism, pilot wave, and ensemble) back in and actually pretty much rule Copenhagen out.
We are getting more and more proof points and eliminating a lot of the doubt but this has not been shown yet.
What you're describing is mainly the new generation of people who grew up hearing news stories about experimental realizations of small-scale (a few qubits) quantum computers.
I am not positive we are going to get quantum computers with error correction on boolean qubits that can do all the meaningful tasks we hope quantum computers can do. I think it's more likely than not, but it is not close to happening and may never happen. I am not even 100% certain (but it is very very likely) that it is physically realizable.
In my view, this current milestone is kind of contrived.
And even if we do, it's not clear what subset of tasks currently performed on classical computers will be superseded by quantum computation. That's perhaps one of the biggest problems: normal computing has had a whole lot of use cases to pay the research and capital costs.
I am involved on the theory side of the implementation of different kinds of solid state qubits, so you may say I'm biased, but the question really isn't whether we will ever get it or not; the question is when. We have already had exponential growth in single-qubit coherence times over the past decade, we have very good entangling gates, and there isn't any fundamental reason why the number of qubits can't be increased. It's not like there is an invisible great barrier ahead of us, and nothing in the physics of these devices says we can't.
By the way, they aren't using quantum error correction methods right now, basically because it's not worth it: you need a lot of physical qubits to encode a high-quality logical qubit.
The fact that we have exponential improvements to parts of the process is nice. But exponential improvement doesn't continue forever. Extrapolating from the current status 5 orders of magnitude out seems reckless.
- Qubits are not boolean. Classical error correction does not work in quantum computers. You need quantum error correction.
- The exponential improvements are in single qubit coherence times, and a stagnation in those improvements doesn't prevent an increase in the number of qubits. (it'd be nice to have, though, because then eventually we wouldn't even need error correction)
- No, it doesn't take 1000 or even 100 physical qubits to have a high-fidelity qubit (it can actually be 1:1 with dynamical error correction), or 20,000 error-corrected qubits to tackle real-world problems (Shor's algorithm isn't the only interesting thing out there; simulating some quantum systems requires far fewer qubits and is probably more interesting for people in the natural sciences)
- Even if we were 3-5 orders of magnitude away in # of qubits, it's still a matter of time, and there is still nothing fundamental in physics or material science that prevents having millions of physical qubits.
- Now I feel like I'm dealing with one of those non-physicists again who claim quantum computers are a pipe dream, only this time, instead of a faulty physical argument, there is no real technical basis at all, so this is where I'd like to stop. Enjoy the rest of your day
Figure 4b is about error estimation. They use XEB, which is indeed exponentially faster than, say, doing full quantum process tomography. That's the whole reasoning behind XEB: it gives far less information about the error channels, but you still get a fair estimate of the overall fidelity.
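For the curious, the linear XEB fidelity used in the paper is F = 2^n * <P(x_i)> - 1, where P is the ideal probability of each measured bitstring. Here is a toy sketch (my own, with a made-up 2-qubit distribution), computing the expectation exactly instead of sampling:

```python
# Linear cross-entropy benchmarking (XEB) in miniature: F = 2^n * <P(x)> - 1,
# where P(x) is the ideal probability of the bitstring x the device emitted.
# We compare two extreme "devices" by computing the expectation exactly.
n = 2
dim = 2 ** n

# Hypothetical ideal output distribution of some 2-qubit random circuit.
p_ideal = [0.55, 0.25, 0.15, 0.05]

def xeb(sampler_probs, ideal_probs, n_qubits):
    """F = 2^n * E_{x ~ sampler}[ideal_probs[x]] - 1."""
    mean_p = sum(q * p for q, p in zip(sampler_probs, ideal_probs))
    return (2 ** n_qubits) * mean_p - 1

# A perfect device samples from the ideal distribution itself:
f_perfect = xeb(p_ideal, p_ideal, n)
# A completely depolarized device samples uniformly at random:
f_noise = xeb([1 / dim] * dim, p_ideal, n)

print(f_perfect, f_noise)   # perfect device: F > 0; pure noise: F = 0
```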
None of these have anything to do with the complexity of the actual computation done on the quantum computer though.
Google chose an algorithm exponential in circuit depth as the best classical algorithm in order to establish quantum supremacy. IBM demonstrated (as you agree) it is in fact not the best classical algorithm. IBM is entirely correct to point this out.
It is a trivial fact that on a classical computer you can, in principle, simulate a quantum computer in time that grows linearly with circuit depth (as in the "naive" way I mentioned above). No one doing quantum algorithms ever claimed it scales exponentially in depth; claiming otherwise is just a silly mistake.
Preskill coined the word "quantum supremacy", not Martinis. Even if someone from Martinis' lab misspoke, you can't pin it on Preskill.
Take Shor's algorithm, which has been the poster boy of quantum computing for decades. It gives exponential speed up in the number of qubits, not circuit depth.
See other examples here: https://quantumalgorithmzoo.org/ The complexity is in the number of qubits; "circuit depth" is a non-concept in quantum algorithms.
No one cares about the circuit depth in quantum algorithms because, in principle, you can always reduce the circuit depth to 1 on a quantum computer: quantum computers aren't necessarily made of basic logic gates, and you can implement any unitary by, for example, smoothly pulsing the fields in the correct way in a single go.
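A minimal illustration of that depth reduction (my own sketch): consecutive gates can always be fused into one by multiplying their matrices; for example, two phase gates S compose into a single Pauli-Z gate.

```python
def mat_mul(A, B):
    """Multiply two 2x2 complex matrices: (A @ B)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[1, 0], [0, 1j]]      # phase gate
Z = [[1, 0], [0, -1]]      # Pauli-Z

# Two consecutive S gates (depth 2) fuse into a single Z gate (depth 1):
fused = mat_mul(S, S)
print(fused)               # equals the Z gate, since S*S = Z
```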
"Depth" is a concept which usually comes into play when you try to measure the quality of the implementation of some given gate U. You repeat U many many times say M, and see how some measured quantity decays as a function of M, which gives a measure of the fidelity. But this has nothing to do with the quantum algorithm's complexity, it's just for "benchmarking" a physical implementation of the gate.
Quantum supremacy is an asymptotic claim. Then what is n? The answer is that there are two parameters, not one. In Google's paper, n is the number of qubits and m is the circuit depth. n is well known to be difficult to increase. m is not easy either, because if your qubits aren't stable enough, you can't run a deep circuit. What Google did is run n=53 and m=20.
Then, why do you need n=53 and m=20? After all, you could see whether it's exponential by, say, trying n and m from 10 to 15, it doesn't need to take days and years. The answer is that there is time-space tradeoff available and if your n is constant, exp(n) space (but still constant space, since n is constant) poly(m) time algorithm is available, and if your m is constant, exp(m) space (but still constant space, since m is constant) poly(n) time algorithm is available. So if you want to show exponential speedup, you need to be able to exclude these algorithms by increasing n or m enough such that exp(n) or exp(m) space is not realizable. Google chose n=53 such that exp(n) is not realizable, and ran scaling experiment on m.
This is what they mean when they say in the paper "Up to 43 qubits, we use a Schrödinger algorithm, which simulates the evolution of the full quantum state... Above this size, there is not enough random access memory (RAM) to store the quantum state". Now what IBM is saying becomes clear. There is not enough RAM, but there is enough disk, so exp(n) where n=53 is realizable, and the simulation runs linearly in m. It's not a new algorithm; it's exactly the same algorithm Google ran up to n=43. So 43 qubits clearly can't demonstrate quantum supremacy. For the same reason, 53 qubits can't either.
Summit has 250 PiB of total storage currently, so it would seem that a 55-qubit simulation is almost within reach by IBM's method as well.
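The back-of-envelope arithmetic behind these storage figures, assuming 8 bytes per amplitude (single-precision complex; real codes may use 16):

```python
# Full-statevector storage: an n-qubit state has 2^n complex amplitudes.
BYTES_PER_AMPLITUDE = 8             # assumption: single-precision complex
PIB = 2 ** 50                       # one pebibyte

def state_vector_bytes(n_qubits):
    """Bytes needed to hold all 2^n amplitudes of an n-qubit state."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (43, 53, 54, 55):
    print(n, "qubits:", state_vector_bytes(n) / PIB, "PiB")
# 43 qubits fit in RAM (64 TiB); 53 need 64 PiB of disk;
# 55 would need 256 PiB, just past Summit's ~250 PiB of storage.
```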
So why is this in the spirit of the test? Because it means that there are some problems that can only be efficiently solved by quantum computers. So this establishes "supremacy" in the sense that while a quantum computer can efficiently solve any classical computing problem, a classical computer cannot efficiently solve every quantum computing problem.
The distance between having this proof-of-concept and having meaningful speedups on real problems that cannot be matched by classical computers is very large; all this tells you is that it's possible, and looking more might not be a waste.
I often wonder why people feel compelled to write this. There is nothing in your profile or in your post history to prove unequivocally that you do or don't work anywhere in particular, so why even mention it?
Edit: responders are citing ethics and company policy, but does anyone really think vague hand-wavey speculation on a public message board is relevant to anything?
Sure if you work in the ads group and you posted "wow, our division is in trouble, competitor Z is really killing it, and our quarter earnings are going to miss big time!" Yeah that matters, but GP? Really?
In addition, there are plenty of ways to figure out where an HN member is employed besides looking at their post history. It’s not necessarily found in publicly available data, but it can be found.
Also, you have to account for future acts. Disclosing early can help insure against accusations about misconduct if you were to out yourself — or someone were to out you — later.
(I did forget to mention that I don't speak for Google, but it should be pretty obvious from context)
1. Regarding policy, I think a simple disclosure like "this is my personal opinion, not that of my employer" would be good enough.
2. Indicate possible bias: nope, one should always understand that any opinion is biased for many different reasons. So, unless this is a rigorous analysis of something, it is quite uncalled for.
I thought that the whole point was to show that what you'd done definitely wasn't just building a complex classical processor. The usefulness of the calculation is beside the point.
Quantum Supremacy is more about where quantum computers are clearly superior to non-quantum computers to solve a problem.
I posted an article explaining more:
All in all, it seems more like a battle of semantics, with the two companies talking past each other. It's kind of like how everyone seems to interpret Moore's Law to mean "any progress in computational performance whatsoever" these days, as opposed to the original definition of "transistors doubling in the same space every 18-24 months," and then using that to "prove" that "Moore's Law is still alive" (it's not, and hasn't been for a long time).
I think in the end what's important here is that it was proven that a quantum computer can do "something" (yes, even anything at all is a big milestone) significantly faster than a classical system.
Because this is what matters in the end: that quantum computers can do some tasks significantly faster than classical systems. They don't have to do only tasks that are "impossible" for classical systems to be useful. It's also why we use GPUs, FPGAs, and ASICs for some tasks, instead of just using CPUs for everything.
I do agree that it was in Google's interest not to work so hard on optimizing the classical system for the simulation, to make its quantum computer look that much more impressive. But if the quantum computer is still faster, then it doesn't really matter.
Or is there some catch that undermines that headline claim?
Do you think it wouldn't be possible to simply upgrade the supercomputer to shave some time off those 2.5 days?
To me it seems clear that everyone gullible already bought into the Cloud, ML/AI, CI/CD buzzwords and now they need a new buzzword to market.
> doing something on a quantum computer that would be infeasible on a standard computer
Sorry, but I fail to see how these two are different.
> it's a state when a quantum computer can achieve results that would require more operations on a binary computer than a binary computer has proven to be able to do
but the point will still stand.
We can't say we achieved quantum supremacy for this one thing because binary computers still have supremacy over everything else.
I guess we can agree here that quantum supremacy was definitely not achieved since we are not clear on the definition of said quantum supremacy.
> and with far greater fidelity
comes into play.
Some overhead is expected when using different hardware to simulate it.
What matters is asymptotic computational and memory complexity.
The QC isn't cheap or available enough to be relevant if you can compute the result reasonably well on classical hardware in a few days.
A source of this confusion is that we need to discuss space and time complexity simultaneously. In the algorithms for quantum simulation (and many other algorithms), there is a trade off between space complexity and time complexity. ELI5: You don't have to store intermediate results if you can recompute them when you need them, but you may end up recomputing them a huge number of times.
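The ELI5 in miniature (a deliberately silly toy example of my own, nothing to do with quantum circuits): computing a recursively defined quantity either by recomputing intermediates (little memory, exponentially many evaluations) or by storing them (more memory, linearly many evaluations).

```python
from functools import lru_cache

CALLS = {"recompute": 0, "store": 0}

# Time-heavy strategy: recompute every intermediate result from scratch
# (tiny memory footprint, exponentially many calls).
def fib_recompute(n):
    CALLS["recompute"] += 1
    return n if n < 2 else fib_recompute(n - 1) + fib_recompute(n - 2)

# Memory-heavy strategy: store every intermediate result and reuse it
# (linear memory, only one evaluation per distinct input).
@lru_cache(maxsize=None)
def fib_store(n):
    CALLS["store"] += 1
    return n if n < 2 else fib_store(n - 1) + fib_store(n - 2)

assert fib_recompute(20) == fib_store(20) == 6765
print(CALLS)   # recompute: 21891 calls; store: only 21
```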
For the quantum circuit, the standard method of computation gives exponential memory complexity in number of qubits (store 2^N amplitudes for a N qubit wavefunction) and time complexity D2^N, i.e. linear in circuit depth and exponential in number of qubits. For example, under the IBM calculation, 53 qubits at depth 30 use 64 PB of storage and a few days of calculation time, while 54 qubits use 128 PB and a week in calculation time. Adding a qubit doubles the storage requirements AND the time requirements.
Under Google's estimation of the run time, they were using a space-time tradeoff. There is a continuous range of space-time tradeoffs: MAX MEMORY, as IBM does; MAX RECOMPUTATION (store almost no intermediate results, just add each contribution to the final answer and recompute everything); and a range of in-between strategies. While I don't know the precise complexities, the time-heavy strategies will have time complexity exponential in both N and D (2^(ND)) and constant space complexity. That's why Google's estimate of the time complexity is so drastically different from IBM's.
Side note: IBM also uses a non-standard evaluation order for the quantum circuit, which exploits a trade-off between depth and number of qubits. In the regime of a large number of qubits but relatively small depth, you can again classically simulate using an algorithm that scales as N*2^D rather than D*2^N, using a method that turns the quantum circuit on its side and contracts the tensors that way. In the regime of N comparable to D, the optimal tensor contraction corresponds to running the circuit neither the usual way nor sideways, but something in between. None of these tricks fundamentally changes the ultimate exponential scaling, however.
As an extra step, you could also run compression techniques on the tensors (i.e. SVD, throwing away small singular values), to turn the space-time tradeoff into a three-way space-time-accuracy tradeoff. You wouldn't expect too much gain by compression, and your accuracy would quickly go to zero if you tried to do more and more qubits or longer-depth circuits with constant space and time requirements. However, the _real_ quantum computer (as Google has it now, with no error correction) also has accuracy that goes to zero with larger depth and number of qubits. Thus, one can imagine that the next steps in this battle are as follows: if we say that Google's computer has not at this moment beaten classical computers with 128 PB of memory to work with, then Google will respond with a bigger and more accurate machine that will again claim to beat all classical computers. Then IBM will add in compression for the accuracy tradeoff and perhaps will again beat the quantum machine.
So this back and forth can continue for a while - but the classical computers are ultimately fighting a losing battle, and the quantum machine will triumph in the end, as exponentials are a bitch.
Each run of a random quantum circuit on a quantum computer produces a bitstring, for example 0000101. Owing to quantum interference, some bitstrings are much more likely to occur than others when we repeat the experiment many times. However, finding the most likely bitstrings for a random quantum circuit on a classical computer becomes exponentially more difficult as the number of qubits (width) and number of gate cycles (depth) grow.
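A tiny sketch of that flavor (my own 2-qubit toy, not Google's circuit): a Hadamard followed by a CNOT concentrates all the probability on the bitstrings 00 and 11, so some outputs come up far more often than others when you sample.

```python
s = 2 ** -0.5
# 2-qubit gates as 4x4 matrices on the basis |00>, |01>, |10>, |11>.
H0 = [[s, 0, s, 0], [0, s, 0, s], [s, 0, -s, 0], [0, s, 0, -s]]  # H on qubit 0
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]  # qubit 0 controls

def apply(U, v):
    return [sum(U[i][j] * v[j] for j in range(4)) for i in range(4)]

state = apply(CNOT, apply(H0, [1, 0, 0, 0]))  # start in |00>
probs = {b: abs(a) ** 2 for b, a in zip(["00", "01", "10", "11"], state)}
print(probs)   # "00" and "11" each ~0.5; "01" and "10" never occur
```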
It sounds like they're saying "simulating this physical process accurately is extremely computationally expensive, but just running the physical process in the real world is MUCH faster". But that's true of basically any physical process!
Simulating turning on a single lamp in a room and figuring out how much light falls on what walls is an enormously computationally expensive process, which can take immense amount of time to figure out (depending on your desired level of accuracy). Movie studios, CG artists and game companies build entire render farms to solve this problem. But the physical process itself (just turning on the light) happens instantly. Just replace "turning on the light" with "running a quantum circuit".
The trouble is we don't know many (any?) interesting algorithms to run on such processors. We can factor numbers using Shor's algorithm, but the numbers we can factor with 54 qubits are so small that our best non-quantum algorithms on our best non-quantum computers far outperform it. No supremacy is demonstrated. It's only when one takes problems ideally suited to this processor that it outperforms classical computers.
Even with the deck stacked this way, it's taken until 2019 for it to be done for the first time! It's a very, very early milestone.
Quantum "supremacy" only feels like a real thing when a quantum computer can solve a problem that isn't directly tied to the simulation of itself. That's my feeling, anyway.
BTW: this is not to disparage the work itself. This seems like great research, and great progress in this field. Good on Google and the researchers!
I just want to emphasise that there were people who believed that even this small step was impossible. Prominent quantum-computing skeptic Gil Kalai believes that scalable quantum computing is impossible in principle, and believes that this paper didn't achieve what it claims, but he says that his view would be refuted if they did! (This is in contrast to IBM, who isn't criticizing the quantum side of the experiment, but the comparison to classical computing; and to the Internet commentariat who are saying that even if they did everything they claim, it's not interesting).
While that's true, there are a great many physics researchers who are extremely interested in quantum effects. A computer that quickly simulates quantum circuitry, if only for study / practice, is still extremely useful.
In effect: a lamp isn't very important because most engineers and researchers already know how a lamp works. But quantum circuits are themselves an interesting, barely-studied field, with few experts truly understanding them yet.
Has something like this been actually demonstrated beyond the theoretical possibility?
EDIT: it looks like it only has a single, weird, type of gate. But I guess the connections between gates are configurable? Still different to a lamp!
GANs for generating random organisms anyone? Human 2.0 can probably come about after some of the feature vectors are figured out via PCA on the activations. Just boost feature vector for intelligence.
Anyways, yeah! Real matter is already programmable. As programmers it's easy to forget that physics says this whole thing is all a monism of information and computation. Code and data. Yin and yang. Even the prime numbers come about as data because of self-modifying code: P(0)=2, but all future primes are defined by not being divisible by P(x<n) - think of a snake eating its own tail and yet growing while doing so.
In fact, the most beautiful part about the prime numbers is that they are defined sequentially, yet alternative constructions exist which are `atomic`. It is odd that something defined by such a sequential process would form a beautiful structure that can be reduced to a direct construction. Relatedly, we see this all the time with recursive functions being turned into procedural ones, or functional-style data pipeline transformations.
The ordering is actually irrelevant to the chaos that the primes display between multiplication and addition. That is, if you defined the first prime as 3, you would get an entirely different set of primes. For example, 4 would be prime. Beurling figured this out and a lot more through what he called Generalized Primes.
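The sequential construction described above, and the seed-at-3 variant, can be sketched in a few lines. (Toy illustration only: the function name is mine, and this is the naive "not divisible by any earlier prime" rule from the comments, not Beurling's actual semigroup construction.)

```python
def sequential_primes(seed, limit):
    """Build 'primes' sequentially: start from `seed`, and admit each later
    integer only if no previously admitted number divides it.
    With seed=2 this recovers the ordinary primes."""
    ps = []
    for n in range(seed, limit + 1):
        if all(n % p != 0 for p in ps):
            ps.append(n)
    return ps

print(sequential_primes(2, 20))  # [2, 3, 5, 7, 11, 13, 17, 19]
print(sequential_primes(3, 20))  # [3, 4, 5, 7, 11, 13, 17, 19] -- 4 is now "prime"
```

Seeding at 3 really does make 4 "prime" (nothing admitted so far divides it), after which the sequence diverges from the usual primes in exactly the way described.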
Reminds me of the last line from Watson and Crick's 1953 DNA-structure paper: "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material" ;)
I bring up genomics specifically because the main profit driver appears to be biotech. Predicting near-instantaneously whether regulatory proteins bind to a target genome site could revolutionize drug discovery.
And while full "programmatic" control of transcription factors may be decades away, in the near term annealing-based ML can speed up the analysis of genome-wide datasets:
Unconventional machine learning of genome-wide human cancer data
It’s hard to parse what the actual thing being computed is.
"Hey, we have a new fluid flow simulation processor that reaches CFD supremacy over classical computers... we pour water across the surface of things and the results are indistinguishable to reality compared to the days, weeks, months it would take to simulate and still wouldn't reach the same resolution of accuracy on a classical computer."
I’m wondering if there is some other way the problem could be computed without having to simulate qubits.
By taking an ensemble of real-life measurements, you can infer what the most probable eigenstate is, which is the "output" of the program.
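A toy sketch of that inference step: repeatedly "measure", tally the bitstring frequencies, and take the mode as the program's output. (The distribution below is made up; a real device would give you samples, not an explicit probability table.)

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical output distribution of a small noisy circuit over 3-bit
# strings: one bitstring is strongly favored (weight 0.6), the other
# four outcomes share the rest.
outcomes = ['101'] * 60 + ['000', '011', '110', '111'] * 10

# Take an ensemble of "measurements" (here: samples from the toy
# distribution) and infer the most probable outcome by frequency.
shots = [random.choice(outcomes) for _ in range(1000)]
most_probable, count = Counter(shots).most_common(1)[0]

print(most_probable)  # '101'
```

The single most likely bitstring dominates the tally after enough shots, which is the sense in which an ensemble of measurements recovers the "output".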
Compare this to, for example, D-Wave. As far as I understand their quantum computer they could show it was faster but it was "X times faster" rather than scaling in a completely different way.
Can I ask an honest question? Shouldn't people be scared shitless of quantum computing?
If a single organization develops the ability to factor the products of large primes at unheard-of speeds, doesn't this put in jeopardy the mathematical backbone of nearly all public-key encryption, and therefore don't they pose a threat to all financial institutions? Don't they then have the power to hold the world to ransom?
I don't mean this as a conspiracy theory -- I don't think an actual "event" is even necessary to pose such a threat. If it's proven that theoretically sound classical encryption can be theoretically cracked in record time by quantum computers, I am afraid of what will happen to the world financial markets due to panic. Nevermind the first time it is actually used to steal billions of dollars, or some other large-scale event.
I'm no expert on quantum computing, so please enlighten me if indeed Shor's algorithm does not pose a threat to privacy or to the financial markets. It would help me sleep at night. For now, I do think the idea is not totally unfounded, and therefore quantum computing, for all its positives (e.g. fast solutions to hard optimisation problems, which would have massive applicability), scares the living daylights out of me.
1. It's extremely unlikely that this will happen overnight, and practical factoring is still pretty far away compared to today's quantum computing state.
2. There's work done on developing post-quantum-cryptography (which is in essence all cryptography that we believe cannot be broken by a QC). NIST, the US standardization organization, is currently running a competition for future standards. It has some challenges, but it's doable.
That still leaves the issue that past-time communication may be decrypted in the future. But other than that - we will almost certainly have alternatives when QC factoring becomes real.
> That still leaves the issue that past-time communication may be decrypted in the future.
Honestly I didn't even think of this. Pretty freaky nonetheless.
There are arguments against using RSA, but I don't see how Quantum Computing is one of those. The common elliptic curve alternatives are even more vulnerable to QC than RSA, while the new QC-resistant algorithms aren't accepted yet.
So if QC is your concern, you have algorithms worse than RSA in one corner, and not-entirely-tested algorithms in the other.
Patents are the most reliable way to make sure your crypto isn't used widely for 20 years...
I'm very perplexed that the main article states that "P(x_i) is the probability of bitstring x_i computed for the ideal quantum circuit", but Supplementary Information section IV.C seems to argue that if the "qubits are in the maximally mixed state" (i.e. the quantum computer doesn't work) then "the estimator [F_XEB] yields zero fidelity" since P(x_i) = 1/2^n for every i in this case. To me this doesn't make sense, since P(x_i) = 1/2^n is clearly the probability mass function of the empirical non-ideal quantum circuit output distribution (and not the ideal one).
I've posted a question on the QC stack exchange: https://quantumcomputing.stackexchange.com/questions/8427/qu...
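As I understand the estimator (my reading, so treat this as a sketch, with a made-up distribution and made-up sample sets): the x_i are sampled from the device, but P is always the ideal circuit's distribution, computed classically. A uniform-sampling (broken) device then averages P over all bitstrings, which is exactly 1/2^n, driving the fidelity to zero.

```python
def linear_xeb(n_qubits, ideal_probs, samples):
    """Linear cross-entropy benchmark: F_XEB = 2^n * <P(x_i)> - 1, where
    the bitstrings x_i are sampled from the DEVICE, but P is the output
    distribution of the IDEAL circuit, computed classically."""
    mean_p = sum(ideal_probs[x] for x in samples) / len(samples)
    return (2 ** n_qubits) * mean_p - 1

# Made-up ideal distribution over the four 2-qubit bitstrings (00..11):
ideal = [0.55, 0.25, 0.15, 0.05]

# A device stuck in the maximally mixed state samples uniformly, so the
# ideal probabilities average to exactly 1/2^n and the fidelity vanishes:
print(linear_xeb(2, ideal, [0, 1, 2, 3]))  # ~0.0

# A working device preferentially emits the likelier bitstrings:
print(linear_xeb(2, ideal, [0, 0, 0, 1]))  # ~0.9
```

On this reading there's no contradiction with the supplementary material: P(x_i) stays the ideal distribution throughout; it is only the samples that come from the (possibly broken) device.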
"In summary, unlike the objections of the IBM group, so far I’ve found Gil’s objections to be utterly devoid of scientific interest or merit."
IBM's response: "We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced." From: https://www.ibm.com/blogs/research/2019/10/on-quantum-suprem...
Currently at a much greater cost in time and space, which is why they are saying "can be performed" not "has been performed".
> we expect that with additional refinements the classical cost of the simulation can be further reduced
Any such refinements would be certainly interesting to see, as they would probably be very hard won.
"Efficiently" is a term of trade for computer scientists and means that there's a polynomial upper bound.
On the other hand IBM also claims (from the graphs? not sure if they have anything more to back up the claim) that their scheme "scales approximately linearly with the circuit depth" which would likely make the experiment meaningless.
The entire post is a bit odd though since it starts out with a serious discussion of computer science topics and then swerves into criticism of how the media misuses terms.
From Scott Aaronson's blog post about this paper: https://www.scottaaronson.com/blog/?p=4372 -- he shares some thoughts on whether this benchmark is itself easy or hard to simulate on a classical computer.
> Ah, but at the end of the day, we only believe that Google’s Sycamore chip is solving a classically hard problem because of the statistical test that Google applies to its outputs: the so-called “Linear Cross-Entropy Benchmark,” which I described in Q3 of my FAQ. And even if we grant that calculating the output probabilities for a random quantum circuit is almost certainly classically hard, and sampling the output distribution of a random quantum circuit is almost certainly classically hard—still, couldn’t spoofing Google’s benchmark be classically easy?
He goes on to supply an argument that it's a hard test to spoof classically.
I'm wondering though -- has anyone addressed how hard it would be to spoof this benchmark using a collection of smaller (potentially non-scalable) quantum computers? E.g., if it were possible to spoof the benchmark with a smaller quantum computer, would that undermine the paper's argument that this benchmark is evidence of quantum-computing scalability?
I wrote about half of the wikipedia article that Google is using to explain how their quantum computer works (https://en.wikipedia.org/wiki/Quantum_logic_gate)
Should probably have done that under a pseudonym, and not just anonymously from my IP address.
But at least the IP address I used for last weekend's work is my home's static IP, which almost never changes... But still, I need an account at Wikipedia!
If I treat my quantum computer as just a channel for sending a message to myself I think I can invoke the channel capacity theorem. Furthermore, I think that if I try to read an entangled state, it's as if I am reading the entire state all at once, so the maximum number of bits I should be able to read is bounded by the logarithm of the ratio of the signal to the noise, where noise scales with temperature. That number doesn't scale well. It suggests that if I want to read an additional bit from a quantum computation, I need to drop the temperature by half.
Furthermore, I introduce heat to my quantum computer when I input the state. I must construct the input to my quantum computer from a uniform mixture by the no-cloning principle. This is going to generate heat, and the rate at which I can dissipate that heat is bounded by the speed of light. This gives me an upper bound on my temperature of O(1/t^3) assuming extremely optimal heat dissipation. This means that the time required to cool my state to the point where I can read n entangled bits is exponential in n, right?
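Written out, the scaling claim in the two comments above goes something like this (under their stated assumptions: readout modeled as a classical channel, noise power proportional to temperature):

```latex
% Shannon capacity bounds the bits readable per channel use:
n \le \log_2\!\left(1 + \tfrac{S}{N}\right)
% With thermal noise N \propto k_B T and fixed signal power S,
% reading one more bit requires
n + 1 \le \log_2\!\left(1 + \tfrac{S}{N'}\right)
\;\Rightarrow\; N' \approx \tfrac{N}{2}
\;\Rightarrow\; T' \approx \tfrac{T}{2},
% i.e. the temperature needed to read n entangled bits falls off like
T_n \propto 2^{-n}.
```

Whether a quantum readout is legitimately modeled as a single classical channel use is exactly what the replies below dispute.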
You can't read the entire state. A quantum channel where you send n-bits lets you read n-bits, not exponential in n-bits.
There is indeed reduction of signal as you increase temperature. Under the error correction theorem, you can amplify that signal back to full strength as long as you are under a threshold temperature (but with ever-increasing resources as you approach the threshold temperature.)
Also this: "This is going to generate heat, and the rate at which I can dissipate that heat is bounded by the speed of light". Sure, the heat has to be transferred out of the system, which takes time that is bounded by some distance divided by the speed of light. But the qubit is on a 2D chip surrounded by a 3D refrigerator. The distance doesn't necessarily increase with increasing qubits.
An entangled state is different. Since I can read all of the bits outside of each other's light cones, they cannot causally influence each other, which is why I said "as if" instead. There is no such thing as "all at once", but I can properly read each qubit before all of the others, as far as that qubit is concerned, and it would still obey the correlation. The reading of an entangled quantum state is indeed a single measurement.
>Under the error correction theorem, you can amplify that signal back to full strength as long as you are under a threshold temperature (but with ever-increasing resources as you approach the threshold temperature.)
2) I don't think that quantum error correction ultimately solves the problem of heat. It tries to solve the problem of single bit/sign flips from not doing your computation in a closed system. My argument assumes that the only things that exist in the entire universe are the computer and the heat.
> "This is going to generate heat, and the rate at which I can dissipate that heat is bounded by the speed of light". Sure, the heat has to be transferred out of the system, which takes time that is bounded by some distance divided by the speed of light. But the qubit is on a 2D chip surrounded by a 3D refrigerator. The distance doesn't necessarily increase with increasing qubits.
3) you're misinterpreting my statement of the problem. I don't have to get each additional qubit just as cold as the original. Every qubit I add requires me to get the entire quantum state to half the temperature that I had previously. The geometry of your refrigerator here only matters in that it's finite dimensional.
To be honest, after the hype I was expecting there would be more applications.
In terms of where I am personally excited about quantum computing, it has the potential to greatly speed up simulations of chemical and material science properties that are currently hard to simulate accurately at all. This will be great for, for example, semiconductor development and medicine. These applications are a ways off though, but if we weren’t hitting milestones like today, it would suggest they were not worth vigorously pursuing.
Disclaimer: I work at Google, not on quantum computers. Not an expert in quantum computation.
In the coming days, I am sure many will perform independent analyses of these datasets.
See e.g. https://www.scottaaronson.com/blog/?p=4372
As for where we are with Shor’s algorithm, it works by figuring out the prime factors of an integer (which is how RSA keys are constructed). In 2001, IBM factored the number 15 (into 3 x 5) using a quantum computer with 7 qubits. In 2012, they were able to factor 21 into 3 x 7, which is currently the largest number factored using Shor’s algorithm.
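To see what the quantum computer actually buys you, here's a toy classical skeleton of Shor's reduction (function name and brute-force search are mine): everything below is cheap classically except the order-finding loop, which is the step a quantum computer accelerates, and which here takes exponential time in the bit length of N.

```python
from math import gcd

def shor_classical(N, a):
    """Classical skeleton of Shor's algorithm for factoring N using base a.
    The quantum subroutine is order finding: the smallest r with
    a^r = 1 (mod N). Here we find r by brute force -- exactly the part
    that is exponentially expensive classically but fast on a QC."""
    g = gcd(a, N)
    if g != 1:
        return g, N // g          # lucky guess: a already shares a factor
    r = 1
    while pow(a, r, N) != 1:      # order finding (the quantum step)
        r += 1
    if r % 2 == 1:
        return None               # odd period: retry with a different a
    p = gcd(pow(a, r // 2) - 1, N)
    q = gcd(pow(a, r // 2) + 1, N)
    if p in (1, N) or q in (1, N):
        return None               # trivial factors: retry with another a
    return p, q

print(shor_classical(15, 7))  # (3, 5), matching IBM's 2001 demonstration
```

With N = 15 and a = 7, the order is r = 4, and gcd(7^2 ± 1, 15) yields the factors 3 and 5; the same skeleton with N = 21 reproduces the 2012 result.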
Given that cryptographic keys run into the hundreds and thousands of bits, Shor’s algorithm isn’t going to be a threat anytime soon. But note, too, that it’s focused on RSA, which is the original PKI algorithm. Newer algorithms exist which aren’t threatened by Shor’s algorithm, and work is already underway to develop new quantum-resistant PKI algorithms.
We’ll hear about it when low-bit RSA is easily cracked. And for several years it will be good enough to just increase bit lengths to stay ahead of it. Given all of that, I’d expect it’ll be 20-30 years before quantum computers are a threat to today’s PKI. And by then, PKI will be 20-30 years on from what it is now.
"Our machine performed the target computation in 200 seconds, and from measurements in our experiment we determined that it would take the world’s fastest supercomputer 10,000 years to produce a similar output."
10e10000000000 + 10e10000000000 = 2*10e10000000000
But could we use this quantum processor to aid the rapid design of far more powerful classical computers?