Google claims to have proved its supremacy with new quantum computer (telegraph.co.uk)
256 points by andrewstuart 11 months ago | 229 comments



Quantum Computing/Information researcher here. This article is largely garbage; the original (~2 month old) paper is surprisingly readable [0] and I suggest you check it out. Here's my $0.02: The efforts of the Google team are commendable in that they're trying to squeeze as much out of their noisy systems as possible until error correction is here (they need to, to justify their existence after all) and they are aware of the shortcomings of the paradigm. But personally I don't see anything useful coming out until error correction and the number of qubits are improved by many orders of magnitude to get fault-tolerant QC. The jury is still out on their utility once realized; Shor's algorithm is an obvious one, followed by quantum simulation algorithms that might benefit chemistry and fundamental physics, but it's not as big of a silver bullet as we thought before for simulation. Maybe someone can tell me what fast prime factorization is good for besides breaking encryption.

Also IBM released a paper[1] recently making similar claims ("quantum advantage without complete error correction") which got obliterated by 2 back to back papers [2,3] that did what they did for much cheaper on a classical computer.

[0]: https://arxiv.org/pdf/2304.11119.pdf

[1]: https://www.nature.com/articles/s41586-023-06096-3

[2]: https://arxiv.org/pdf/2306.14887.pdf

[3]: https://arxiv.org/pdf/2306.16372.pdf


Question regarding error correction and logical qubits. From my very limited understanding, logical qubits could mitigate some of the error but use multiple physical qubits for every logical qubit. Has anyone created a quantum computer that has logical qubits? Say a small one with 16 physical and 4 logical qubits, or similar.


No, but the quantum groups are making progress towards essentially making 1 logical qubit -- a common scheme uses a square grid of physical qubits, say with size 7x7, 8x8, 9x9, etc.. as a single logical qubit. This may seem like a moving target, but it is not. There is a specific, concrete error rate on the physical qubits, below which it is known that the logical qubits on increasingly large grids get _exponentially_ more accurate. Once the engineering achieves that noise level, logical qubits in the form of NxN grids where N~10-20 will ~eventually~ follow. I won't venture to predict how long, as building and controlling these grids of qubits while keeping the error rate low isn't easy. Could be years or decades.
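For a rough sense of what "exponentially more accurate" means here, a toy back-of-envelope using a commonly quoted heuristic for surface-code patches, p_logical ~ A * (p_physical / p_threshold)^((d+1)/2) for a d x d grid. The constants below (A, p_threshold, p_physical) are made-up illustrative values, not numbers for any real device:

    # Toy surface-code suppression model (assumed heuristic, illustrative constants).
    A, p_threshold = 0.1, 1e-2
    p_physical = 5e-3                      # below threshold

    def logical_error_rate(d):
        return A * (p_physical / p_threshold) ** ((d + 1) / 2)

    for d in (3, 7, 11, 15, 21):           # d x d grid of physical qubits
        print(d, logical_error_rate(d))
    # Each step up in d multiplies the suppression, so once the hardware is below
    # threshold, modest grid sizes buy large gains in logical accuracy.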


I don't agree. I do a bit of QC in my PhD, but my advisor works specifically on error mitigation, and from what I see, we'll soon (maybe in a couple of years) be there: we'll have state-of-the-art simulations on quantum computers. There will be classical simulations that improve on a specific claim, but eventually they'll die out.

But even if that's not true, think about how good QC can be without being good enough for quantum error correction. You're saying orders-of-magnitude improvements are needed to be able to do EC, but I'm sure useful and interesting things can be done with a log(QV) of 100, only 5 times more than the current record.


You're talking in very vague terms here, what does """interesting""" mean?

Let me give you some numbers. Factorising RSA is order 10^6 logical qubits (and I'm being charitable here), simulating FeMoco is around 10^7. The state-of-the-art error correction is around (again, charitably) 10^3 physical qubits per logical qubit at the moment, which gives us 10^9-10^10 physical qubits necessary for the simplest quantum application. We're at order 100 right now.
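Spelling out that arithmetic (just multiplying the order-of-magnitude figures above, nothing more):

    # Back-of-envelope from the figures above (orders of magnitude only).
    logical_qubits_rsa    = 1e6   # charitable estimate for factoring RSA-scale moduli
    logical_qubits_femoco = 1e7   # charitable estimate for simulating FeMoco
    physical_per_logical  = 1e3   # charitable current error-correction overhead

    print(logical_qubits_rsa    * physical_per_logical)   # ~1e9 physical qubits
    print(logical_qubits_femoco * physical_per_logical)   # ~1e10 physical qubits
    # Current devices are at order 1e2 physical qubits, i.e. 7-8 orders of magnitude short.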

Peter Shor thinks that error correction can go down to 100 physical-per-logical, so that's an order of magnitude shaved off there, but the algorithm itself is pretty basic, and I don't see it getting any better there. Simulation algorithms have much better odds of seeing improvements, as I think the gates used there are rather non-standard and have avenues for better gate compilation techniques.


Well, I was talking specifically about simulations of quantum systems (physical and chemical). Interesting is indeed vague; what I meant was systems that people study outside of attempts to create a (classically) hard-to-simulate quantum system. Say, the 2D Hubbard model or some 2D topological insulator.

Optimization (VQE, QAOA and stuff like that) also is realizable on NISQ. I'm more sceptical about its usefulness but it is hard to say.

Shor's algorithm is definitely outside of NISQ; I don't think anyone serious about QC thinks otherwise. But from there to "no applications for NISQ" is a stretch.


I'm not educated on QC. But wouldn't Grover's algorithm be very useful? I believe it provides a sqrt(n) time search. For QC to be useful, it just needs to make common utilities very fast. I should look more into QC. It would be cool if it could speed all computation up. I understand there are algorithms that it could execute that a classical computer could not, but I wonder if it could eventually be like just faster hardware.


Most general purpose computing is based on algorithms that are essentially O(n).

What that means is that they are a constant times the cost of simply reading the input. Often, the computation is actually cheaper.

Quantum computing might make some computations have a smaller theoretical constant, but it is almost certain that the practical constant ($/bit of input) will be much, much worse due to scale.

Right now, only very specialized computations look like they will ever have any speedup. And even that is only maybe, in practice.


According to a recent article [1], quadratic quantum speedups are very unlikely to outperform classical computers in any application. This is assuming very optimistic parameters for future quantum computers. Grover's algorithm is therefore not considered useful in practice.

[1] https://doi.org/10.1145/3571725
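To see why a quadratic speedup struggles in practice, here's a toy crossover estimate. The per-operation timings are assumptions (fault-tolerant quantum oracle queries are expected to be many orders of magnitude slower than classical search steps), not figures from the cited paper:

    import math

    # Assumed timings: classical search step ~1 ns, fault-tolerant Grover query ~1 ms.
    t_classical, t_quantum = 1e-9, 1e-3

    def classical_time(n): return n * t_classical
    def grover_time(n):    return (math.pi / 4) * math.sqrt(n) * t_quantum

    # Break-even: n * t_classical = (pi/4) * sqrt(n) * t_quantum
    n_star = ((math.pi / 4) * t_quantum / t_classical) ** 2
    print(f"{n_star:.1e}", f"{classical_time(n_star):.0f} s", f"{grover_time(n_star):.0f} s")
    # ~6e11 items before Grover even breaks even, and at that point the classical
    # machine finishes in about ten minutes anyway.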


To load those bits into your quantum computer, you need to spend O(n) time. Most practical searches are O(n).


Not necessarily, you can calculate them on the fly.


How do you mean, calculating them on the fly? You need to at least load the data to perform operations on it.


Grover's algorithm does not require you to load the data to be searched through if it can be generated instead. Grover's algorithm is not necessarily searching through a list but searching through the output of a function. Sure, if your function is an array index operator you need to load the underlying data, but many large search problems are not that.


That clears that up thanks.


I think you have to scale very big before that starts to matter.


Grover's algorithm seems quite useful, though I'd guess it requires an enormous number of qubits (large unordered database) to be useful.


What class of actually useful algorithms are limited by unordered search?


> I don't see anything useful coming out until error correction and number of qubits are improved many orders of magnitude to have fault tolerant QC.

Error correction and control are vital for any system to be reliable.

Thankfully the ability to run simultaneously and vet against multiple outputs should overcome this fault for a time.


This doesn't work because the success probability drops exponentially in the length of the algorithm. It is possible that any real application requires quantum error correction which has not been demonstrated in any meaningful way.
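To make "drops exponentially" concrete, a one-line model (the per-gate error rate is an assumed illustrative number, not a hardware spec):

    # Probability an L-gate circuit sees no error, with per-gate error rate eps.
    def p_success(eps, gates):
        return (1 - eps) ** gates

    for gates in (100, 10_000, 1_000_000):
        print(gates, p_success(1e-3, gates))
    # ~0.90 at 100 gates, ~4.5e-5 at 10^4 gates, ~e^-1000 (effectively zero) at 10^6
    # gates, which is why long algorithms are hopeless without error correction.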


Thanks for your input. Can we all take a moment to recognize what a rare and special corner of the internet HackerNews is? Here we have an article on a highly advanced topic and lo and behold a researcher in the field shows up to add clarity. In a mess of advertising and misinformation that makes up the modern internet, it’s really an island of knowledge and experience that I hope we all truly appreciate.


> Maybe someone can tell me what fast prime factorization is good for besides breaking encryption.

Maybe someone can tell me what alchemy is good for besides turning lead into gold?


Prime factorization only breaks a subset of crypto algorithms, it’s not that useful or game changing alone.


considering that the world is transitioning to post-quantum crypto... everyone moved from gold to digital currencies, pretty much


> Maybe someone can tell me what fast prime factorization is good for besides breaking encryption.

That untapped market of trillionaire hobby number theorists?



> He said: “This is a very nice demonstration of quantum advantage. While a great achievement academically, the algorithm used does not really have real world practical applications, though.

> “We really must get to utility quantum computing – an era where quantum computers with many thousand qubits actually begin to deliver value to society in a way that classical computers never will be able to.”

This seems to be the most level headed take from the paper and an explanation why (if claims are to be believed) we haven't seen anything useful come from quantum computing yet that couldn't be done by classical computers.

Maybe it still is all hype and the reason we haven't seen a useful calculation is because the only problems it can solve better than others happen to be especially contrived. I guess time will tell.


I don't think it's that the concepts are contrived (although the test cases certainly are), it's just that our quantum computers are really early in their development and can't do the more complicated things yet. It's like if we had calculators that took an hour to do each arithmetic operation- the fact that people wouldn't use it doesn't mean arithmetic is contrived, just that it isn't as powerful as better alternatives.

As the quote mentions, we'll need "many thousand qubits" to reach the point where society itself is getting new benefits out of things. These contrived cases help figure out how to scale the whole system.


There's a fundamental question as to whether or not it's exponentially difficult to add additional qubits. If each marginal qubit is 5% more difficult to add than the previous one, the task would be essentially impossible - and even if it wasn't impossible it would turn out to basically be cheating (in that you would be doing exponential work via either the quantum computing route or the conventional route).
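That 5%-per-qubit scenario compounds quickly; a quick toy calculation (the growth rate is the hypothetical from above, not a measured figure):

    # If each additional qubit costs 5% more than the previous one, total cost is geometric.
    def total_cost(n, r=1.05):
        return sum(r ** k for k in range(n))

    for n in (100, 1_000, 10_000):
        print(n, f"{total_cost(n):.2e}")
    # ~2.6e3 at 100 qubits, ~3e22 at 1,000, ~1e213 at 10,000 -- exponential per-qubit
    # difficulty would make scaling self-defeating no matter how clever the engineering.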


Note that there are good physical reasons why the cost of QC may grow exponentially with qubits. Refrigeration is exponentially inefficient as T -> 0, and the gap of a quantum system, which sets the temperature you must cool to, shrinks as you couple new degrees of freedom. This dynamic has been the basic reason for the sub-exponential progress in the area (despite exponential expenditure).


Not for photonic quantum computing. Only detectors require cooling, and it is possible to build adequately sized quantum computers with a constant number of detectors using loop-based architectures.

Even more realistic architectures are very very cost effective on the number of components https://quantumfrontiers.com/2023/06/21/what-is-the-logical-...


Yeah, it always struck me as odd that the main quantum computing research labs went for the "VLSI" approach straight away rather than building room-sized computers with qubits that are known to be more stable and don't require such aggressive refrigeration.

Between the costs of refrigeration/fabrication and the increased speed of decoherence, it's not hard to imagine that the approach of using superconducting qubits may be a dead end, despite quantum ECC.


There are very valid reasons for this. Historically, this is what happened.

* People did very basic qubit experiments with NMR in the late 90s. Very noisy experiments.

* Around 2000 they realized photonic quantum computers would be "easiest" because light experiences very little noise. The problem was that it's insanely difficult to do non-linear interactions between photons (which is necessary to do any sort of non-trivial classical or quantum computing with photons). In 2001, somebody came up with a clever way of doing non-linear interactions by using fast detectors.

* Huge efforts started to try and build photonic quantum computers. Unfortunately, around 2004-05, people started to do estimates and it turned out that the number of sources and detectors needed with the clever way was humongous. Far more than we could hope to achieve, and there didn't seem to be any way of reducing it. People abandoned photonic quantum computing and started doing ion traps and superconducting.

* Interestingly around the same time in 2005, there emerged an alternate method of building photonic quantum computers, based on "cluster states". However, the method also had the same humongous resource problem, but it had an advantage: the framework could be modified and played with to improve it. Over the next two decades, very slowly people figured out improvement after improvement to this architecture to bring down the resource costs.

* At this point, this cluster-state photonic architecture has improved quite a bit and is starting to become very competitive with ion traps and superconducting qubits. PsiQuantum (whose article I shared above) is the leader in this right now. And they might win the race.


Yes, but all existing photonic platforms use post-selection, which is even more clearly exponentially lossy. Although this could be solved with a deterministic single-photon source if one can be found. Photonic quantum computing is an especially funny post-transistor paradigm because classical photonic computing is also quite attractive.


If you read the linked article, it explains how to avoid exponential loss on post-selection.

> Imagine you can toss coins, and you need to generate 20 coins showing Heads. If you repeatedly toss all 20 coins simultaneously until they all come up heads you’d typically have to do so millions of times before you succeed. This is even more true if each coin also has a 20% chance of rolling off the table (akin to photon loss). But if you can toss 20 coins, set aside (switch out!) the ones that came up heads and re-toss the others, then after only a small number of steps you will have 20 coins all showing heads. This large gap is fundamentally why the first whammy is not relevant: To generate a large photonic entangled state we begin by probabilistically attempting to generate a bunch of small ones. We then select out the success (multiplexing) and combine successes to (again, probabilistically) generate a slightly larger entangled state. We repeat a few steps of this. This possibility has been appreciated for more than twenty years, but hasn’t been done at scale yet because nobody has had a good enough optical switch until now.
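The coin analogy is easy to check numerically. A quick Monte Carlo sketch (50% heads per toss, ignoring the 20% loss for simplicity):

    import random

    # "All at once": retoss all 20 coins every round until one round is all heads.
    def rounds_all_at_once(n=20):
        rounds = 0
        while True:
            rounds += 1
            if all(random.random() < 0.5 for _ in range(n)):
                return rounds

    # "Multiplexed": set aside coins already showing heads, retoss only the rest.
    def rounds_multiplexed(n=20):
        rounds, heads = 0, 0
        while heads < n:
            rounds += 1
            heads += sum(random.random() < 0.5 for _ in range(n - heads))
        return rounds

    print(sum(rounds_multiplexed() for _ in range(1000)) / 1000)  # ~5-6 rounds on average
    print(rounds_all_at_once())  # typically around a million (2^20) rounds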


Haha a self promotional blog post without any experimental data from a lab self-interested in the technology is somehow not persuasive to me.

There is no way to add entangled particles to an entangled photon state without new light-matter interaction, which may (and almost certainly will) bring decoherence.

Ie to claim a good enough switch is possible is a claim demanding a ton of evidence, and there is none right now.

Ofc I would be thrilled to see it work! But explaining a scheme in back of the envelope fashion and measuring it are two vastly diff things.


The terms "realistic" and "cost effective" in the field of quantum mechanics do not have scalar definitions. The meaning changes depending on the problem you are working on and the deadline for funding opportunities.


This is not quite true. You only need to keep the qubits at a fixed temperature as you scale the system, so the resources required to add additional qubits grow only polynomially with the system size. Once you have many qubits with a sufficiently low (but constant) error rate, you can do quantum error correction which also only has polynomial overhead.


No, you are missing the fact that if the qubits are coupled, the system's fundamental gap has shrunk, demanding a lower temperature for the same error rate.

You should think more about why your rosy scheme hasn't worked yet; if you can't explain that empirically, maybe you don't quite understand.


The qubits aren't all coupled all the time. The whole point of a 2-qubit gate is the coupling is controllable. Also -- why is the gap of the bare system important? The whole point of QEC is the creation of a decoherence free subspace. Your model is wrong and you don't understand quantum error correction.


The qubits aren't meant to be coupled, but in practice everything is coupled at some level. I find it plausible that having a system with tons of degrees of freedom on a limited energy scale might make it difficult to isolate all the degrees of freedom from each other.


Yeah but these are the most basic of basic problems in QC and people would be discarding approaches that didn’t have potential solutions to this issue. In superconducting QCs, neighboring qubits are not necessarily even resonant, and are in their own little metal boxes. With neutral atoms, you have extremely long lived internal states that barely couple to the environment, and the atoms are macroscopic distances from each other.


> ... quantum computers are really early in their development and can't do the more complicated things yet. It's like if we had calculators that took an hour to do each arithmetic operation ...

Reminds me of the story George Stibitz tells after developing one of the first devices that could be called a computer. Management at Bell Labs was not impressed that he had built a "20000 dollar calculator".

https://youtu.be/qundvme1Tik?t=259


Well a “20000 dollar calculator” is something QC can only dream of


It's more like if we had calculators that could not do any arithmetic operation at all...

Someone someday might figure out how to make the calculator actually calculate, but not yet.


The field of quantum computing HAS to have incremental results, or else nobody will continue to fund it. Unfortunately, it doesn't seem to be presenting great opportunities for incremental results. This doesn't invalidate the field, it just means that it's really hard to tell if you're going to see a pay-off. These things aren't really factored into the risks that investors think they're taking, so it's giving me a weird feeling in the pit of my stomach. I desperately want to see quantum computing work -- I've worked on it for 15 years now. From my vantage point, I see progress, but that's progress in making better qubits, better gates, better architectures, not solving problems. There's an error correction threshold, remember, and you start seeing real results when/if you cross it.

Anyhow, with all these huge beautiful dilution fridges with their fancy gold plating and all that, the expense of QC is a rounding error relative to AI. It's still high-risk/high-reward and has seen a level of investment that's commensurate with that.

Also, I don't know who all thinks that these physicists like myself are making beaucoup bucks working on QC. I make far less than a software engineer at grubhub or something, and have taken the opportunity cost of getting a PhD and all that.


Quantum computing feels like nuclear fusion. Been hearing about how it's going to change the world since I was a kid.


Quantum computing has been just as sexy a topic as stable fusion.

Those were/are goalposts. We've never actually /seen/ the goal though, and the posts are a collective hallucination. It IS a pretty nice hallucination though.

Since we as a species don't quite understand quantum interactions, it could be quite some time before we can build ordered and predictable systems out of those interactions.

It could take days/weeks/months/years/decades/millennium... who knows? The fun is in watching and playing.


Well, in fairness, I could say the same about solar, improved batteries, etc.


Solar has become incredibly inexpensive in the last decade, and batteries likewise are many times better than they were 20 years ago. Is there further to go, yes, but progress has been steady and real applications are all around you for both. Our phones are marvels of battery technology that was impossible 15 years ago.

Fusion and quantum computing on the other hand. Neither has yet solved one problem or made net progress. They have great promise but the practicalities may ultimately outweigh the benefits. Meanwhile, solar-charged batteries powering high performance electric cars are more common every day.


You think there has been no net progress in QC or fusion? There has been a lot of progress, but not to the point where we can make practical use of them yet.

Some problems are just harder than others. Maybe some of them are not soluble, or some are not economic. We won't know if we don't try, and they hold out great promise.


> solar, improved batteries

Neither of those sounded as sexy (i.e. hyped) to me as nuclear fusion and quantum computers.


> solar, improved batteries

>> Neither of those sounded as sexy (i.e. hyped) to me as nuclear fusion and quantum computers.

Breaking News: The Gates Foundation has actually deployed a product -- Solary!

Meet Solary, The Gates Foundation replacement for salaries and a new way to power your entire home. This $1,000 (price pending review) device can provide a multi-family, multi-story, dwelling with power for 100 years. The software called Solary GoFY (possibly a subscription service) serves as a full and complete monitoring solution for the premises!

--

See, solar can be sexy!


Eh? A random person making up a fictional press release doesn't count as hype, you know that right?


It’s a milestone on a roadmap. Science like this doesn’t happen in eureka moments - it’s a methodical slog through our ignorance where we latch in wins bit by bit over decades. Quantum computing, AI, fusion, curing cancer, ending aging, they’re pinnacle achievements at the extended play of Civilization.


A potential hidden bonus: if quantum computing theories get enough attention, maybe people will start to consider using the quantum computing abilities of the human mind with conscious intentionality (since by the time we get computers to be able to do it, it may be too late)!


What are these capabilities of which you speak? Or have I missed the sarcasm?


In many other worlds gp got tremendous karma for their post. That's how quantum mental abilities work.


Do you mean: use a quantum random number generator and emit random words? Chances are that in some world the comment is insightful?


I’m guessing he’s referring to the Roger Penrose stuff — https://en.m.wikipedia.org/wiki/The_Emperor%27s_New_Mind , etc.


Well, generate reality for one. It's not a trivial detail, but it tends to be dismissed or taken for granted (or downvoted lol).

Someone may venture into this territory some day, and perhaps that someone will find some travelling companions to make the journey more exciting and productive....time will tell!


Generate reality? I am not crystal clear on what you're talking about. Quantum computers are still just computers, not magical replicators from Star Trek.


I'm talking about the mind's ability to generate what "is", and "is not". It is a curse no doubt, but it is also sometimes a blessing. But usually it runs unnoticed (or even: denied) in the background, duffing things up.

And when it gets noticed, sometimes really weird things happen.


> And when it gets noticed, sometimes really weird things happen.

....such as?

If you have any proof you can probably get a nobel prize.


I was thinking op meant something like realizing everyone has their own take on NULL and none of them are really wrong or right.


This is pretty close...although some can be correct, but correctness is not necessarily justified (such as when guessing what "is probably" true, which is why the J in JTB exists, and perhaps also because people like to take the T for free, as doing otherwise "is pedantic", which almost always has consensus agreement - reality domes don't maintain themselves!!).


But where does the quantum come in? You don't need it to have/manipulate consensus.

What are the "weird things"?


> But where does the quantum come in?

My thinking is here:

https://en.m.wikipedia.org/wiki/Quantum_superposition

> You don't need it to have/manipulate consensus.

Right, it's obliviousness to (or ideological, indoctrinated denial of) the phenomenon of extra variables/state that enables the manipulation of consensus. (I'm looking at you, scientists.)

Also, if these higher states did not exist, I'd think the problem might either be impossible in the first place, or impossible to detect. (Noteworthy: having flawed methodologies can also make them impossible to detect.)

> What are the "weird things"?

Hallucination is the most obvious, but I would also say: just look around!

Or for a more serious analysis:

https://slatestarcodex.com/2014/07/30/meditations-on-moloch/


Everyone "already knows" these things, so no awards or even surprise awaits one.


If you can change bits that have already been observed, that would be very surprising to people.

Otherwise I'm not sure what you mean by "generate" but almost any kind of influence would have massive potential.


Surprising to some, pleasing to others, infuriating to others. But to those who have mastery over it: powerful.


The trick is you can only flip as yet unobserved bits. If we know the asteroid is coming we can't delete it.


Some bits can even be flipped in broad daylight while everyone's watching. One of the most popular ways to flip bits is by telling stories in which the bits have been flipped.


The power of stories is that you are alone in the telling of them. The survival, your own undressing, or redressing, the terrible fury of living life, constrained to the teacup of bedtime for children. The power that storytelling takes is a kind of leadership that is grudgingly respected, like a real bastard's funeral.

It doesn't have much to do with physics.


> The power of stories is that you are alone in the telling of them.

I don't know about you, but I've noticed a pattern where the stories people tell on social media and even in person often have an uncanny resemblance to the stories that are told the day before in mainstream media.

Heck, if everyone plays their cards right, you can often even get a nice (profitable) war going!

> It doesn't have much to do with physics.

Mostly agree (the jury is still out on the hard problem of consciousness), it's metaphysics... and we all "know" what that "means": "woo woo"...or so they say.

And, that realm often leaks into the physical realm, much to our horror/delight/confusion.


> (or downvoted lol)

Just use your quantum mind to change that downvote to upvote, problem solved.


Dumb question:

Say I have a wooden stick and I break it in half in less than a second. Assume a computer would need several minutes to simulate everything that would've happened in the stick. I clearly got the output faster than a computer (and with more precision), so does this imply I'm doing anything particularly fascinating?

I assume the same scenario is possible to concoct for a quantum computer. I assume it wouldn't be particularly interesting either. So what are the criteria for distinguishing those scenarios from the "interesting" cases? And how do we know which case this one is like?


The difference between you breaking a stick and the computer modeling it is that you've measured nothing. You don't know, with any precision, the amount of force you used, the rate the stick broke at, how much mass remains in the two pieces and how much was lost to splintering, etc.

In other words, assuming the computer model has sufficiently accurate data as an input, it can produce significantly more refined output than a human can through trial.

In fields where human trial is exceptionally inefficient- molecular physics, chemical synthesis design, structural material design, etc- a sufficiently fast computer will allow for faster design iteration and perhaps even development of new processes and materials.

More concretely, if you wanted to design a new stick that breaks under exactly the conditions that you intend it to, then you can skip a lot of trial and error if your computer can accurately model the materials you are designing and testing.

Think construction, civil engineering, energetic chemistry, biological processes (medicine design) etc.


> You don't know, with any precision, the amount of force you used, the rate the stick broke at, how much mass remains in the two pieces and how much was lost to splintering, etc.

But I can measure those with a ruler and a scale. Both before and after the breakage. Takes a few seconds, and I'd need to do that before punching those numbers into the simulator anyway. And I can be precise with where I apply the force, etc. So I don't see how this gets to the point of the question.


You really just answered this yourself:

> Assume a computer would need several minutes to simulate everything that would've happened in the stick. I clearly got the output faster than a computer (and with more precision), so does this imply I'm doing anything particularly fascinating?

You assert that you have output faster than a computer with more precision. However, you do not have any empirical data, just observable data; as stated by zdragnar:

> The difference between you breaking a stick and the computer modeling it is that you've measured nothing. You don't know, with any precision, the amount of force you used, the rate the stick broke at, how much mass remains in the two pieces and how much was lost to splintering, etc.

Then you further state that you can measure those with a ruler and a scale; however, this inherently takes time with significant uncertainty in your measurements and calculations. Whereas a computer will provide all of those numbers.

The other thing to consider is the method of simulation, such as finite-element analysis (FEA), and the resolution you need. You can get segmented data all the way down to a specific volume of that stick; good luck with the hand calculations on that.


I don't think you guys are tackling this right. You can observe the stick breaking to absurd precision, even using an electron tunneling microscope, and produce immense amounts of data about the stick's state at every point in the breaking and afterwards, faster than a computer can simulate breaking the stick. The key is that the stick breaking and its observation at any level of detail happen in parallel, with all aspects of the system coherently acting in real time simultaneously, with perfect effects and transmission of information between all quantum parts of the system at the speed of light. The computer, on the other hand, is a binary-encoding Turing machine forced to rely on imperfect, relatively serial (compared to the parallelism of reality) encodings and algorithmic mathematical models that produce results similar to what reality produces naturally. The issue is the layers of abstraction, not the measurement and detail of the system. All the measurable information about the stick and its breaking is available and measurable with no impact on the time it takes to break the stick.

Here’s perhaps a more clear example. The universe influences itself at a distance with all bodies acting on all other bodies at once. This is a classical nbody problem where n->inf. We can measure quite easily these effects without perturbing the system, which would not be able to progress a single iteration in the entire span of the universe if simulated by a computer.


But the comparison is not fair.

Yes the electron microscope can image a lot of details "in parallel" but not all details from all angles, all internal microfractures. You can't easily measure all temperature gradients in all cubic nanometers of the material etc etc.

The simulation is slow because it works at that level and thus as a side effect will also give you that output.

Obviously if you don't need all that information you may find another route to arrive at the results you want


You could though, it’s entirely fair. All information about the system is available for measurement down to the quantum level across all dimensions, because it’s a perfect fidelity to reality. Regardless of whatever specific tool you choose, in theory I could build tools that measure in near infinite detail the stick. I could not however build a computer that could calculate that detail of the stick with perfect fidelity to reality. It’s a gross approximation no matter what you do. I would however point out that my nbody example is entirely analogous and is perhaps more tractable as to why it’s impossible to compare measuring reality to calculating a simulation. Simulations have the advantage that you can reproduce and tweak it repeatedly without having to configure the universe in a specific way every time. You can hold variables constant, you can change the laws of nature, you can simplify complexities to narrow in on a specific behavior. But simulations are never a replacement for measurement of reality, and the reason is the opposite of what you’re holding out. Reality can be measured to an arbitrary level of complexity literally in real time, even systems that are so complex we can’t consider possibly simulating even approximations of the system.


> I can literally measure those with a ruler and a scale

You can’t measure a number of internal stress-strain conditions during the moment of failure. You can’t repeat the experiment with the same stick. The best way to get a fast intuition for why the simulation is superior is to take an entry-level CAD course with a focus on material design.


> The best way to get a fast intuition for why the simulation is superior is to take an entry-level CAD course with a focus on material design.

I'm sorry but you're completely missing the point of my question. The question was not "what can simulations do that experiments can't". My background in simulation is not zero, and I've never had that question. The question was something else entirely.

> You can’t measure a number of internal stress-strain conditions during the moment of failure

But what if that's not what I'm interested in simulating or measuring? What if all I care about is something easily measurable, like the length of each remaining piece? Isn't that precisely my point here? There are so many things quantum computers can't compute either - yet we seem to be judging them by what they're really good at. Just like with the wooden stick. So how do I tell if they're the same sort of scenario or not? What's the distinguishing criterion?


> What if all I care about is something easily measurable, like the length of each remaining piece?

If you're not just talking about the two large pieces, but also the lengths of the small splintered pieces, I suspect the computer is going to be a lot faster at figuring that out than you will be manually measuring them.

Your responses in this thread seem to be a bit disingenuous: first you ask why a computer simulation could be better than doing it manually, but then when people tell you the things the computer can do faster/better than you, you say "but what if I don't care about that?" Well, duh, if you don't care about anything but the most trivial things, the computer probably isn't going to be of any benefit to you.


> first you ask why a computer simulation could be better than doing it manually

No, I never asked this at all. Some people changed the question to this for reasons I still don't understand. The actual question I asked was: "what are the criteria for distinguishing those scenarios from the interesting cases? And how do we know which case this one is like?"

The whole point here is to figure out if the problems they're declaring computational superiority on are analogous to the wooden stick problem here. Not to ask "when are simulations better than doing it manually".


Are you just asking if these are unrealistic, contrived performance benchmarks? Those have always existed and they're fine: https://news.ycombinator.com/item?id=20231084

Some are more useful than others. There's no strict criteria, just as there is no perfect way to fully characterize CPU performance.


"Unrealistic" or "contrived" don't really get to the heart of the matter, at least not as I understand them. The stick example can be extremely useful and realistic - it doesn't really make any difference to the question.

Perhaps another way to phrase it (though I'm not 100% sure this is equivalent) is: how do they know whether whatever they're accomplishing is quantum computation, as opposed to something else (like analog computation) that happens to be concerned with quantum physics.


You're going to measure every point like this? https://www.researchgate.net/profile/Andrzej-Baier/publicati...

Likewise, you can crash a car in a lab and do a simulation of one. Both tell you "fascinating" things and are still done by engineers.


This has nothing to do with the question though.

> You're going to measure every point like this?

Why would I need to? Is "tells you a bunch of answers to questions you never asked along the way" really the distinguishing factor for what constitutes computational supremacy? If all I wanted was just the lengths of the broken pieces, my simulation has to tell me the stress and strain at every point in order to be considered superior to a numerical simulator? Merely telling me the answer to the question I asked doesn't count? So if my prime factorization tool factored 4096-bit integers instantly, that's not enough? It has to factor a bunch of unrelated numbers and maybe solve TSP in the middle before we can consider it superior to classical computation?


Now if we had a way of measuring it, it would be interesting to ask what sort of model of computation we could derive from the breaking of sticks


I cannot wait to tell you about the cutting-edge technology my team is developing — it's called the twigchain and it's going to change everything.


Not a dumb question!

There is a great video by the mathematician Richard Borcherds on this exact objection to current examples of quantum supremacy.

https://www.youtube.com/watch?v=sFhhQRxWTIM


I love this video, thanks for sharing it. I'm watching it right now and I think I'm going to refer to his "teapot supremacy" from now on.


I declare AlexCoventry supremacy. No one can compute AlexCoventry better than I can. :-)


Wait until you see a GPT-5 simulation of yourself.


There is also an interesting response to that video from a leading quantum computer scientist, Scott Aaronson:

https://scottaaronson.blog/?p=5460

He actually goes into significant details about your exact question and proposes some resolutions. It's quite long and I don't think I can do it justice in a TLDR so leaving the link to speak for itself.


Thanks for the link! I'll take a look.

Edit: I just read it and I don't feel like he really answered the question. He had a rebuttal about being able to freely specify parameters, but then his Facebook friend addressed his rebuttal, to which he replies "this is indeed more interesting"... which, well, it certainly is, and also doesn't answer the question!

(Well, I guess he also mentions we should expect speedup greater than Avogadro's number, which seems fair, but clearly that's not the bar anyone is claiming to meet, so it doesn't get us anywhere.)


One would be the direction of entropy. Breaking the stick is not "particularly fascinating" because you're going in the direction of increasing entropy. However, _putting it back together_ is. In the simulation it takes no more effort to go one way or the other, while you probably cannot put the stick back together no matter how hard you tried.

A quantum question that is "interesting" would also be similar to finding order out of disorder (e.g. factoring).


If I provide two model sticks to a physics simulation, it can work out how to put them back together with no more effort than it took to model the break?

Like I can model shooting a cannon ball out of a cannon and it will tell me where it lands. Or I can model a cannon ball sitting on the ground and it will tell me where the cannon was?


A hypothetical perfect simulation should be able to do it both ways indeed!

However, the current consensus take of quantum uncertainty means _if_ such a simulator exists, it cannot be of our universe. Or, more likely, such a perfect simulator does not exist.

Of course all of this is about a hypothetical of a hypothetical at this point...

(This is the same problem as entropy (our current understanding of it). We know it is increasing one-way w.r.t. time, and we can imagine what it means to "reverse entropy" and that there's nothing really theoretically preventing that, but we can't build an actual machine to do that.)


I really love this comment -- it "feels" like the right criterion -- but I guess I'm wondering what criteria (if any) researchers actually use right now. Do they have any criteria for this?


I'm not a quantum computing researcher but I have friends who work on it. The way they've described this to me (answering a question around "how does one know what quantum stuff is bs") was that it's largely a set of known "hard" questions that we've empirically found hard for classical computing but maybe could solve with quantum. In a way it's been described as similar to crypto primitives or P?=NP problems where we don't know for certain why it's harder one way vs the other but it seems to be the case based on our current knowledge. It could possibly (but unlikely) be the case that we eventually find classical solutions that are just as fast.

They don't seem worried about this though, since at least research-wise just working with quantum as a new tool to solve problems itself is intellectually interesting (and the "solves classically challenging problems" is a good way to frame the work's potential impact to general audiences). Also, there's other supposed uses of quantum beyond just computing, like for communication etc.


> that we've empirically found hard for classical computing but maybe could solve with quantum

Are we talking complexity classes here, or just "problems we've found hard in practice, but that which may in fact be in P even if P != NP"?

Like I would've thought that solving BQP problems like integer factorization would be the criterion (since they're proven to be faster under QC assuming P != NP etc.). But evidently the researchers aren't holding themselves to that standard, so what bar are they using exactly? Are they just going with some sort of "if it walks like a duck then it's a duck" criterion, or are they using problems that they can formally prove lack efficient classical solutions under a well-regarded hypothesis like P != NP?


Based on what I've heard, I don't think the bar is as high as "formally proven to lack a classical solution" for the whole field. Then again, the caveat is of the two people I know, one is doing cryptography/security in a quantum setting and the other is working on quantum related HW... so take this with as much salt as needed.


I see, thanks!


Nice comment


You are right, basically anything can be viewed as a computation. Ray Kurzweil discusses this extensively in his book The Singularity is Near, building upon previous work by Edward Fredkin.

> To appreciate the feasibility of computing with no energy and no heat, consider the computation that takes place in an ordinary rock. Although it may appear that nothing much is going on inside a rock, the approximately 10^25 (ten trillion trillion) atoms in a kilogram of matter are actually extremely active. Despite the apparent solidity of the object, the atoms are all in motion, sharing electrons back and forth, changing particle spins, and generating rapidly moving electromagnetic fields. All of this activity represents computation, even if not very meaningfully organized. (From The Singularity is Near, Chapter 3, The Limits of Computation)


You're not modeling or predicting anything though. That's like saying "what if I built a bridge that failed on the first day? A computer would need several days to calculate all the forces that led to the failure, but my bridge failed just fine without any computer help".

Well... yes. But try building a bridge that doesn't break.

Or to keep with your scenario, try to predict exactly where and how your stick will break before you break it. You can't.


Does Google's implementation of quantum computing help with this sort of scenario, or is it a really fancy way of breaking the bridge?


Yes, and no. Don't ask me rhetorical questions if you don't want a rhetorical answer.


Well you don't have to sacrifice any sticks or bridges to see what happens, so that's a plus.

One could imagine simulating experiments that are implausible to actually perform.


Yes, but that's nothing new -- we've had computers that were used to solve simulations faster than digital computers could by setting up electronic (or hydraulic, or pneumatic) parts that simulated the terms of the differential equations: https://en.wikipedia.org/wiki/Analog_computer

Is this quantum computer another iteration on the same concept, or is it solving things that are polynomially harder to simulate classically?


Oddly enough, I think this can be answered by headlines alone. Imagine you could do anything remotely interesting with a relative speedup of 47 years to 'instant', where we can say an instant is 1 millisecond. That'd mean in one day of calculation, you could do things that'd take a current era supercomputer more than 4 billion years to achieve. And so on upward from there. Yet we're focusing on this odd and relatively boring metric of 47 years?

That suggests that the implications intentionally provoked by the phrasing "Google’s quantum computer instantly makes calculations that take rivals 47 years" are obviously false, even if the statement itself is true in some extremely specific context. So in other words, when genuine quantum progress has been achieved - I don't think we'll need to debate it, as the results would speak for themselves.
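For what it's worth, the 47-years-to-4-billion-years arithmetic above checks out:

    # Checking the parent's numbers: a speedup of "47 years per instant", with an
    # instant taken as 1 millisecond.
    seconds_per_year = 365.25 * 24 * 3600
    speedup = (47 * seconds_per_year) / 1e-3          # ~1.5e12x
    one_day = 24 * 3600
    print(one_day * speedup / seconds_per_year)       # ~4.1e9 years of rival compute per day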


If you can control your experiment perfectly, then you don't need the simulation.

Now break the wooden stick in half while on the moon. In space. At absolute zero. While under acceleration. While spinning. While bombarding it with 10^100 neutrinos/second. And do it with 100'000 randomized iterations of the wooden fibers.


> so does this imply I'm doing anything particularly fascinating

If you mean "useful", then I'd like to point out that breaking rods is a common task in many labs all over the world, exactly because it can calculate some things better than any non-rod computer we have.

If you literally meant "fascinating", then you'd have to ask yourself. For a lot of people, it is.


Your wooden stick is dedicated hardware for computing the behavior of wooden sticks. Fascinating? Pretty subjective—but one of the best concrete proxies we have for "fascination" is "economic value" (cf OpenAI's AGI definition). Is it going to make (or save) you a lot of money to quickly find out how that stick broke? Probably not.


I think you can make the case that, for example, going out, taking a stick from a forest, breaking it, and recording the sound will be much more economical than trying to simulate a realistic stick-breaking sound -- in particular in developer time trying to model sticks and wood-breaking sounds in a convincing way (suppose we need, say, an art asset for a game).

It's something complexity theory isn't well prepared to tackle, at least on first look. Complexity theory doesn't take into account the cost of devising an algorithm itself. And complexity theory usually only is interested in asymptotic behavior -- different in both ways from what we have (n=1 and algorithm cost). I think simulations win only when there are significant gains to be had over just doing it in real life, i.e. when you are only interested in a simple aspect that can be summarized by a model, or when for some reason the real life event uses a lot of energy or costs significantly. An extremely accurate simulation of wood and wood breaking, and sound, I think would be quite computationally costly -- the wood is already very complex (not as complex as the base atomic reality -- you probably don't need to model every single atom -- but still very complex). Sticks are cheap, and breaking a stick is easy. Breaking the stick doesn't seem to take much energy, and realistically stick breaking itself (although perhaps not stick collecting) comes out ahead energetically!

It gets quite complicated quickly, and it really depends on what you want to model, but I think it's interesting to see when and if simulations are advantageous (for example, you could take the cost of raising a tree into account, or not! is the stick "free" because it would grow and decay anyway?). I think in the long term many things can be simplified into what we're interested, so models increasingly win. But realistically (in our limited lives and limited universes :)) I think there will always be a place for physical experimentation, taking pictures of sticks and recording the sound as they break. (and hopefully our relationship with trees and sticks itself still lasts for a long time...)

I say that as someone who really enjoys modelling nature! :) I think modelling and trying to recreate natural process (sometimes with an artistic touch as well) is a great way to understand and appreciate the wonders of the universe.

Note: Also, even with ideal reversible 0-cost computation, there's likely to be some kind of polynomial time overhead to 1:1 simulation of reality. This can be enough to justify experimentation instead of simulation, in some cases, when your model itself isn't a significant simplification over reality.


Complexity theory doesn’t handle it, but that’s why I didn’t use complexity theory. Economics handles it fine: no one’s ever paid you or anyone else to do a wood-breaking simulation at a sufficient physical level of detail to capture the sound accurately, so the comparative savings on it aren’t economically valuable.


Sound effect libraries have long included the sounds of wood breaking/cracking, generally recorded (rather than simulated). A couple of jobs ago, we sold sound effect libraries that cost 2-5x the cost of the software applications that they were used with (hundreds to thousands of dollars).

I think you’d be hard pressed to find economists (beyond freshman college students who recently discovered Nietzsche second hand at a party) arguing for a purely direct monetary expression of value, but even they would have to admit that recordings of broken sticks in snow covered forests have non-zero value.


You misunderstand me. It doesn’t matter how much the sound library is worth: if you weren’t using simulation to build it, then it’s irrelevant for the economic value of the simulation.

Like, you can turn lead into gold if you run a nuclear reactor—and gold is valuable—but running this process is way more expensive than just buying gold, so it’s not valuable.


That’s classic cost benefit analysis, and there are plenty of examples in audio of trade offs where the simulation is sufficient (or more flexibly applicable). Just because the cost is high for a given application doesn’t mean something has no value. Bringing the cost down is the classic engineering conundrum.

Let’s take reverberation as an example:

- Folks used to record in big rooms for big room reverb. It’s nice, nuanced, and comically impractical. Place mics around the room and capture the effect live, but even your close-mic’d take is going to have the room in it.

- So some studios started piping audio out to a speaker in a secondary room and piping a mic from that secondary room back to the board. Boom. Configurable reverb, at your service.

- But you can’t take it with you, so spring and plate reverb devices showed up.

- Enter computers and DSPs, where simplified reverb models could be applied in real-time.

- And then on to impulse capture and convolution (e.g., pop a balloon in a church, record, convolve).

- And then beam tracing and other techniques approaching physical simulation (but orders of magnitude less computationally complex).

And all but piping to a second room were used in making the libraries of sounds that we sold.

However, throughout history, each of those techniques went through times when the cost-benefit balance led to using some other technique(s) to achieve the desired result. That didn't render those techniques worthless just because money hadn't traded hands. It's why economists describe markets as mechanisms for price discovery. What something had sold for is a way to understand value, but it's not the sole definition of value (even monetary value).


Indeed, not a disagreement, just a contribution to the commentary :)


I wonder about that frequently. The universe 'executes' physics in, as far as I can tell, a realtime basis (at least within the local reference frame). What is that called as compared to computing a model of the same.


Of course for you it feels like realtime, because the physics of “you” are being executed on the same “system”.

It’s funny to think that if we are living in a simulation, that the machine we’re running on might have horrible uptime, but we’d never know because our “time” only works when the machine is running!


Just like a tick rate in a game


Why do you think time dilation needs to occur in our simulation? Be it from instance-shard transition due to cell crossing (special relativity) or inter-node lag due to spatial density (general relativity), system design constraints dictated the very rules of our “physical” existence.

(Poe’s Law notice: yes, I’m joking)


If you need to measure a billion sticks breaking the simulation might start to come out ahead in terms of cost.

I think this makes more sense if you use a bridge instead of a stick.

Building a full-sized bridge to test your idea is prohibitively expensive, but a simulation is cheaper to build even if the actual simulation is slower to compute.

In real life, simulations, small prototypes, and full-size prototypes are all useful, with various trade-offs of cost and benefits at various stages.


I don't think this is a stupid question at all- it actually touches on some pretty interesting topics. The human brain is absolutely fascinating, in particular because it is extremely energy efficient and fast at the tasks it does. The more we learn and understand how the brain is working the more we'll be able to improve a ton of computer science topics, from AI to signals processing and a whole bunch more in between.


Any problems where the configuration space is large, and you want to find some optimal configurations to the problem, would in theory benefit since you can directly map the configurations into the entangled qubits. Entangled qubits give you the ability to physically represent large configuration spaces.

The difficulty is ensuring entanglement between qubits, scaling up the qubit count, reducing noise between the qubits and the other physical parts of the quantum computer, error correction, generating the circuit to represent the optimization problem, formalizing a proof that the total time of quantum computation (computation + preparation) is less than the time to simulate on high-performance computers, and what not.

There are several YouTube videos where some company has mapped their problem onto a quantum circuit and claims it provided solutions to optimization problems that they couldn't have found classically, but I dunno, I guess it would really require A/B testing between classically computing it on an HPC versus a quantum computer.


When you break a stick you just break a stick. When you simulate breaking a stick you know everything about the broken stick.

What if you broke the stick to study its material properties? Then you would need to spend months carefully taking samples and measuring all the broken spots.

With a perfect simulation? You're done the moment it ends. All the data is there available with a copy paste


"With a perfect simulation? You're done the moment it ends. All the data is there available with a copy paste"

As far as I understand, to perfectly simulate reality, you would need a second reality.


Well of course the simulation is a simulation. They are speaking of the ideal case. A real simulation is an application of a model, the physics of the molecules and structures interacting. They are saying that in the ideal case, once the simulation is complete, you have complete knowledge of the entire system at every moment in time. Of course there's a resolution to all this. The time steps are large compared to the Planck time and the wave functions are approximations; everything is an approximation, that's what makes it a simulation.

But the principle is that within your model, once the simulation is complete you know _everything_. You have measured every aspect of reality possible from your simplified version. If your model was arbitrarily close to reality you really would know everything


The key is things are interesting if they’re interesting to someone. The fact you can break a stick and observe in detail in real time at a minute level better than a computer is very interesting to many people. Botanists, biologists, material scientists, architects, industrial engineers, etc. It’s just not interesting to you.

Likewise even contrived quantum experiments like this aren't interesting to many people. They're not practical. But they're profoundly interesting to people interested in quantum computing. They're milestones on a roadmap. There are and will be things quantum computers won't do better than classical computers, and there are things in reality neither will simulate, and those things are interesting to someone. In fact I think this is a crucial part of basic science: EVERYTHING is interesting. Even the lack of something interesting is certainly interesting to a psychologist :-)


The key detail you're missing is that the problem must be written down. All details must be specified, so that you can unambiguously tell how well it's being solved.

Yes, you can write down a problem inspired by a specific stick. And yes, the written down problem will of course inevitably differ in the details from the real stick. Normally you'd think of these small differences as small errors in the description. But as soon as you try to argue the stick is computing the written-down problem, they become errors in the stick because they affect its ability to implement the problem you wrote down.


It's like saying you can breathe, yet a computer might take a long time to simulate every chemical process taking place in your body in those 2 seconds of respiration.


Regardless of how good our classical or quantum computers get, simulating “complex” natural systems is ultimately limited by our ability to observe and measure the physical properties of the “components” (I put quotes here to acknowledge the difficulty/impossibility of delineating system components) of that system. I’m not sure quantum computers will help with that.


You're missing the point of the question.


I’m saying that for some problems, neither classical nor quantum computers may arrive at an acceptable answer.


Did you really get the answer faster than a computer? Or, put another way, what question did you answer? How much force was applied to the stick before it broke? How fast was the stick moving 0.04s after breakage? How much sound was produced? There are thousands of questions you don't have answers to when you simply break a stick.


I think that was his exact argument against quantum computing.


seems more like an argument for quantum computing


I think a more fair comparison would be that you know how to break the wooden stick so that you end up with one piece measuring exactly 152 mm and you can repeat the feat every single time with other wooden sticks of different dimensions crafted from different trees under different environmental conditions.


I wouldn't think so in the case of a stick breaking, but if you consider a more realistic scenario like running a risk analysis across a hyperdimensional problem space, there is certainly something extremely interesting going on there.


If you're in the stick-making business and need to know how sticks break, then yes.


The best thing about watching the entire quantum computing debacle is that every time somebody says their QC does something faster than a classical computer, somebody tunes the classical computer to beat the QC. Combine that with the fact that chip infrastructure is already paid for and there is a large market enabling economies of scale, plus the lack of true problems where exponentially better problem solving is critical to business or military success, and we're going to see a series of QC claims that really just prop up classical computers for some time.

(I used to work in supercomputing and chemistry and await the day we have useful QCs doing simulation better than what we can do on supercomputers)


With all of the sensationalization, I am now convinced that many of the people in quantum computing itself are grifters, and that funding agencies keep paying them to put pressure on classical computing (despite knowing that QC is a big grift).

Already, optimization problems have seen classical "Ising" chips get great results (as good as the quantum annealing chips), proving that optimization doesn't actually need tunneling or superposition.

Now, the number theorists just need to come up with a classical analog for a QFT or another similar transform, and we will see them obviate QC here. Arguably, RSA being obsolete has already done this.

In simulation, supercomputers are also making huge advances in power.


In my experience in QC, I've yet to see many grifters. In fact, many of the people in quantum computing are quite open about the fact that we aren't that close to "useful" quantum computing, that the number of practical problems that quantum computers will beat classical computers with could be quite small, and are actively worried about over-hyping. People selling today's QCs as a solver for generic optimization problems are indeed grifting, but that is not most of the community.

Despite all these problems, myself and much of the community still think QC is worth attempting --- for my part, the application to quantum physics is the main motivation, and one in which it's relatively certain that QC will not "become obsolete". (It's also still perfectly valid to _research_ how to make QC useful in various types of classical problems, including optimization, and it's plausible that progress _could_ be made that would open up more widespread uses of QC. The line, for me, is when people misrepresent the likelihood of success of that research.)


When people are hyping something that is probably still 20 years out, I think it's safe to say they are trying to sell you something.

Quantum computers will be a real (albeit not earth-shattering) advance, but we are still far off.

Still, I'd rather people pump money here than into crypto BS.


Getting research funded is part of the process by which things that are 20 years off actually get developed. People seem to think it's just fated that we will have fusion on X date etc as though it's independent of how much effort is put into it. This kind of research is expensive on an individual scale, but very much a drop in the bucket at the scale of the US national budget.


Quantum computing has been 10 years away for the last 30 years.


I don't think that is true; I imagine most people thought it was much more than 10 years away 30 years ago. I think 30 years ago people still weren't sure if it was physically possible at all.

In the meantime, real progress has been made. If we continue at this rate (big if) we will eventually have quantum computers. Just not tomorrow.


I think it’ll be like AI: perpetually 50 years away until some big discovery is made and everyone stops laughing.

For AI it was the “attention is all you need” paper. I’ve been told that for QC it might be error correction that scales but I’m not knowledgeable enough to know if that’s true.

I expect that nuclear fusion will follow the same trajectory of being 50 years away until it’s suddenly 0 years away.

The reality for all these is that steady progress is constantly being made but isn’t visible outside experts in the field. Then it reaches a critical mass and you have the “chatgpt moment.”


> QC it might be error correction that scales but I’m not knowledgeable enough to know if that’s true

"Error correction that scales" is basically what the definition of a working quantum computer. That is less the breakthrough needed so much as what the breakthrough would lead to.


Comparing to AI doesn't look so good for QC. The research that led up to the attention paper has a consistent 60-year history of producing lots of useful applications. You may not remember this, but in 2012 there was a computer vision "watershed moment" around the CNN architecture. Before that, there was the DARPA autonomous driving challenge and related research in robotics. Going back further, there have been many other big breakthroughs in ML/AI with immediate application, down to the perceptrons of the 60's and hidden Markov models of the 70's. In other words, the field that eventually became AI was pretty much immediately useful.

The relative lack of useful intermediate results raises major red flags for me, but it doesn't seem that physicists are all that bothered by it. Indeed, the high energy theorists have been at it for longer with a lot less to show for their work: At least we have a few toy quantum computers, while there is still nothing testable about string theory.

Honestly, I hate to say this, but I think it's going to take a major war for QC to have its watershed moment (a "Manhattan project"), and I think the odds are that it will actually work if it comes to that. However, the window is kind of closing on the usefulness of the technology as post-quantum encryption starts to get legs, and I don't really want another World War 2...


I’m curious, what “real progress” did QC make since the 90s?


Well, they said 30 years ago, so since 1993?

On the theoretical side, Shor's algorithm was invented 29 years ago, which is what started the hype in the first place. 30 years ago is 1 year before anyone really cared.

In the late 90s and early 2000s you start to see progress on quantum error correction, which is key progress necessary to make this all work theoretically.

Starting in the mid-2000s you start to see toy proof-of-concept realizations (devices with a few qubits). They aren't very useful, but realizing a physical device is the first step to doing anything at all. These keep getting better and better.

In the early 2020s you start to see devices that can perform computations that, while not particularly useful, are complex enough that they would require a supercomputer to do classically. This, in my opinion, is quantum computing's "hello world" moment. Writing a hello world program is still very far from, say, writing the Linux kernel, but it's the step where things get real.

Like most things in science, most of this is not zOMG breakthrough, but small gains compounding over time to create real progress.


The cited paper [1] references "random circuit sampling", which is defined in [2], which then gets so heavy into abstract math that I give up.

Can someone explain this in terms an EE or programmer can understand?

[1] https://arxiv.org/abs/2304.11119

[2] https://arxiv.org/abs/2007.07872


> The theoretical basis for these experiments depends on sampling the output distributions of random quantum circuits; unfortunately, understanding how this theoretical basis can be used to define quantum supremacy is an extremely difficult task. Anyone attempting to understand how this sampling task relates to quantum supremacy must study concepts from random matrix theory, mathematical analysis, quantum chaos, computational complexity, and probability theory. Resources connecting these concepts in the context of quantum supremacy are scattered and often difficult to find.

At the end of the day, people writing these papers are also human. The very same sorts of humans who pad resumes with shiny technology and self-serving complexity.

This might be a case of a rabbit you do not want to chase.


> This might be a case of a rabbit you do not want to chase.

I'm sure the researchers will all collectively realize this, if it were to be the case, and disregard their 12 year academic journey with their great salaries in favor of research into more important topics like world hunger, renewable energy, etc.

I'm so sure that the percentage of sureness is an imaginary, quantum-entangled value between -7 and 13 billion percent.


I research QC. Feel free to explain how my skills could be used to meaningfully contribute to the problem of world hunger or renewable energy while also keeping me employed --- going back to college to train in a different field isn't feasible. If it's not so easy, then perhaps it's logical that QC researchers and AI researchers and software devs aren't all jumping ship to work on "more important topics"...

You may just be spouting off, but I genuinely am asking. If someone else dissing QC research wants to make a pitch for a concrete plan on how to make a difference in the world with a physics PhD and years of experience in scientific computing, drop me a message.


"academic journey" and "great salaries" generally don't go together.


[2] is actually a phenomenally easy read for a physics paper, but it does assume an undergraduate physics degree's level of background.

First, it's important to know that a quantum state (written as |letter>) is not directly observable. We can think of it as some vector to which we can only apply an operation to get out a scalar observable.

A quantum circuit is a series of operations on a quantum state that creates a different quantum state, and it can be modeled as a matrix U which is unitary (meaning U†U = I, where U† is the conjugate transpose).

The goal of "sampling a random quantum circuit" is trying to find the probability of observing some given observable if you keep applying random quantum circuits to a given input state.


Like reddit's ELI5 (explain it like I'm five), but ELIHAM (explain it like I (only) have a masters)?

Yeah, if someone could please do that.


Are you familiar with logic circuits (those made of gates like AND, OR, XOR, NAND)? Just as they are the building blocks of classical computers, the building blocks of quantum computers are quantum circuits.

Quantum circuits are made of quantum logic gates like Hadamard, CNOT, Z, CZ, etc. Instead of bits as inputs and outputs, quantum logic gates have qubits. Unlike boolean logic where bits are 0 and 1, a qubit is a 2D vector [α β] where α and β are complex numbers, corresponding to a superposition of the zero and one basis states: α * |0> + β * |1>. You can visualise a qubit as a point on a sphere, the so-called Bloch sphere [1]

There are multiple ways to implement a qubit, but you need to start with some quantum phenomenon. An example is the polarisation of a photon, so horizontal could be |0> and vertical polarisation could be |1> and the qubit is represented as complex vector of these two. If you've studied linear algebra you know manipulating a vector often involves linear transformations. Any linear transformation can be represented as a matrix - so applying gates is just doing matrix multiplication. Unary gates are 2x2 matrices and binary gates are 4x4 matrices - for photons they would be implemented with mirrors and optical waveplates. Measuring the polarisation at the end is the output. The output is not deterministic but it always follows the same distribution, so you could design a circuit that has |001> X% of the time, |010> Y%, |111> Z% of the time, etc. such that X + Y + Z + .. = 100%.
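
A toy illustration of "applying gates is just matrix multiplication" (my own sketch, not the commenter's): a Hadamard on the first qubit followed by a CNOT turns |00> into the Bell state (|00> + |11>)/sqrt(2), so measurement yields 00 or 11 with probability 1/2 each.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard, a unary (2x2) gate
    CNOT = np.array([[1, 0, 0, 0],                   # CNOT, a binary (4x4) gate
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    psi = np.array([1, 0, 0, 0], dtype=complex)      # |00>
    psi = np.kron(H, np.eye(2)) @ psi                # H on the first qubit
    psi = CNOT @ psi                                 # entangle the two qubits

    print(np.abs(psi) ** 2)                          # -> [0.5  0.   0.   0.5]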

I'm not too familiar with the details of random circuit sampling, but the idea is that you start with a big circuit that wasn't intentionally designed and therefore has no known properties we can exploit - instead it's a random mess of transformations to the qubits. A classical computer cannot run big quantum circuits - N gates with the 49 Google qubits requires something like 2^49 * N^3 classical gates, so it won't be able to calculate the output distribution. However, what we can do is run the quantum circuit many times (do measurements on the quantum computer) and collect many samples. Given enough samples, a classical computer can verify whether there's consistency between them and whether an actual transformation produced them (and therefore quantum computation happened) or whether it's just pure noise/garbage, using cross-entropy benchmarks [2].

Note that the purpose of the "random" in the random circuit is to introduce hardness and prevent cheating (assume that the classical computer is the "opponent" of the quantum computer); the circuits don't calculate anything useful / of human value.

What's interesting is that once people with supercomputers saw the benchmark formula and analysed the constant factors, they found a loophole which let them run a classical algorithm which generates measurements/samples that satisfy the benchmark with 40K classical CPUs for a week, or even a single A100 within 140 days. Some of their success was due to the sheer power available and some is due to algorithmic cleverness (see: tensor networks). In my opinion, they are only disproving the Sycamore supremacy in a fussy way.

[1] - https://en.wikipedia.org/wiki/Bloch_sphere

[2] - https://en.wikipedia.org/wiki/Cross-entropy_benchmarking
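
To illustrate the benchmarking step described above, here is a toy sketch (mine, with a single Haar-random unitary standing in for a real layered circuit) using the usual linear cross-entropy score F = 2^n * mean(p_ideal(sampled bitstring)) - 1: samples drawn from the true output distribution score near 1, while uniform noise scores near 0.

    import numpy as np

    rng = np.random.default_rng(1)
    n, shots = 5, 20000
    dim = 2 ** n

    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    U, _ = np.linalg.qr(A)                      # stand-in for a random circuit
    p_ideal = np.abs(U[:, 0]) ** 2              # ideal output distribution for input |0...0>

    def linear_xeb(samples):
        return dim * p_ideal[samples].mean() - 1.0

    good = rng.choice(dim, size=shots, p=p_ideal)    # samples from the true distribution
    noise = rng.integers(dim, size=shots)            # a noisy / cheating device
    print(linear_xeb(good), linear_xeb(noise))       # ~1 vs ~0

As the comment notes, a classical spoofer only has to produce samples that score well on this benchmark, not reproduce the full distribution, which is the loophole the supercomputer and tensor-network teams exploited.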


> Can someone explain this in terms an EE or programmer can understand?

Unfortunately not. The physicists who are behind quantum computing don't think the same way, and go straight for the abstract math to solve any problem. I am pretty sure they don't understand that most of our progress on computing up to this point is because you don't need to go into Galois fields or discrete math to describe what a computer or algorithm does. I'm also pretty sure that they don't know of another way to think about it.


The second paper shared by the GP is actually just about as dumbed down as you can feasibly get. The problem is that A) it uses some basic jargon, because it's fair to assume the only people interested in the result are people who understand the jargon, and B) it's describing a completely impractical problem, designed to maximize the difference in performance between a quantum and classical computer rather than do anything particularly useful.


(I don't have a background (or interest) in quantum computing.) I've tried some of the "chat with PDF" websites for other topics. It seems like they could help you understand the paper. If you give it a try, could you report back here on how it worked for you?


> references "random circuit sampling"

It's either Monte Carlo or bullshit.


Google's and IBM's previous "quantum supremacy" demonstrations were quickly crushed by improved classical simulations. Let's see if it survives this time.


The main issue I had with their demonstrations was that the problems they chose were essentially "let's show that a quantum computer is better at being a quantum computer than a classical computer". Obviously they are, just like I'm better at being a human than ChatGPT is.

I'm not saying they suck, but to proclaim that "quantum computers are superior" you need an actual use-case, IMO.


They're not claiming quantum computers are superior.

They're claiming they have a device which can do something that you couldn't do with a classical computer. A very narrow claim.

It is a big deal to academics, and a number of people here are close to that world and interested.


It's absolutely non-obvious, I'm dead serious. As far as I've understood keeping up over the years, there's a credible argument from a respected academic in the QC community that quantum computing is impossible.


It's not impossible (from a theoretical, and possibly physical, perspective), but it's likely that there are no problems which would motivate the necessary investment to demonstrate a QC doing something useful which we couldn't also do in reasonable time on a classical supercomputer.

Kind of like fusors: sure, you can do nuclear fusion in your garage but you're putting in far more energy than you are generating.


Do you have a link handy? You've piqued my curiosity!


Here's exhaustive coverage: https://www.scottaaronson.com/democritus/lec14.html

I picked this up in bits and pieces from blogs over the years; here's an attempt to render that useful, which is surely nitpickable:

TL;DR: there are two types of caring about Google's claims of quantum supremacy:

1. HN tends to assume these articles are about a breakthrough in _product usefulness_ and then tsk-tsks about the lack of impact; c.f. the top comment currently gravely noting, after reading the paper, that it's too noisy and a long way off.

2. The papers are about demonstrating _there is quantum computing at all_; the interest for people in the know is in settling that question, and the raw strongman feat isn't particularly interesting in itself.


From the article linked by parent:

«To summarize, I think that arguing with skeptics is not only amusing but extremely useful. It could be that quantum computing is impossible for some fundamental reason. So far, though, I haven't seen an argument that's engaged me in a really nontrivial way.»

«Very little of what we do in theoretical computer science is directly connected to a practical application. That's just not what we're trying to do. Of course, what we do has applications, but indirectly. We're trying to understand computation. If you take that as our goal, then it seems clear that starting from the best physical theories we have is a valuable activity. If you want to ask a different question, such as what we can do in the next five to ten years, then, that's fine. Just make it clear that's what you're doing.»


This shouldn't leave anyone with the impression they are cranks. Scott's claim in this lecture is that he hasn't yet found making a counterargument "really nontrivial".

To wit, the article names Leonid Levin and Oded Goldreich as the primary skeptics.

Leonid has 10K citations, 2012 Knuth Prize, 37 h-index.

Oded has 63K citations, 2017 Knuth Prize, 95 h-index.

What Scott Aaronson considers "really nontrivial" is far beyond our comprehension, and he's making a claim about synthesizing a counterargument, not settling the entire argument.


Would you have any links handy for those?


True, but as someone who works on quantum physics using classical computers and loves to see classical computers simulating increasingly hard quantum problems, these "crushings" were still only done on classical computers in a way that would scale exponentially with the number of qubits.

Even then, I believe the classical simulations to beat google's 53 qubit device were never actually performed -- it was shown that they could be performed quickly with petascale memory but actually doing it would be a huge expense. Add a dozen more qubits and even the hypothetical classical challengers fall off quickly...


"This is a very nice demonstration of quantum advantage. While a great achievement academically, the algorithm used does not really have real world practical applications, though." Not having real world applications is not necessarily damning, of course. Curious to know what implications this has for general algorithms. Reading this, it almost makes it sound like there will be some algorithms that quantum is better at and that they won't necessarily be the same algorithms that "classical" computing uses.


Quantum computing is not a competitor for general computing. It solves certain categories of problems much faster, but not most of them.

In fact, nobody yet really knows what those problems are. There are a few real-world problems that quantum computers could theoretically help with, like factoring large numbers, but nobody is really all that close to implementing them. There are problems that can be implemented, like this one, but they are of no use whatsoever. They just happen to be feasible to implement.

The implementation does suggest that they're building up a toolbox of physical parts that could some day (years, less than decades) be used on real-world problems. The class of such problems appears very small so far, but there is reason to think that we could find other problems that matter. (For example, in the realm of AI, which is very compute-intensive -- though thus far nobody seems to have any specific implementation.)

Think of a quantum computer as a magic box on the side that solves a few problems. If you had it, you might completely reconsider what kinds of problems you want to solve. Solving it via the magic box might require a complete reconsideration of how you frame the question -- like discovering a wormhole that lets you get from Atlanta to Seattle instantly, so how would you redesign a trip from New York to LA?


> It almost makes it sound like there will be some algorithms that quantum is better at and that they won't necessarily be the same algorithms that "classical" computing uses.

This is exactly correct. While you can simulate classical computations on a quantum computer, it doesn't make a lot of sense to. However, there exists a class of problems for which there exist solutions that have lower time complexities than anything we can run on a classical computer, if only we had a quantum computer of sufficient size.

Examples of such algorithms are the famous two, Shor's and Grover's algorithms, of which Shor's is more interesting because it factors integers in polynomial time whereas the best known classical algorithms take super-polynomial (though technically sub-exponential) time (this is the one that's scary for cryptography). Grover's algorithm is basically function inversion (find the x given a function f and some desired value y such that f(x)=y) in O(sqrt(N)) over the domain of the function, which on a classical computer requires O(N) operations.

Those were discovered in the 90's, so they're pretty widely understood by now. There's a few newer ones, such as the quantum algorithm for linear systems of equations, which solves for a scalar result of an operation applied to the unknown vector x such that Ax=b in O(log(N)k^2) time versus the classical O(nk), where n is the number of matrix elements and k is the condition number (roughly, a measure of how sensitive to error the problem is).

Basically, there's a handful of useful algorithms, and another dozen or so theoretically interesting ones, that can run faster on a quantum computer. For everything else, there's zero advantage to using one.
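
To make Shor's algorithm slightly more concrete, here is a toy Python sketch (mine, not from the thread) of the classical scaffolding around it. Everything below is cheap number theory except finding the period r of a^x mod N, which is brute-forced here; that single step is what the quantum Fourier transform does in polynomial time on a quantum computer.

    from math import gcd

    def factor_via_period(N, a):
        g = gcd(a, N)
        if g != 1:
            return g, N // g                      # lucky guess: a shares a factor with N
        r = 1
        while pow(a, r, N) != 1:                  # find the period -- the only hard step classically
            r += 1
        if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
            return None                           # unlucky choice of a, retry with another
        return gcd(pow(a, r // 2, N) - 1, N), gcd(pow(a, r // 2, N) + 1, N)

    print(factor_via_period(15, 7))               # period of 7^x mod 15 is 4 -> (3, 5)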


My understanding is that quantum computers only have two real use cases, as of today:

1. Breaking crypto.
2. Simulating other quantum systems.

For (1) it's basically all downsides. For (2), unless you're a particle physicist you'll never need quantum computers.

But that's now. Maybe there will be a killer app for it some day, changing everything. Or indeed, we could get it indirectly. Maybe simulating quantum systems we could invent new battery technologies, which in turn changes our lives.


(2) can be very relevant to material science and chemistry. Those things have huge ranges of important practical applications.

For example, people currently struggle to do accurate calculations of the band structures of materials and it doesn't look like there will be much progress there using only classical computers (using more computing power or better approximation tricks). Big enough quantum computers could do this. Band structure calculations are very interesting for pretty much all semiconductor development.


Indeed. That's what I meant by my example of indirect benefit.

Simulating physical systems better, faster, and more completely, can have many practical applications.

But unless you're already today trying to simulate physical systems like that, then QC probably won't help you one bit.


If it is able to break crypto then surely that means it can do other "interesting" mathematical calculations that are currently extremely slow/hard though?


That's not all that clear.

The problems QCs could solve regarding crypto are from number theory. That's basically a branch of math that doesn't really describe any real-world stuff. Cryptography is as far as I know the only practical application of that stuff.

I couldn't think of any implication of "being able to factor large numbers quickly" or "calculating discrete logarithms quickly" that does not relate to cryptography. But I'd be curious if others think these would have any implications beyond "we'll have to get new algorithms into our cryptography".


Nope.

QC allows a very specific attack that breaks the main asymmetric algorithms today, ECC and RSA - there is no meaningful attack on anything else (Grover's algorithm is technically a sqrt improvement on symmetric algorithms but that isn't close to breaking AES256).

There's nothing general purpose in the attack on RSA and ECC: you can use the quantum Fourier transform to find the period of the key, and that period tells you the key. But this is very specific - you have two algorithms that have inherently periodic behavior, where the period is meant to be secret, and so all you need from the QC is the period.

It's hard to see how to extend that to other problems.

I'm more curious about (and kind of wish more emphasis would be placed on) programmable analog computers, which is what it seems QC should be capable of. A lot of the difficulty with QC is that people seem really fixated on discrete problems (like factoring \o/) where there's a single correct answer, and so huge amounts of effort and research are going into error correction, etc. There are lots of problems (simulations, as in this circuit) where the classical solution simply requires running millions of times with random variance to converge on a sufficiently accurate result, and where a QC would theoretically be great. The general problem in that case becomes "how do I convert this classical problem into a QC-compatible algorithm".


So far, to my knowledge, not faster than a classical system. We've been searching for decades for a problem with a better quantum solution than a classical solution, and it's been a tough slog. These are, at least thus far and from my limited perspective, very special-purpose accelerators. Shor's algorithm is the one thing that comes to mind tbh.

"Breaking crypto" in this case isn't anything other than finding prime factors of an integer.


Possibly. This is way beyond my knowledge.

But I'm not aware of many other "search this finite part of the number line for this property", where "this finite part" is still too big for classical computers.

It almost sounds like quantum computers are tailor made for the types of problems we've been building cryptosystems on.

But maybe this is not all quantum computers will be able to do. I couldn't even explain exactly how they apply to simulations of quantum systems, but have only taken more knowledgeable people at their word.


"Search this finite part of a number line" is probably fairly accurate in describing many Mixed Integer Programming (MIP) techniques. Such that, if you do get a breakthrough at that, you may see larger advances in optimization.


Exactly, ML is a potentially very valuable task for more mature quantum computers due to the ability to perform much more powerful searches than what we have to accept on classical computers.


Basically, quantum computers can solve problems of the BQP class in a time that is a polynomial function of input size. These same problems, on a traditional computer, can take exponential time.

It is hard to tell with certainty what can be done with quantum computers, but having powerful quantum computers will certainly open new fields of research, like trying to figure out if quantum algorithms for NP-class problems have better complexity than classical algorithms. Simply finding subexponential algorithms for problems that would have required exponential time on a traditional computer, may prove valuable.

Basically, outside of the immediately available problems, this opens up an entirely new complexity class from which to attempt to crack other classes of complexity. Trying to break NP problems with Turing machines, even if not successful in an academic sense, has proved immensely valuable and profitable. By getting another architecture, we get a shot at similar breakthroughs.


This is so extremely well put that I want to reply just to highlight it.

What you said is the optimist's view, that there's a world of possibilities. It's just that at the moment we only have the two I mentioned.

Maybe there's much more, and if we build it (the hardware), they (the algorithms) will come.

But for now really just the two. AFAIK.


I think there are some other areas where QC is likely to deliver better results than classical computers beyond these two. Modelling turbulent flows is one such example. They certainly aren't common, but I think this overstates the case somewhat.


The real question to me is how long it will take to get more (useful) qubits.


Do you know if there are any arguments in the literature to the effect that actually useful quantum supremacy may never be possible in principle (not merely in practice)?

What I mean is: decoherence with environment must be held at bay long enough for the final answer to appear in the qubits with high probability. Based on examples from thermodynamics (such as impossibility of extracting useful work from a heat reservoir without a second, lower-temperature reservoir into which to dump heat), I could imagine a scenario where decoherence would always foil the computation at some point, perhaps at some intermediate time or in the final measurement. In particular examples, this would show up as irksome experimental limitations, but in fact these limitations would all stem from a deeper prohibition. And to the extent that decoherence could be managed (such as in the cited experiment), the resulting computation would not be "useful".


The ability to entangle particles feels like a really amazing new capability. (New in the last 100 years anyway). Like it’s a brand new kind of substance that has never been made before. Everything in all of history has been built out of boring old atoms, not these fancy new entangled things.

Even if quantum computers turn out not to be able to solve interesting problems, I wonder if computers are the only thing we can make out of it.


Meh. They're just quasiparticles made out of bog standard particles.


Electric space heaters? That's crypto's most practical application.


Crypto makes extremely expensive space heaters.

On the positive side, they are just as efficient as regular space heaters in producing heat from electricity.


Nope.

That is now overtaken by AI (especially LLMs) [0], which does not have any efficient methods of training, fine-tuning, or inference.

Crypto, on the other hand, already has alternative consensus algorithms available today which can cut its emissions down significantly [1].

[0] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...

[1] https://consensys.net/blog/press-release/ethereum-blockchain...


I know nothing about quantum computing, but when I read "does X instantly when rivals need 47 years", I immediately think about it being abused for brute forcing passwords and encryption keys. Would it be possible?


Government institutions have already been instructed to start using quantum computer safe encryption algorithms. Internationally, governments have been storing tons of encrypted data that they can’t currently decrypt because they expect to be able to decrypt it soon using quantum computers.

There are already quantum computer safe encryption algorithms that have been released to the public. I don’t know exactly how they work, but one of them was about reverse engineering a path through multi-dimensional space.

Expect SHA256 to disappear soon and be replaced.


sha256 is a hash algo. you probably meant something else?


Yea. I think maybe this is what happens when I read something I don’t fully understand but want to share the info. I think they’re switching their encryption algorithms and also they were looking into how quantum computing affects SHA256. Would have to find the article I was reading.


Why are you being downvoted? We need to have way more discussion about this. The answer is: no one knows. In fact, even classical computer algorithms might soon break modern crypto, especially elliptic curve crypto.


Not yet!


I don't remember the details, but the reason the 2019 paper was challenged was that Google did not take into account quantum simulation optimization techniques when estimating how long the computation would run on classical computers. I wouldn't be surprised if there's a similar situation here, where unconsidered optimizations prevent this from being quantum advantage.


As I recall, they did not take the error rate into account. The result was dashed by somebody looking at their error rate and saying "we can compute that distribution to within 5%* with a classical computer." One would hope that they learned from this, but historically, supremacy claims have been rather short-lived.

* number pulled out of a hat


The main challenge for the Google quantum supremacy result was that calling it a computer or what it does a computation never made any sense.


I feel like quantum computing is one of those technologies that is going to be a lot of hype and excitement by researchers, but no real applications and no real impact to the general public.

... Until the day that it does.

And then, a lot of very interesting things will happen very quickly, and the world will change massively.


I wonder if Google is the Xerox of our era, developing promising technologies only for them to be made practical and commercialized by other entities? The transformer is one example. I'm not sure if the recent advancement in quantum computing is another.


The fact that the calculation would take 47 years on a classical computer is only a breakthrough if the calculation is something we actually want to do. Otherwise it just means that we have created an unnecessary problem designed to be solved quickly by a quantum computer.

That isn't to say that this quantum computer isn't impressive and a step forward. Just that the comparison isn't really meaningful unless the problem being calculated transcends computer architecture--meaning it is actually interesting to solve regardless of the architecture being used.


No one is claiming that general purpose quantum computers are useful right now. Instead, people are trying to prove that certain physical devices they have built behave like the theoretical model of a quantum computer.

To help prove that, people have found contrived problems that don't care about errors but that theoretical quantum computers can solve exponentially faster than classical computers. If these physical devices can solve those problems, that is proof that they are on the right path to one day solving actually interesting problems like Shor's algorithm.

There are of course some caveats. For one, the test problems discovered so far are believed to take exponentially more time on a classical computer, but there is no rigorous proof yet (in fact, it is not yet proven that there doesn't exist a classical algorithm that could run as fast as Shor's). The second is that the current quantum computers are so small compared to classical ones that, even with an exponential advantage, they can't always solve things fast enough to prove without a shadow of a doubt that they have this advantage. So, improving this time difference compared to the largest known classical computers is still a worthwhile goal.


Can anyone recommend a few videos, papers, or books that "start at the beginning" for how quantum computing works--the physics parts like entanglement. I'm a computer scientist, so I'm good with the algorithm/programming aspects. Should I start with quantum annealing? Ideally I guess I'd like some pointers to good approachable resources on the ways qubits are made, maintained, written to and read from--start me off at the quantum version of the transistor. Thanks HN.


This may be a bit on the theoretical side, but it's the best reference for quickly getting up to speed on how quantum computing and entanglement works that I know of: https://cs.uwaterloo.ca/~watrous/QC-notes/QC-notes.pdf

As for how actual physical qubits are maintained, that topic is tough to find a good explainer for, since every group has a different approach. Some groups use trapped ions, some use two-dimensional quasiparticles, some use spin states of photons... and no one has a clue which approach will be most practical in the end. It's analogous to not knowing whether clockwork or vacuum tubes will be the better way to implement a Turing machine.


If I understand correctly, the limited amount of "expressiveness" a quantum computer has restricts its ability to solve useful problems.

Couldn't you simply express useful problems as a function of problems that the quantum computer can already solve? Doing so might be extremely inefficient, but given that there's so much performance leeway, it still might end up being comparable to today's supercomputers?


Seems like more of what we've seen: a quantum system is really good at... simulating a quantum system. Water is wet, whoopie.


Quantum computers make it practical for us to solve problems at much faster speeds, but at what cost? Quantum computers change the quantum state of particles in parallel universes, and likewise quantum computers in other universes change the quantum state of particles in our Universe. What is the effect of this? We should expect to see random otherwise unexplainable changes, and that’s exactly what we’ve seen over the last 10 years, it’s called the “Mandela Effect.”

But despite the name referring to an important historic figure most “mandela effects” are tiny almost insignificant changes such as the spelling of The Bernstein Bears or the presence of a Cornucopia in Fruit of the Loom logo, the kind of thing that quantum changes could alter. I realize this is probably too far out there for most Hacker News readers and I’ll get down-voted for it, but I believe there may be a real danger here, and it’s something we need to watch out for as we develop quantum computers further. If we don’t take the threat seriously the Universe we live in now may become different from the one we remember, and we will need to provide safe-guards to retain the consistency of our Universe.


Does someone know for reference how big of a number that computer can factor? Or some other complex problem that is not highly theoretical...


Roughly 15 (all current attempts to factor bigger numbers cheat in various ways). The basic problem is that you need at least a thousand qubits to do anything remotely useful (especially if you need any error correction).


So this is similar to the FizzBuzz post where they are trying to make the fastest possible algorithm, even though it is really useless.


I'm sure it's faster than simulation, but in what sense is this a computer and in what sense is that a computation?


Imagine trying to explain this article to a caveman, or even someone from 200 years ago.


So it can generate lots of random numbers very fast... cool. /s


How exactly does this make their search engine better?


You forget that their real business is ads.



