
The fact this prize exists is admitting that no one has figured out a use for quantum computers.

I have heard this mentioned several times in the last decade or so: "The only thing a quantum computer definitively does better than a classical computer is simulating a quantum computer."

Whether this capability is useful is up in the air.

Note that in practice, classical computers are going to be better at factoring numbers for the foreseeable future.




There have been XPRIZE competitions for vehicle efficiency, oil spill technology, more efficient rockets, health sensors, AI systems, genomics, etc.

Whether or not quantum computers have practical applications, the prize itself is not evidence of that.


> There have been XPRIZE competitions for vehicle efficiency, oil spill technology, more efficient rockets, health sensors, AI systems, genomics, etc.

All of which are based on existing technologies that have been delivering for decades if not an entire century (vehicle efficiency). Even something as nebulous as "AI systems" has been around for twenty years in the form of Google's original semantic search capabilities.

This "Quantum AI" prize, however, is a solution in search of a problem.


There are plenty of well-documented uses for quantum computers; the hardware is just too nascent to fully accommodate them. The most powerful quantum computers today still have only just over 1,000 qubits.


I don't think this is totally accurate.

If you have significantly better quantum computers, you can solve realistic problems, yes.

But what's not being spelled out here is that as far as we know classical computers will still totally smoke them unless you allow a large probability of inaccurate results.

And if you are fine with inaccurate results, classical randomized algorithms set a much harder bar to beat.


What is the benchmark, in number of qubits, for something useful in real-world applications?


20 million physical qubits to break RSA 2048: https://quantum-journal.org/papers/q-2021-04-15-433/.


Physicist here. It highly depends on a bunch of factors (the type of qubits, the error correcting code, the error rate, the algorithm…), but a ballpark number for practical usefulness is 1 million physical qubits.

Keep in mind that qubit requirements keep tumbling down as people work hard to squeeze out as much as possible from a limited number of qubits.


Assuming what the public knows about is the state of the art, of course, which I doubt is a good assumption to make. I'm sure major governments have been funneling billions for years into secret projects to be the first to be able to break the (non-post-quantum) communications of everyone else.


Einstein did not have GPS in mind when he was developing his theories of relativity.


The theory of relativity does not in any way enable GPS. GPS is subject to (some) relativistic effects, but that is merely a source of bias, which could be corrected for with just an experience-based correction factor even if we did not understand relativity. If relativity did not exist as a physical concept, GPS would be easier, not harder or impossible. (I guess this misconception comes from xkcd in some form?)
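
For a sense of scale, rough back-of-the-envelope numbers (a sketch assuming a circular GPS orbit and rounded constants, ignoring Earth's rotation and orbital eccentricity):

    # Rough relativistic clock-rate offset for a GPS satellite (assumed round numbers).
    GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
    c = 2.998e8          # speed of light, m/s
    r_earth = 6.371e6    # Earth radius, m
    r_sat = 2.657e7      # GPS orbital radius (~20,200 km altitude), m

    grav = (GM / c**2) * (1 / r_earth - 1 / r_sat)  # gravitational blueshift: satellite clock runs fast
    vel = (GM / r_sat) / (2 * c**2)                 # velocity time dilation: satellite clock runs slow

    drift_us_per_day = (grav - vel) * 86400 * 1e6
    print(drift_us_per_day)                         # ~ +38 microseconds/day, i.e. ~11 km/day at c

So the bias is large, but it is a steady rate offset, which is exactly the kind of thing that could be dialed in empirically.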

A perhaps more relevant example: Einstein did not have the cell phone camera in mind when developing his theory of the photoelectric effect.


Heh. This is an interesting comment. Imagine if we didn't know about relativity - we would have discovered it as an annoyance/weird quirk when we ran into it.

Reminds me of that story about the self-evolving chip that was tasked to learn how to classify tones and instead took advantage of specific flaws in its own package.


A more relevant example would be that Einstein did not predict how to make a laser when he discovered the theory of the stimulated emission of radiation (the "SER" in "LASER").

The photoelectric effect had been well known for decades; Einstein had just given a good explanation of behavior that was already known from experiments. It would have been equally easy for the designers of the first video camera vacuum tubes, which were used in early television, to design them based only on the known experimental laws, ignoring Einstein's explanation.

On the other hand, the formulae of the stimulated emission of radiation, complementing the previously known phenomena of absorption and spontaneous emission, were something new, published for the first time by Einstein in 1917. They are the most original part of Einstein's work, together with general relativity, but their practical applications are for now immensely more important than the applications of general relativity, which are limited to extremely small corrections in the results of some measurements with very high resolutions.

The inventions of the masers and lasers after WWII would not have been possible without knowing Einstein's theory of radiation.


I’m pretty sure humans still knew about the speed of light/radio waves being limited to c, which is all you need to know to develop GPS. Time running slower on GPS would be an issue eventually, though. Relativity does make it easier.


How could he? There were no cell phones.


Yes, that was the point of the parent comment.


Thanks for reaffirming Poe's law. I was amused by how 'cell phone' was taken as a given, when talking about a CCD sensor.


I believe that most, if not all, cell phone cameras have cheaper CMOS sensors, not CCD sensors (which have a lower image noise, but they need a more expensive manufacturing process, less compatible with modern digital logic and more similar to the manufacturing processes used for DRAM).

AFAIK the CCD technology continues to be used only in large-area expensive sensors inside some professional video cameras, in applications like astronomy, microscopy, medical imaging and so on.


Quite true; even full-frame DSLRs have used CMOS sensors for some time now.

CCD was the first thing that came to mind as 'charge' is right in the name.

Out of curiosity, looked up invention dates for CCD 1969 and CMOS 1963 and CMOS sensor 1993 (quite a gap). I was playing with DRAM light sensitivity in the lab in the late 80's. I'm guessing CMOS had too much noise to be useful for a long while or something.


No, it was not taken as a given, it was an example of a very common product that digital image sensors enabled. I could have chosen e.g. digital cinema cameras, but they would not nearly have the same profound effect as cell phone cameras have had on society.


Survivorship bias. There is a lot more science that has not panned out.

I'm not saying quantum computing won't pan out, but if it is to, there's some fundamental piece still missing.

In contrast this effort is trying to imagine and monetize GPS before relativity is discovered.


That's not the point. The point is that a lot of discoveries and inventions wouldn't have happened if it weren't for researching just for curiosity's sake. Research results will often be useless for product development or capitalism in general. However, focusing research on achieving specific goals only might actually take you further away from your goals. You can't focus on something you don't know exists, you have to discover it first.

Maybe, when we have quantum computers, one nerd makes an accidental discovery that enables us to build a room temperature superconductor, and maybe not. But if we don't let people research freely what they're interested in and only things that will pan out, we're going to lose out on a lot of things.


I agree.

I didn't say quantum computing research is useless.

My point is that we are not at a stage where we can offer a small prize and find monetizable uses for it.

Fundamental research requires a lot more funding than this.


> In contrast this effort is trying to imagine and monetize GPS before relativity is discovered.

The theory of relativity was discovered decades before GPS. Similarly, the theory of quantum computing was discovered in the 1990s.

I agree with the sentiment: this is trying to find applications for a technology (a large fault-tolerant quantum computer) that doesn't exist yet. I just think relativity is the wrong comparison. I do not think this effort is worthless just because we don't yet have fault-tolerant quantum computers. Theory alone can take one very far.


The origins of quantum computing give it a clear use: simulation of many-body systems.

Number factorization and anything else in BQP is also a use for them.


Looking forward to when leetcode problems require BQP complexity analysis


We might never retire but at least we’ll grind out leetcode in our 60’s while on adderall, ozempic, lions mane etc…and won’t feel a day over 55!


Don't tempt me with a good time!


N-body problems are usually nonlinear.

Quantum everything is linear; how is this distinction overcome?

Also, doesn't solving this problem hint at quantum gravity?


From Feynman's 1982 talk on quantum computing:

"[N]ature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."

0. https://s2.smu.edu/~mitch/class/5395/papers/feynman-quantum-...


>and if you want to make a simulation of nature, you'd better make it quantum mechanical

That is an absurd claim, if taken by itself. Most of nature relevant to us behaves classically. If you want to do simulations of a building or a car or an earthquake or the climate, modeling it as a quantum system would be absurd.


He's obviously talking about the nonclassical regime, where the fundamental quantum nature of reality matters.


You can simulate quantum mechanics with classical computers pretty well, as long as you stick to the Copenhagen interpretation.


No, even simulation of pure quantum states scales exponentially with the number of degrees of freedom. That’s irrespective of any interpretation or of invoking non-unitary evolution like measurements; it's just pure simulation of the Schrödinger equation. If you simulate an environment, e.g. to incorporate wave-function collapse or measurement operations, you’ll work with a master equation that also grows linearly with the complexity of the density-matrix simulation.


Feynman's lecture explains why classical computers are terrible at simulating quantum systems.

The basic problem is that the number of states grows exponentially with the size of the system. You very quickly have to start making approximations, and it takes an enormous amount of classical computing power and memory to handle even relatively small systems.
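
To put rough numbers on that (a sketch assuming the naive approach of storing the full state vector, one 16-byte complex amplitude per basis state; cleverer methods trade memory for time or accuracy):

    # n qubits (or two-level degrees of freedom) need 2**n complex amplitudes.
    for n in (30, 40, 50, 60):
        gigabytes = (2 ** n) * 16 / 1e9
        print(n, "qubits:", gigabytes, "GB")
    # 30 -> ~17 GB, 40 -> ~17,600 GB, 50 -> ~18 million GB (~18 PB), 60 -> ~18 billion GB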


Yes, you have to make approximations and deal with (estimates of) errors.

However, quantum computers also have to deal with noise and errors. So far, that's not very different.

(If we manage to build error-correcting quantum computers, that might change.)


The approximations don't just introduce small errors. To simulate quantum systems classically, you need to make drastic assumptions that fundamentally change the nature of the system.

This is very different from, say, an approximation that adds in a small amount of noise that you can estimate. The approximations in simulating quantum systems classically can radically change the behavior of the system, in ways that you might not understand or be able to easily estimate.


Huh? Your simulation doesn't care about your interpretation. All interpretations of quantum mechanics make the same predictions.


We currently can't even simulate a hydrogen atom.


We can absolutely simulate the hydrogen atom. This paper lists the equations and fundamental constants that allow calculating the hydrogen energy levels with around 13 digits of accuracy: https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.93....


That is not a simulation and those are not fundamental constants.

Because a simulation is too difficult, there are approximate formulae for computing the quantities of interest, like the energy levels of the spectrum of the hydrogen atom.

These approximate formulae include a large number of constants which are given in the paper linked by you and which are adjusted to match the experimental results.

A simulation of the hydrogen atom would start from a much smaller set of constants: the masses of the proton and of the electron, the magnetic moments of the proton and of the electron, the so-called fine structure constant (actually the intensity of the electromagnetic interaction) and also a few fundamental constants depending on the system of units used, which in SI would be the elementary electric charge (determined by the charge of a coulomb in natural units), the Planck constant (determined by the mass of a kilogram in natural units) and the speed of light in vacuum (determined by the length of a meter in natural units).
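
As a toy illustration of what "starting from a much smaller set of constants" looks like, a sketch of just the lowest-order Bohr/Schrödinger result with rounded constant values (every further digit needs the relativistic, QED and nuclear corrections catalogued in the CODATA paper):

    # Hydrogen ground-state energy from the electron rest energy, the
    # fine-structure constant and the proton/electron mass ratio:
    #   E_1 = -(1/2) * mu * c^2 * alpha^2, with mu the reduced mass.
    alpha = 1 / 137.035999          # fine-structure constant
    m_e_c2 = 510998.95              # electron rest energy, eV
    m_p_over_m_e = 1836.15267       # proton/electron mass ratio

    reduced_mass_factor = 1 / (1 + 1 / m_p_over_m_e)
    E1 = -0.5 * m_e_c2 * alpha**2 * reduced_mass_factor
    print(E1)                       # ~ -13.598 eV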


The inputs of the formulas for the hydrogen energy levels in the paper are: The Rydberg constant, the fine structure constant, the electron-to-proton mass ratio, the electron-to-muon mass ratio, the Compton wavelength of the electron, and some nuclear properties (charge radius, Friar radius, and nuclear polarizability). All inputs except the nuclear properties are as fundamental as it gets according to our current understanding of physics (note that the Rydberg constant and Compton wavelength are simple combinations of other physical constants). Nuclear physics is dominated by quantum chromodynamics which is not nearly as well developed as QED.

The constants are determined by fitting the theory to the best available measurements (not only in hydrogen). This is exactly what fundamental constants do: They convert unit-less theory expressions into measurable quantities.


We know how to simulate it, but we can't actually do it. Those equations, though, require too much computation if you solve them with any known classical algorithm.


This is completely wrong. My laptop can solve the equations in fractions of a second. I believe that with some optimizations it should be trivial to do the calculations on a 1960s mainframe.


That is not true.

You can solve such equations in fractions of a second only for very low precisions, much lower than the precision that can be reached in measurements.

For higher precision in quantum electrodynamics computations, you need to include an exponentially increasing number of terms in the equations, which come from higher order loops that are neglected when doing low precision computations.

When computing the energy levels of the spectrum of a hydrogen atom with the same precision as the experimental results (which exceed by a lot the precision of FP64 numbers, so you need to use an extended precision arithmetic library, not simple hardware instructions), you need either a very long time or a supercomputer.

I am not sure how much faster that can be done today, e.g. by using a GPU cluster, but some years ago it was not unusual for the comparisons between experiments and quantum electrodynamics to take some months (though I presume that the physicists doing the computations were not experts in optimization, so perhaps it would have been possible to accelerate the computations by some factor).


I believe you might be confusing the QED calculations of hydrogen with those of the electron g-factor. Just have a look into the paper I linked (section VII). Most of the QED corrections are given analytically, no computers involved at all. You could in principle calculate this with pen-and-paper (and a good enough table of transcendental functions).

The most accurate hydrogen spectroscopy (of the 1S-2S transition) has reached a relative accuracy of a few parts in 1E15 which is around an order of magnitude above the precision of FP64 numbers.


The "few parts in 1E15" claim is applicable only to the absolute value of the frequency of the 1S-2S transition, which is 1 233 030 706 593 514 Hz.

That absolute frequency is computed from the ratio between an optical frequency and the 9 GHz frequency of a cesium clock, which is affected by large uncertainties due to the need for bridging the gap between optical frequencies and microwave frequencies.

The frequency ratios between distinct lines of the hydrogen atom spectrum or between lines of the hydrogen atom spectrum and lines in the optical spectra of other atoms or ions can be known with uncertainties in parts per 1E18, one thousand times better.

When comparing a simulation with the experiment, the simulation must be able to match those quantities that can be measured with the lowest uncertainty, so the simulated values must also have uncertainties of at most parts per 1E18, or better per 1E19.

This requires more bits than provided by FP64. The extended precision of Intel 8087 would barely be enough to express the final results, but it would not be enough for the intermediate computations, so one really needs quadruple precision computations or double-double-precision computations, which are faster where only FP64 hardware exists.

I have not attempted to compute the QED corrections myself, so I cannot be certain how difficult that really is.

Nevertheless, section VII of this CODATA paper, and also the previous editions of the CODATA publications (some of which had been more detailed), are not consistent with what you say, i.e. with these corrections being easy to compute.

For each correction there is a long history of cited research papers that would need to be found and read to determine how exactly they have been computed. For many of them there is a history of refinements in their computations and of discrepancies between the values computed by different teams, discrepancies that have sometimes been resolved by later, more accurate computations, but in some cases the right value was not yet known at the date of this publication.

If the computations were so easy that anyone could do them with pen and paper, there would have been no need for several years to pass, in some cases, before the correct computation was validated, nor for the very slow improvement in the accuracy of the computed values in other cases.


The accuracy of the hydrogen 1S-2S measurement was mainly limited by the second-order Doppler shift of the moving atoms (and to a lesser degree the AC Stark shift of the excitation laser and the 2S-4P quench light). The comparison between the laser frequency and the Cesium fountain clock was done with an optical frequency comb which introduces a negligible uncertainty (< 1E-19).

Isn't it fun to get your own field of expertise (wrongly) explained to you on the internet?

Edit: I never said that it is easy to derive the corrections listed in the CODATA paper. However, it is relatively easy to calculate them.


> I have heard this mentioned several times in the last decade or so : "The only thing a quantum computer definitively does better than a classical computer is simulating a quantum computer."

And, hopefully, other quantum systems in general!

I can see that helping with material science. That can have huge multiplier effects on the rest of the economy.

But I agree with you, that other serious applications of quantum computers seem to be thin on the ground.


I thought one of the main advantages of QC was that it could (theoretically) solve existing problems that have exponential time complexity with more efficiency. Isn’t the idea that it could make everything faster? Or did I fall for the marketing.


> Isn’t the idea that it could make everything faster?

If that is your understanding, then yes, you have unfortunately fallen for the very mistaken reporting on this.

There are specific algorithms that Quantum Computing can solve faster than regular computers. Some of these algorithms are incredibly important, and a faster solution to them would cause serious changes to the world, namely Shor's algorithm, which would make factoring large numbers much faster. This would effectively break many/most encryption schemes in the world. So it's a big deal!

Nevertheless, this isn't true in general - not every algorithm is known to have such a speedup. (I'm not an expert, I'm not sure if we know that there isn't a general speedup, or think there might be but haven't found it.)


Public key crypto is vulnerable to period finding, but symmetrical key cryptography is pretty safe from quantum computing advances.


Quadratic speedup, IIRC - a 128-bit key can be found by brute force in (roughly) 2^128 steps by a normal computer, or 2^64 steps by a quantum computer. This applies to all brute force algorithms, so just make your keys and hashes twice as long as you think they should be, and you're good.
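
Rough numbers behind that, as a sketch (Grover's search needs on the order of (pi/4)*sqrt(N) oracle calls to search N possibilities):

    import math

    # Brute-force work for an n-bit key: classical ~2^n tries,
    # Grover-style search ~ (pi/4) * 2^(n/2) oracle calls.
    for n in (128, 256):
        grover_calls = (math.pi / 4) * 2 ** (n / 2)
        print("%d-bit key: classical 2^%d, Grover ~2^%.1f" % (n, n, math.log2(grover_calls)))
    # 128-bit key: ~2^63.7 quantum steps; doubling to 256 bits restores ~2^127.7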


This might be true (I'm not that up to date on whether there are symmetrical algorithms that negate the advantage of QC), but most of the internet / world commerce relies on public key crypto.


Huh, yeah, I guess I need to learn more. My layman’s assumption was that a lot of NP problems involving recursion or backtracking algorithms would benefit from it. From some quick googling it seems like they have already designed QC algorithms for traveling salesman etc. Isn’t that sort of meaningful, or am I missing something?

I could totally see the argument that they are physically impractical and therefore not likely to be actually used vs parallelizing conventional computers.


Well they are physically impractical now, but they might get practical in the future.

But no, I'm fairly sure that QC in general isn't known to be able to solve NP problems. And since Traveling Salesman is NP complete (iirc), I don't think there's a QC alg to solve traveling salesman in P (otherwise that would imply QC would solve all of NP in P, which I know isn't true). Where did you see indication otherwise?

FWIW, my favorite CS blogger is Scott Aaronson, and the subtitle of his blog has always been: "If you take nothing else from this blog: quantum computers won't solve hard problems instantly by just trying all solutions in parallel." This reflects the very common misunderstanding of how QC works.


Hah ok point taken. Here’s the paper I found about traveling salesman - looks like a version of Grover’s algorithm so I think that means it provides a speed-up but not polynomial time? Paper is paywalled: https://link.springer.com/article/10.1134/S1063776123080095#....


> Huh, yeah, I guess I need to learn more. My layman’s assumption was that a lot of NP problems involving recursion or backtracking algorithms would benefit from it.

Recursion is more of a property of how you write your algorithm, than of the problem.

Eg Scheme or Haskell as languages have no built-in facilities for iteration, no for-loops, no while-loops; the only thing you get is function calls. (Well, that and exceptions etc.)

However, you can build for-loops as libraries in both Scheme and Haskell. But they will be built on top of recursion.

To make matters even more interesting: once you compile to native code, all the recursion (and also all the for-loops and other fancy 'structured programming' constructs) go away, and you are back to jumps and branches as assembly instructions.

None of this changes whether a given class of problems is in P or NP or (almost!) any other complexity class. Just like getting a faster computer doesn't (usually!) change the complexity class of your problems.

> I could totally see the argument that they are physically impractical and therefore not likely to be actually used vs parallelizing conventional computers.

Quantum computers are impractical now. But as far as we can tell, that's "just" an engineering problem.

In the long run one of two things will happen:

- Either engineering improves and we will be able to build good quantum computers (though perhaps still not better than classical computers for the same amount of effort)

- Or we discover new principles of quantum mechanics.

Basically, quantum mechanics is one of our most successful physical theories, if not the most successful theory. And as far as we understand it, it allows quantum computers to be built.

So either we will eventually manage to build them, or (more excitingly!) that's not possible, and we will discover new physics that explains why. We haven't really discovered new physics like that in a while.


It could make some things faster, not everything, but so far the number of useful things that people have come up with is very small.

A quantum computer is not a general-purpose computer that can be programmed in the way we're used to - it's more like an analog computer that is configured rather than programmed. It's only useful if the problem you are trying to solve can be mapped into a configuration of the computer.


The basic idea is that for a certain class of problems, you can have the quantum computer skip certain incorrect paths on the calculation by having their probability amplitudes cancel each other out.
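
A minimal sketch of that cancellation on a single qubit (plain linear algebra with numpy, not a real algorithm): applying a Hadamard twice makes the two paths to |1> interfere destructively, so that outcome is never measured.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    state = np.array([1.0, 0.0])                   # start in |0>

    once = H @ state     # [0.707, 0.707]: equal superposition of |0> and |1>
    twice = H @ once     # [1.0, 0.0]: the two amplitudes for |1> cancel out

    print(np.abs(twice) ** 2)   # probability of measuring |1> is 0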


With emphasis very much on '_certain_ class of problems'. It's only a precious few problems quantum computers actually help with as far as we know, even theoretically.


If a problem can be reduced to efficiently sampling from the summation of an exponentially large number of FFT's, then a quantum computer will destroy a classical computer.

If a task can't be efficiently reduced to such a problem, then a QC probably won't ever help at all; the square-root time advantage from Grover's algorithm is too easily overwhelmed by simple engineering factors.


What's stopping classical computers from doing this sampling?

If it's sampling, you don't have to deal with the exponential here.


See https://www.scottaaronson.com/papers/optics.pdf

“We give new evidence that quantum computers—moreover, rudimentary quantum computers built entirely out of linear-optical elements—cannot be efficiently simulated by classical computers. In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions.”

Which is the basis for the experiment discussed here: https://scottaaronson.blog/?p=5122


You got a useful response already, so let me give you a good response: https://www.smbc-comics.com/comic/the-talk-3


LOL


This is true in theory, but I don’t think it’s ever been proven in practice.


Which theory are you talking about?

See https://www.smbc-comics.com/comic/the-talk-3


It's a "solution looking for a problem." Nothing wrong with that. Lasers were theorized in 1917 by Einstein and invented at Bell Labs in 1958, but it took another 20 years before anyone had any idea what the heck to do with them. Now they are the backbone of the internet, among thousands of other applications. Patience, grasshopper.


Wouldn't the D-Wave kinda-but-not-really quantum computer that came out a decade ago be ideal for AI? Annealing sounds like exactly the kind of problem that needs to be solved for ML training?


I can't find mention of it online, but back in 2013 Lockheed Martin purchased a D-Wave machine because they wanted to use it for "AI", which it turned out meant software verification (of fighter jets?) I believe by searching for the possibility of some kind of invalid program state in a large program, which IIRC they couldn't manage to solve with standard solvers. But in that case the number of qubits in a D-Wave machine appears to me far too few for that to be possible, although I don't know the task exactly.

If by "AI" you include operations research (as opposed to statistical machine learning), yes, adiabatic quantum annealing makes sense for certain optimisation problems which you can manage to naturally formulate as a QUBO problem. By 'naturally' I mean it won't blow up the number of variables/qubits, as otherwise a far simpler algorithm on classical computer would be more efficient. I know someone who published QUBO algorithms and ran them on a NASA D-Wave machine, while I myself was using a lot of annealing for optimisation, I didn't want to get involved in that field.

But if you want to do machine learning on a large amount of data using quantum annealing, no, that's terribly badly matched, because the number of physical qubits needed is proportional to the amount of data you want to feed in.
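
To make the QUBO formulation concrete, here is a toy sketch (max-cut on a triangle written as a QUBO and brute-forced classically; nothing here is tied to any particular D-Wave API):

    import itertools
    import numpy as np

    # Max-cut on a triangle (nodes 0-1-2, all three edges) as a QUBO:
    # minimize x^T Q x over binary x, with -degree(i) on the diagonal
    # and +1 in Q[i][j] and Q[j][i] for each edge (i, j).
    Q = np.array([[-2.,  1.,  1.],
                  [ 1., -2.,  1.],
                  [ 1.,  1., -2.]])

    energy = lambda x: np.array(x) @ Q @ np.array(x)
    best = min(itertools.product((0, 1), repeat=3), key=energy)
    print(best, energy(best))   # e.g. (0, 0, 1) with energy -2.0: a cut of size 2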


Well, but to be useful, it would need to be better at this annealing than a classical computer that just uses good old (pseudo) random numbers.


"better". There currently isn't a practical way to brute force factor a 4k rsa key with classical computing. doesn't shor's algorithm on a quantum computer make that feasible? that would be a big driver for quantum computing though maybe my understanding is off.


It is still open whether we can build quantum computers with sufficiently low noise to run Shor's algorithm.


> It is still open whether we can build quantum computers with sufficiently low noise to run Shor's algorithm.

This statement should delimit between theory and experiment.

Theoretically, the question of building a quantum computer with low enough noise to run Shor's has been solved. In fact it was solved by Shor himself in the 1990s: https://arxiv.org/abs/quant-ph/9605011.

Experimentally, we are just getting started with building quantum computers with low enough noise: https://blogs.microsoft.com/blog/2024/04/03/advancing-scienc....

Experimentally, it will always be open whether a quantum computer can run Shor's until one actually runs Shor's. The point is that progress in the field has not stagnated since its founding.


That paper of Shor just shows how a quantum computer with a large number of bad qubits can be the equivalent of a quantum computer with a small number of good qubits.

The paper does not prove anything about the upper limit for the number of bad qubits that are physically realisable.

There are doubts that this upper limit, which is unknown yet, is high enough for most practical applications.


> The paper does not prove anything about the upper limit

Nothing can prove how many qubits can be realizable except trying to realize them. There will never be a theorem that says "The maximum number of qubits that can ever be controlled in a lab is X". That's what experiment is for.

I will say, it's difficult to doubt that the upper limit to the number of qubits we can realize is effectively unbounded. We can now trap ~10 million atoms and efficiently control hundreds of them: https://www.nature.com/articles/s41586-023-06927-3. The question is not "Could we ever realize billions of qubits?". It's "When can we realize billions of qubits?". The answer could be decades or centuries, but as long as people are building these devices, it will happen eventually.


Yeah well you can rephrase my statement to say that the theory underlying quantum computers still hasn’t been validated experimentally, so it’s still open whether it models reality sufficiently well to allow us to run the algorithms it predicts on physical machines.


I've been going down this rabbit hole for a few weeks. Here is what I now think.

Cai 2023 (https://pages.cs.wisc.edu/~jyc/Shor-Algorithm-with-Noise.pdf) showed that there is a definite noise floor for Shor's and gave a way to determine this.

New algorithms are better than Shor's in that they are smaller in space and time and more resilient to noise. The state of the art is Ragavan et al.: https://arxiv.org/abs/2310.00899. The insight in Cai's paper about the QFT applies to this algorithm but is less damaging, as it scales the noise floor at n^3/2, not n^2. It does seem to be believed that error correction can be implemented to bring the effective noise down, but it seems that this will be very expensive for large numbers - probably at the scale of previous estimates, which indicate that more than 20 million qubits will be required to be operational for about 8 hours. The gates required will have to be much better than the current ones (my reading is about an order of magnitude), but this is believed to be on the table for the future. I think these two assertions by the community are debatable, but honestly these folks are the experts and we have to go with what they say and respect them on this until it turns out that they were wrong.

From what I read, it was never the case that Shor's was thought to be any more noise tolerant than other algorithms, but Cai proved it, which is different. There is some debate about the insight, because a chunk of the community is like "this is not helpful and will make us even more misunderstood than we already are", but personally I find this attitude really irritating because it's part of the gradual reveal that QC people do about the practicality of the tech. I have no respect for anyone working in QC who doesn't say something like "there is no chance of a practical application in the next 30 years and it's likely that there will be no practical application for 50 years at least. But this is really interesting and important physics, and the techniques we are developing may help us build new classical devices or other quantum devices in a shorter timeframe." This rider should be mandated for all grant applications and in every popular interview. Instead of which we hear (repeatedly) "whooo whooo QC will crack RSA any day now and help us do machine learning better than you can ever imagine". These folks say "whelp, I never said that", but the reality is that they use the idea to seduce politicians and CEOs into giving up significant money that they would not if they had a clear idea of the technical problems with QC, and which could be used to do a lot of good if spent on things like feeding kids and stopping climate change.

This is introducing new security issues because things like QKD and post quantum are problematic in new ways. QKD has end points that are silently vulnerable to people with guns, pliers and evil intents. Post quantum is very computationally expensive and errr suspicious in errr various ways. Introducing it is going to create gaps, and those gaps are unnecessary if current encryption is not threatened.

Quantum ML is another thing that makes me really annoyed. The algorithms need data to be loaded (surprise), which is extremely problematic and slow, and they need quantum memory - which exists only on a very small scale, and the tech used just seems wildly impractical to me. Yet folks are out there talking about it as if it's coming in the next 5, 10, 20 years! The reality is that we are about 6 major breakthroughs away from doing this, and once we have those 6 breakthroughs, expect that a practical implementation is going to take at least 10 more years. Again - I have no problem with the theoretical exploration of the topic, but to simulate circuits and make claims about how they will work, without mentioning that you don't have a clue how to implement the system that will implement them, is pretty bad behaviour.

All they need to do is put a single line in the paper like "This technique has theoretical interest in Computer Science and Physics, but major open questions prevent it from being practical for real-world problem solving in the near or medium term." I have total respect. 100%. And I think that the work is interesting.

But no, because "want money" and "don't give a shit about the damage we are doing".


Quantum computers SHOULD destroy regular computers at knapsack-style problems and other combinatorial explosions. The problem there is that decomposing real-world problems into 128-bit combinatorial selection is really hard.


There's no reason to think that quantum computers will have any fundamental advantage at knapsack problems; the √n advantage from Grover's is not substantial when classical computers are going to be many orders of magnitude bigger than quantum computers for the foreseeable future.

If a problem can be reduced to efficiently sampling from the summation of an exponentially large number of FFT's, then a quantum computer will destroy a classical computer.

It's largely an open research problem whether there are useful quantum algorithms between those two problem classes.


>Quantum computers SHOULD destroy regular computers at knapsack style problems and other combinatorial explosions.

You can get a Turing award if you can show this, even theoretically.


Also, hilariously, they mark stage 3 as "useful quantum computation" - as in, none has happened yet. Because no quantum computer actually exists yet.


I don't get the joke. Has Google ever said anything to imply their current quantum hardware can do anything useful?


The big one is simulation of quantum systems; I don’t get how that’s not enough by itself. Classical computers will never be able to simulate quantum systems efficiently, so we need quantum computers for that. In general achieving arbitrary precision control of quantum states is something that will open avenues in many areas like sensing as well, so just from that angle alone a working quantum computer would open a new branch of technological progress.

But yeah it’s not guaranteed or even likely that quantum computers will be very useful for many computer science problems, though I’m also optimistic about that given the progress made in the last 30 years. Physics / science isn’t guaranteed to be easy or progress at a steady pace.


> In general achieving arbitrary precision control of quantum states is something that will open avenues in many areas like sensing as well, [...]

And developing new materials.


What does it mean to “simulate” a quantum system, and why is that useful?


Simulate roughly means designing a quantum computer that mimics the dynamics of a real quantum system, and then running a simulation of that to learn how it would behave under certain conditions, e.g. to optimize its parameters. Similar to how we simulate planes or complex circuits today. Quantum systems are everywhere, e.g. lasers, cold atoms, transistors or other complex semiconductors, superconductors, enzymes, … I’d say if we don’t develop the ability to understand these systems through simulation, we will never progress beyond a certain technological boundary, and quantum computers are a necessary technology for that, just like classical computers were instrumental for the past scientific revolution.


That sounds… wrong.

A simulation of a plane is a model, implemented by humans, of how physics works.

A simulation of a quantum system is still a human implementation of mechanics. A quantum computer per my understanding should not be able to add anything to that. You’re not able to just say well this thingy has quantum mechanics and this is a quantum computer so it’s better able to do that. Quantum computers are about speeding up calculations by structuring specific problems such that wrong answers are not calculated. Not emulating quantum mechanics and then pulling the answer from the ether.

So… what am I missing here? How could quantum computer aided search spaces specifically aid simulating quantum systems that are not quantum computer aided search spaces?


“Wrong answers are not calculated” is a popular simplification of this, but it's misleading you here.

Quantum computers are a means of systematically creating and modifying complicated sums of exponentially large FFT's, and then efficiently sampling from the resulting distribution.

Note that you typically still need to sample many times to get a meaningful answer, which is where the “wrong answers are not calculated” ultimately comes from: if you can arrange for most or all of the factors corresponding to “wrong” answers in the sampled distribution to cancel out (such as the term for the number 4 when trying to factor 15), then when you sample several times from that distribution, very few or perhaps none of those samples will have been drawn from the “wrong” part of the distribution, and so you waste less time testing bad samples.
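
For that factoring example, the classical scaffolding around the sampled distribution looks roughly like this (a sketch of the classical part of Shor's algorithm for N = 15; here the period is found by brute force, which is exactly the step the quantum Fourier sampling is meant to speed up):

    import math

    N, a = 15, 7
    # The quantum part's only job: find the period r of a^x mod N. Brute-forced here.
    r = next(k for k in range(1, N) if pow(a, k, N) == 1)   # r = 4

    # With an even period, gcd(a^(r/2) +/- 1, N) yields nontrivial factors.
    p = math.gcd(pow(a, r // 2) - 1, N)
    q = math.gcd(pow(a, r // 2) + 1, N)
    print(r, p, q)   # 4 3 5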

A quantum computer is potentially useful for simulating quantum systems because the _models_ for those systems are ridiculously complex in _exactly_ this way. It won't help if the model is wrong, but our problem is currently that we can't really run the calculations our current models imply beyond slightly-larger-than-toy examples.


> so you waste less time testing bad samples.

How is this not “wrong answers are not calculated”? You gave a lot more detail on the mechanics of how these probability amplitudes are canceling each other out but the answer seems the same?

I don’t follow how this maps to helping simulate the quantum systems. Quantum computers are good at finding solutions to problems efficiently. But the quantum systems we are describing are not solution-seeking systems. They’re just going to be interacting components with entanglement or whatever’s going on. How would the avoidance of bad samples aid the simulation of a system like that?


For simulation, it's not about the bad samples: the point is information about the distribution itself.


But simulating a process is not the same thing as efficiently routing space exploration. Quantum computing grants you the latter. Why does it impact the former?


No, that’s not how it works. Quantum mechanics cannot be simulated efficiently on a classical computer: the state space grows exponentially with the number of degrees of freedom, and every degree has relative phases to every other degree even when only looking at pure states. That’s why even the largest supercomputers currently cannot simulate more than 50 quantum degrees of freedom (see quantum supremacy).


I’m not claiming quantum mechanics can be efficiently simulated on a normal computer. I’m questioning whether arbitrary quantum mechanics systems can be effectively simulated by a quantum computer.


I have not seen any description of a general-purpose programmable quantum computer, which could simulate an arbitrary quantum system, like a digital computer.

All the examples given that I have seen were for making a hardwired simulator for a concrete quantum system, e.g. some chemical macromolecule of interest, to be used much in the same way as analog computers were used in the past for simulating systems governed by differential equations too complex to be simulated by the early digital computers in an acceptable time.


Are hard coded quantum simulators even the same thing though? Like, if we’re saying that the goal of a quantum simulator is merely to run a high dimensional problem and exploit the nature of qubits to make it simpler to handle those imaginary vectors and what not…

Does that actually follow the same framework as “traditional” “quantum computing”, e.g. struggles with error correction / problem formulation specifically designed to avoid unnecessary calculations?

It feels like although a quantum simulator could viably work to simulate a specific system, it shouldn’t make it any easier to actually understand the system it is simulating and could maybe just indicate how complex by the amount of variation in simulation outcomes (which isn’t useless). Is that accurate?

Excuse my terminology here


Simulation usually involves randomized and sampling algorithms, not a full state space exploration.

A good part of theoretical chemistry today relies on simulating quantum systems.


I knew a guy doing a PhD in quantum computing, and he told me that he was working on an algorithm to perform edge detection on an image in O(1); apparently it's possible to do some operations much quicker with quantum computing.

A paper I found on edge detection: https://arxiv.org/html/2404.06889v1#:~:text=Quantum%20Hadama....


This might be a good candidate for Google's prize, unless I'm missing something?


I have literally 0 experience in this field, so I can't comment


> no one has figured out a use for quantum computers.

It is my understanding that a use is very straightforward: quickly solving problems in the factorial complexity class, O(n!).

I could be misunderstanding QC though.


Yes, you are. Quantum computers are only believed to be faster than classical computers for a handful of very specific problems. And even those are not 100% proven (we don't have a proof that a faster classical algorithm doesn't exist for any of them, except Grover's search, which is only a quadratic speedup, not an exponential one).


> only believed to be faster than classical computers for a handful of very specific problems

But aren't these problems all in the factorial and exponential classes?


I think they are believed to be, but the opposite is not true. There are a lot of exponential or factorial problems that have no efficient QC algorithm, and are not believed to be likely to have one.

Also, at least the most famous problem which has an efficient QC algorithm, integer factorization, has no proven lower complexity bound on classical computers. That is, while the best algorithm we know for it is exponential (or, technically, sub-exponential), it is not yet proven that no polynomial classical algorithm exists. The problem isn't even NP-complete, so it's not like finding an efficient algorithm would prove P=NP.



