What do you mean? The original 2019 supremacy experiment was eventually simulated, as better classical methods were found, but the followups are still holding strong (for example [4] and [5]).
There was recently a series of blog posts by Dominik Hangleiter summarizing the situation: [1][2][3].
Agree. Scott is exactly correct when he just straight calls it crap.
It's inaccurate to say it wins on small numbers because on small numbers you would use classical computers. By the time you get to numbers that take more than a minute to factor classically, and start dreaming of quantum computers, you're well beyond the size where you could tractably do the proposed state preparation.
That slide deck is arguing that even correct work on quantum attacks should be treated as a negligible priority or a distraction. TFA is complaining that JVG isn't even correct. They are pretty different concerns.
To be clear, I think that slide deck will be looked back upon as naive. In particular, it makes the classic mistake of assuming the size of the number factored should be growing smoothly. That's naive because 15 is such a huge cost outlier and because quantum error correction has frontloaded costs. See [1] and [2] for details.
The very first demonstration of factoring 15 with a quantum computer, back in 2001, used a valid modular exponentiation circuit [1].
The trickiest part of the circuit is that they compile conditional multiplication by 4 (mod 15) into two controlled swaps. That's a very elegant way to do the multiplication, but most modular multiplication circuits are much more complex. 15 is a huge outlier in the difficulty of actually doing the modular exponentiation, which is why so far 15 is the only number that's been factored by a quantum computer while meeting the bar of "yes, you have to actually do the modular exponentiation required by Shor's algorithm".
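If you want to see why that compilation works, here's a minimal sanity check in Python (my own illustration, not code from the paper). Multiplying by 4 mod 15 rotates the 4-bit register by two positions, which decomposes into exactly two bit swaps:

    # x -> 4*x (mod 15) permutes the bits of a 4-bit register:
    # it's a rotation by 2, i.e. exactly two (controlled) swaps.
    def swap_bits(x, i, j):
        # swap bits i and j of x
        bi, bj = (x >> i) & 1, (x >> j) & 1
        x &= ~((1 << i) | (1 << j))
        return x | (bi << j) | (bj << i)

    for x in range(15):  # all register values 0..14
        via_swaps = swap_bits(swap_bits(x, 0, 2), 1, 3)
        assert via_swaps == (4 * x) % 15
    print("x -> 4*x (mod 15) is exactly two bit swaps")

Condition both swaps on a control qubit and you have the whole conditional multiplication.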
Would other Mersenne numbers admit the same trick? If so, factoring 2047 would be really interesting to see. It's still well within the toy range, but it's big enough that it would be a lot easier to believe that the quantum computer was doing something (15 is so small that picking any odd number between 1 and sqrt(15) is guaranteed to give a factor, since the only candidate is 3).
No, 15 is unique in that all multiplications by a known constant coprime to 15 correspond to bit rotations and/or bit flips. For 2047 that only occurs for a teeny tiny fraction of the selectable multipliers.
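A quick sketch of how teeny (again my own illustration): for N = 2047 = 2^11 - 1, multiplying by 2 rotates the 11-bit register, so the cheap multipliers are the 11 powers of two (composing with the bitwise complement, which is multiplication by -1 mod 2047, roughly doubles that count, but it stays tiny):

    from math import gcd

    N, n = 2047, 11

    def rotl(x, k):
        # rotate an 11-bit value left by k positions
        return ((x << k) | (x >> (n - k))) & N

    # multiplying by 2^k mod N is exactly a k-bit rotation
    for k in range(n):
        assert all(rotl(x, k) == (x << k) % N for x in range(N))

    units = sum(1 for m in range(1, N) if gcd(m, N) == 1)
    print(f"{n} rotation multipliers out of {units} invertible ones: {n/units:.2%}")

That prints 11 out of 1936, about 0.6%. Every other multiplier needs a genuine modular multiplication circuit.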
Shor's algorithm specifies that you should pick the base (which determines the multipliers) at random. Deliberately picking a rare base whose multiplications happen to be cheap really does start to overlap with knowing the factors while building the circuit. By far the biggest cheat you can do is to "somehow" pick a number g such that g^2 = 1 (mod N) but g isn't 1 or N-1, because that's exactly the number Shor's algorithm is looking for, and the whole thing collapses into triviality.
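To make the collapse concrete (a sketch, not anyone's published circuit): if you already hold such a g, two gcds finish the job with no quantum step at all.

    from math import gcd

    N, g = 15, 4            # 4^2 = 16 = 1 (mod 15), and 4 is neither 1 nor 14
    assert pow(g, 2, N) == 1
    print(gcd(g - 1, N), gcd(g + 1, N))   # -> 3 5, the factors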
For each chick they do 24 trials divided into 4 blocks with retraining on the ambiguous shape and actual rewards after each block. During the actual tests they didn't give rewards. In figure 1 they show the data bucketed by trial index. It's a bit surprising it doesn't show any apparent effect vs trial number, e.g. the first trial after retraining being slightly different.
I have to admit I'm super skeptical there's not some stupid mistake here. Definitely thought provoking. But I wish they'd kept iteratively removing elements until the correlation stopped happening, so they could nail down causation more precisely.
I do agree my skepticism level rises extremely high in any experimental psychology experiment. There are just so many ways to bias results, in addition to the "do enough experiments and one of them will get a statistically unlikely result" problem.
This group does a lot of work like this https://www.dpg.unipd.it/en/compcog/publications … so it's tempting to think they keep trying things until something odd happens (kind of like physicists who look for 5th forces… eventually they find something odd, but often it's just an experimental issue they need to understand further).
> Using simple simulations, we show that this pattern arises naturally from collider bias when selection into elite samples depends on both early and adult performance. Consequently, associations estimated within elite samples are descriptively accurate for the selected population, but causally misleading, and should not be used to infer developmental mechanisms
Is that paper in print? I can't seem to find if it was peer reviewed.
If the paper is true, then, yeesh! That's a pretty big miss on the part of Güllich et al.
Reading through the very short paper there, it seems not to have gone through review yet (typos, misspellings, etc.). Also, it's not clear whether the data in the tables and the figure are from Güllich's work or are simulations meant to illustrate their idea ("True and estimated covariate effects in the presence of simulated collider bias in the full and selected samples"). Being clearer about where the data comes from would help the argument, but I likely just missed a sentence or something.
I'll be interested to see where this goes. That Güllich managed to get the paper into Science in the first place lends credence to the idea that they thought through something as simple as Berkson's paradox and accounted for it. It's not every day you get something as 'soft' as that paper into Science, after all. If not, then wow, standards for review really have slipped!
> By late 2024 the biggest numbers that had been factored by an actual digital quantum computer had 35 bits (citing https://arxiv.org/pdf/2410.14397v1 )
This is incorrect. The cited reference says "N <= 35". That N is the number being factored, not the number of bits in the number. Also, footnote a of that paper points out (correctly) that the circuits that were used likely needed knowledge of the factors to create (e.g. as explained in https://arxiv.org/abs/1301.7007 ). As far as I know, only N=15 has been factored on a quantum computer in a no-shenanigans way.
It's conceivable that current ion trap machines could do a no-shenanigans N=21.... but anyone judging progress in quantum computing by largest-number-factored is looking at the wrong metric (for now). You won't see that metric move meaningfully until quantum error correction is done spinning up.
Its fame comes from the simplicity of its construction rather than its utility elsewhere in mathematics.
For example, Graham's number is pretty famous but it's more of a historical artifact rather than a foundational building block. Other examples of non-foundational fame would be the famous integers 42, 69, and 420.
Yes, speed matters. No, quantum computers can't do everything instantly even with unbounded qubits.
A well studied example is that it's impossible to parallelize the steps in Grover's algorithm. To find a preimage amongst N possibilities, with only black box access, you need Ω(sqrt(N)) sequential steps on the quantum computer [1].
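The sqrt(N) comes straight out of the geometry of Grover iterations: the success amplitude after k steps is sin((2k+1)*theta) with theta = asin(1/sqrt(N)). A small sketch of the required iteration counts (my illustration, not code from [1]):

    from math import asin, ceil, pi, sqrt

    for N in (10**3, 10**6, 10**9):
        theta = asin(1 / sqrt(N))
        k = ceil(pi / (4 * theta) - 0.5)  # rotate the amplitude to ~1
        print(f"N = {N:>10}: ~{k} sequential Grover iterations")

Roughly (pi/4)*sqrt(N) iterations, and [1] shows you can't trade them for parallel machines.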
Another well known case is that there's no known way to execute a fault tolerant quantum circuit faster than its reaction depth (other than finding a rewrite that reduces the depth, such as replacing a ripple carry adder with a carry lookahead adder) [2]. There's no known way to make the reaction depth small in general.
Another example is GCD (greatest common divisor). It's conjectured to be an inherently sequential problem (no polylog depth classical circuit) and there's no known quantum circuit for GCD with lower depth than the classical circuits.
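For intuition on why GCD resists parallelism (a toy illustration, not a proof): each Euclid step consumes the remainder produced by the previous one, and consecutive Fibonacci numbers force the longest chain.

    def gcd_steps(a, b):
        # each iteration needs the remainder from the one before it,
        # an apparently irreducible sequential chain
        steps = 0
        while b:
            a, b = b, a % b
            steps += 1
        return a, steps

    a, b = 1, 1
    for _ in range(90):   # consecutive Fibonacci numbers: Euclid's worst case
        a, b = a + b, a
    print(gcd_steps(a, b))  # gcd 1, after ~90 sequential division steps

Polylog-depth circuits are known for addition and even multiplication; nothing comparable is known for GCD, classically or quantumly.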
This is perhaps not clear enough, but the title refers to a pattern. For classical bits on a quantum computer this pattern is already playing out (as shown in the cited experiments), and for quantum bits I think it's about to play out.
Classical storage of classical bits is still far more reliable, of course. Hell, a rock chucked into one bucket or another is still more reliable. We'll never beat the classical computer at storing classical bits... but the rock in a bucket has some harsh competition coming.
I should maybe also mention that arbitrarily good qubits are a step on the road, not the end. I've seen a few twitter takes making that incorrect extrapolation. We'll still need hundreds of these logical qubits. It's conceivable that quantity also jumps suddenly... but that'd require even more complex block codes to start working (not just surface codes). I'm way less sure if that will happen in the next five years.
I don't really expect fancier codes to cause a huge jump in the number of logical qubits. At the end of the day, there's some code rate (logical qubits / physical qubits) that makes a quantum computer work. The "FOOM" is that code rate transitioning from zero (the lifetime of a logical qubit is short) to something distinctly different from zero (the state, protected by some credible code, lasts long enough to be useful). Say the code rate is 0.001 when this happens. (I haven't been in the field for a little while, but I'd expect higher, because those huuuuge codes have huuuuge syndromes, which isn't so fun. But if true topological QC ever works, it will be a different story.) The code rate is unlikely to ever be higher than 1/7 or so, and it will definitely not exceed 1. So there's at most a factor of 1000, and probably less, to be gained by improving the code rate. This isn't an exponential or super-exponential FOOM.
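The arithmetic, spelled out (these numbers only echo the illustrative ones above and the 20k-physical-qubit machine mentioned below):

    physical = 20_000
    for rate in (0.001, 0.01, 1 / 7):
        print(f"code rate {rate:.3f} -> {int(physical * rate):5d} logical qubits")

From 20 logical qubits at rate 0.001 to ~2857 at the 1/7 ceiling: three orders of magnitude, then a hard wall.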
A factor of 1000 may well be the difference between destroying Shor’s-algorithm-prone cryptography and destroying it later, though.
I'll add some nuance here. In a classical computer, computing with coded bits is not particularly difficult. We've known how to do it with some degree of mathematical rigor for decades -- IIRC John von Neumann was interested in this topic. And we barely even need to do it explicitly: computers have accurate enough gates without explicit coding that merely error-correcting RAM (ECC-style) and data links (Ethernet, PCIe, etc) is good enough for almost all applications. Even in aerospace, usually we just have one extra majority vote over a few different computers.
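For reference, that aerospace trick in its entirety (a toy sketch, obviously not flight code):

    def majority_vote(a, b, c):
        # any two-way agreement wins; with at most one faulty
        # computer this always returns the correct value
        return a if a in (b, c) else b

    # one of three redundant computers returns a corrupted value
    print(majority_vote(42, 41, 42))   # -> 42

The entire fault-tolerance scheme fits in one line of logic, which is exactly the contrast with the quantum case below.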
Quantum computers are different. It seems quite unlikely that anyone will build the equivalent of, say, an XOR gate that takes two single physical qubits in and spits out two physical qubits (this is quantum land -- the standard gates neither create nor destroy qubits, so the number of inputs and outputs is the same) and that works well enough to actually represent that particular operation in whatever software is being run. Instead, each logical operation will turn into multiple physical operations that work like an error correcting code. The easy classical trick where your transistor is janky at 0.9V so you run it at 1.0V amounts to moving more electrons around per operation; the analogous quantum approach corrects bit flips but not phase errors, and it makes your quantum computer stop being quantum.
And here's where it gets messy. The physical qubit technologies that are best for longish-term data storage may not be the same as the technologies that are good for computation, and those may not be the same technologies that are good for communication at a distance. (For example, photons are pretty good for transmitting quantum states, but transferring a qubit from a different technology to a photon state and back is not so easy, and demonstrations of computation with photons have been pretty limited.) As an extreme example, one can, in principle, store quantum states quite robustly and even compute with them if one can find the correct kind of unobtanium (materials with the appropriate type of non-Abelian anyon surface states), but, last I heard, no one had much of an idea how to get the qubit states off the chip even if such a chip existed.
So it is possible that we'll end up with a quantum computer that doesn't scale, at least for a while. There might be 20k physical qubits, and some code rate, and some number of logical quantum operations you can do on the logical qubits before they decay or you get bored, and very little ability to scale to more than one computer that can split up a computation between them. In that case, the code rate is a big deal.
But you still believe that quantum computers have a likelihood of being possible to build AND that they can accomplish a task faster than classical? I feel like it's going to get exponentially harder and expensive to get very small incremental gains and that actually beating a classical computer isn't necessarily feasible (because of all the error correction involved and the difficulty of manufacturing a computer with a large number of qubits). Happy to be proven wrong of course.
> But you still believe that quantum computers have a likelihood of being possible to build AND that they can accomplish a task faster than classical?
Not GP but yes. I'm reasonably confident that we will have quantum computers that are large and stable enough to have a real quantum advantage, but that's mostly because I believe Moore's law is truly dead and we will see a plateau in 'classical' CPU advancement and memory densities.
> I feel like it's going to get exponentially harder and expensive to get very small incremental gains and that actually beating a classical computer isn't necessarily feasible (because of all the error correction involved and the difficulty of manufacturing a computer with a large number of qubits)
I don't think people appreciate or realize that a good chunk of the innovations necessary to "get there" with quantum are traditional (albeit specialized) engineering problems, not new research (though breakthroughs can speed it up). I'm a much bigger fan of the "poking lasers at atoms" style of quantum computer than the superconducting ones for this reason: the engineering is more like building cleaner lasers and better AOMs [0] than figuring out how to supercool vats of silicon and copper. It's outside my area of expertise, but I would expect innovations that support better lithography to also benefit these types of systems, though less directly than superconducting ones.
Source: I worked on hard-realtime control systems for quantum computers in the past. Left because the academic culture can be quite toxic.
I don't know how people can claim the science is solved and "it's just engineering" when scaling up to non-trivial quantum circuits is literally the problem no one has solved. Hand-waving it away as an "engineering problem" seems really disingenuous. Foundational science needs to be done to solve these problems.
Classical CPUs have slowed but not stopped, but more importantly, quantum machines haven't even been built yet, let alone been proven possible to scale up arbitrarily. No one has even demonstrated they can factor 21 faster than a classical computer.
Factoring will be okay for tracking progress later; it's just a bad benchmark now. Factoring benchmarks have little visibility into fault tolerance spinning up, which is the important progress right now. Factoring becoming a reasonable benchmark is strongly related to quantum computing becoming useful.
> Factoring becoming a reasonable benchmark is strongly related to quantum computing becoming useful.
Either this relation is not that strong, or factoring should "imminently" become a reasonable benchmark, or useful quantum computing cannot be "imminent". So which one is it?
I think you are the author of the blogpost I linked to? Did I maybe interpret it too negatively, and was it not meant to suggest that the second option is still quite some time away?
Under a Moore's-law-like scenario, factoring 21 happens only about ~5 years before factoring a 1024-bit number. With all the optimizations, factoring an n-bit number only requires ~n logical qubits, but most of those optimizations only work for large n, so 21 is only ~5 doublings away from a 1024-bit number.
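For scale (placeholder numbers, my assumptions rather than the parent's): if a no-shenanigans 21 needs on the order of ~50 logical qubits because the small-n tricks don't apply, and a 1024-bit number needs on the order of ~1600, then:

    from math import log2
    low, high = 50, 1600      # assumed logical-qubit counts, illustrative only
    print(log2(high / low))   # -> 5.0 doublings

At roughly one doubling per year, that's the parent's ~5 years.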
The other problem is that factoring 21 is so easy that it's actually harder to prove you've factored it with a functional quantum computer. For big numbers, your program can fail 99% of the time, because if you get the right result even once, you've proven the algorithm worked. 21 is small enough that it's hard not to factor, so demonstrating that you've factored it with a QC is fairly hard. As a result, I wouldn't be surprised if the first number publicly factored by a quantum computer (using error correction) was in the thousands instead of 21. By using a number that is not absolutely tiny, it becomes a lot easier to show that the system works.
[1]: https://quantumfrontiers.com/2026/01/06/has-quantum-advantag...
[2]: https://quantumfrontiers.com/2026/01/25/has-quantum-advantag...
[3]: https://quantumfrontiers.com/2026/02/28/what-is-next-in-quan...
[4]: https://arxiv.org/abs/2303.04792
[5]: https://arxiv.org/abs/2406.02501