Relevant quote from the briefing, which indicates they have not yet reached error correction takeoff velocity:
> It is known in the field that, when the physical error rate of qubits is high, the probability of logical error increases with increasing system size, whereas when physical error rates are low, increasing the system size leads to the desired exponential suppression of logical error. We feel that we are currently in a ‘crossover’ regime between these scenarios, in which increasing system size initially suppresses the logical error per cycle, but would, with increasing size, later increase error rates. Therefore, it is imperative that we continue to improve both qubit performance and system scale.
So this result is not yet the major breakthrough that would be required to build a scalable quantum computer.
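The "crossover" regime the briefing describes can be sketched with the standard surface-code scaling ansatz, eps_L ~ a * (p/p_th)^((d+1)/2): below the threshold p_th, increasing the distance d suppresses the logical error rate exponentially; above it, bigger codes do worse. The threshold and prefactor below are illustrative placeholders, not Google's measured values.

```python
# Toy model of logical error rate vs code distance d.
# p_th and a are assumed placeholder constants for illustration.
def logical_error_rate(p, d, p_th=0.01, a=0.1):
    """Surface-code scaling ansatz: eps_L ~ a * (p/p_th)**((d+1)/2)."""
    return a * (p / p_th) ** ((d + 1) / 2)

for p in (0.02, 0.005):  # physical error rate above vs below the threshold
    e3 = logical_error_rate(p, 3)
    e5 = logical_error_rate(p, 5)
    trend = "worse" if e5 > e3 else "better"
    print(f"p={p}: d=3 -> {e3:.2e}, d=5 -> {e5:.2e} ({trend} with size)")
```

The briefing's point is that current hardware sits near p ~ p_th, where increasing d helps only marginally, which is why both qubit quality and system scale still need to improve.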
I wonder why such statements are NOT prosecuted as libelous; then again, I suppose they have bigger fish to fry now (i.e. bigger offenses to go after, such as Sci-Hub)
But it would make sense for them to pursue such statements, because prestigious journals are all about 'status', 'signaling', and what people think; especially now that technology has made their former 'logistical' contributions redundant (i.e. printing the material and getting it to where it's needed), and now that the actual peer reviews are essentially volunteer labor, again sustained by their prestige.
> I wonder why such statements are NOT prosecuted as libelous
Because studies of the replication crisis have actually borne this out: journals with higher impact publish more results that fail replication than journals with lower impact, because those results are counterintuitive and "sexy". Being first to publish such novel or unintuitive findings raises a journal's profile/impact because those papers get cited more.
Even if I published my statement (which I believe to be factual, not merely humorous), Nature would not sue me for libel. All that would do is bring more attention to the Nature racket.
Anyway, I think there is a role for "sexy but wrong" journals, but that role is limited to extremely competent scientists working at the state of the art of their quantitative field. I don't think anybody should take what gets published in Nature and naively share it on social media with the claim that it proves or disproves something. The context required to evaluate a Nature paper on its merits is absolutely enormous.
> which I believe to be factual, not merely humorous
“Factual” and “humorous” aren’t opposites. I think “publishing in Nature is a strong signal that the results are wrong” is likely to be determined to be an opinion, regardless of whether you mean it seriously or as a joke.
The basic dividing line the courts have drawn between a “factual claim” and a “protected opinion” is whether the claim is objective and can be proven true or false.
In general it seems (to me, a non-lawyer) that your signal claim isn’t an objective one. There’s no hard line for when a journal would be a “strong signal” vs a “weak signal” vs “no signal” that something is wrong. It’s not really a statement that can be proven true or proven false, which is why I think it would be considered your opinion about Nature (even if a very serious opinion).
Are you suggesting that scientists in the UK can't say that Nature is a crap journal that publishes mostly errors? Or that Nature Publishing Group would sue an individual scientist for making such a statement?
Don't really see the point to invoke imperial hegemony in your criticism, it just makes you sound petty.
Since Science is published by the American Association for the Advancement of Science, it feels reasonable to presume US law on this front.
I will say, I have a strong preference for US libel law, and aversion to UK libel law, but that’s probably mostly my cultural upbringing and familiarity speaking.
> On January 1st, 2014 the Defamation Act 2013 came into force, requiring plaintiffs who bring actions in the courts of England and Wales alleging libel by defendants who do not live in Europe to demonstrate that the court is the most appropriate place to bring the action. Serious harm to an individual's reputation or serious financial harm to a corporation must also be proven. Good faith belief that a disclosure was in the public interest was made a defense.
Also, a libel case does not succeed when the accusation is true. In this case, it depends on what a "strong" signal is. That it is a signal at all is quite obvious and has been shown (as a general principle) in meta-studies of replication.
> It's entirely possible Scott was one of the anonymous submission reviewers.
Considering that Scott works in theoretical quantum complexity theory, I highly doubt that he reviewed this experimental quantum error correction paper.
* In an error correction code, you encode a logical bit/qubit into a set of physical bits/qubits.
* Error-correcting codes come in families, parameterized by an integer distance d. Increasing d gives a code with more physical bits/qubits, n, but also the ability to correct errors on a larger number of bits/qubits, j (for distance d, j = floor((d-1)/2)).
* If the error probability on each qubit is p, then in a code of size n there will be n*p errors on average. It should be immediately clear that if p is small, then n*p < j and the code can correct the errors that occur, but if p is large then n*p > j and there will be errors the code can't correct.
* If the code corrects all the physical errors that do occur, then there won't be a logical error (the value of the logical bit/qubit is unchanged); otherwise there will be a logical error. In summary, given a p, you have to pick the right-sized code from your family so that n*p < j and you don't incur any logical errors.
* Another way of saying the same thing is that if p in your hardware becomes small enough, then as you increase your distance d, your logical error rate will go down.
These guys are claiming that their p is small enough that the distance-5 code has a smaller logical error rate than the distance-3 code, which is indeed a breakthrough (if correct). No one has done something like this before, to my knowledge.
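The bullet points above can be made concrete with a classical repetition code, which has the same n*p vs j trade-off: encode one logical bit into d physical bits and decode by majority vote, which corrects up to j = (d-1)/2 flips. This is a toy Monte Carlo sketch, not the surface code from the paper; the flip probabilities are illustrative only.

```python
import random

def rep_code_failure_rate(p, d, trials=200_000, seed=1):
    """Monte Carlo estimate of the logical error rate of a classical
    d-bit repetition code with per-bit flip probability p."""
    rng = random.Random(seed)
    half = (d - 1) // 2  # j: maximum number of correctable flips
    fails = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(d))
        if flips > half:  # more flips than the code can correct -> logical error
            fails += 1
    return fails / trials

# Below the repetition code's 50% threshold, d=5 beats d=3;
# above it, making the code bigger makes things worse.
for p in (0.1, 0.6):
    e3 = rep_code_failure_rate(p, 3)
    e5 = rep_code_failure_rate(p, 5)
    print(f"p={p}: d=3 -> {e3:.4f}, d=5 -> {e5:.4f}")
```

The surface-code analogue of this "d=5 beats d=3" comparison is exactly what the paper demonstrates, just with a far lower threshold and much more machinery.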
# Criticism
* The results are limited to storage errors. All they are doing is initializing the logical qubit in some initial state and repeatedly doing error-correction on it, to simulate a qubit at rest while the computation is happening elsewhere on some hypothetical other logical qubits. They have not attempted to do any experiments with applying gates to the qubits. Those will likely yield a much larger error rate. In particular, they are only testing a single logical qubit here, but the interesting gates would be two-qubit gates between two logical qubits, which are necessary to do any non-trivial computation.
* The experiment is limited to 25 cycles of error detection. This means their experiment shows that the device could hypothetically implement a depth ~25 circuit. As you might realize, useful circuits have depth many orders of magnitude larger, so this continues to be a toy device.
The above is what immediately springs to mind, but I am sure the actual experts will soon chime in. My subjective opinion is that the technical achievements of just running the experiment are very impressive. It is a long journey to useful QCs, but this is a nice milestone along the way.
In the surface code the gates are all variations on idling. First you idle one way, then you idle another way for a bit, and the result is a gate. (The technical term is lattice surgery.) Because of that, it would be extremely surprising if the gates had notably different error rates from storage. Idling is already a very busy state of affairs.
In 15 years as a TCS researcher I have followed one heuristic: if something has the word "quantum" in it, I ignore it (articles, surveys, conference talks, projects, etc.).
This is more from personal ignorance/laziness and convenience than strong conviction: you cannot follow all areas, you have to make some choices how you spend your time, and this is one particular area that is easy to delimit. (EDIT: and if it does turn out to be a dead-end, I can be glad I made the right call.)
At times this policy has been quite hard to follow, and I may reconsider it sometime, but so far it has served me well.
It would be one thing to say that you don’t find the field interesting and think it is likely a dead end.
But that’s not what you’re saying. Instead you are saying that you don’t follow it because well, you can’t follow everything. And I agree with that. But in that case, you could go to every single HN topic and post “I don’t follow <insert topic here>, I’m just posting to tell you that I can’t follow everything. So far it has served me well to not follow everything.”
Which doesn’t seem particularly useful or contributory in any way.
Sorry, you are right. I didn't want to post something too negative about it. Obviously, I have to decide what I don't follow, and underlying such a decision is also a belief that the field is more likely to be a dead end than others. But I have no solid argument to back that up.
Within the field of CS, this is not a bad approach.
However, if you work in: biology, physics, or chemistry: quantum is a frequently used word. It covers far more than QC, entanglement, coherence, tunnelling, or any other crazy bits of quantum. It forms the basis for our atomic theory of matter and has led to extraordinary engineering and science projects.
I used to be excited by DNA computing, but it quickly became clear that regardless of any stated advantages of DNA computing, they were tiny compared to the modern working digital computer and the global infrastructure dedicated to improving it year after year (even after Moore's law putters out).
Interesting that the top rated comment on a post is related to how not reading the post (or any posts in the category) is optimal from a time management perspective.
Ironic that writing this comment as well as reading it is considered a worthwhile time expenditure.
But this is quite common -- on most bitcoin-related topics the top comment usually points out (correctly in my opinion) how it is all bogus. I don't have the knowledge to say the same about quantum computing, but my heuristic points in that direction.
No, that's years in the future at least. Factoring 21 without any compilation tricks requires doing a modular exponentiation under superposition. The best known way to do that requires two registers of workspace (10 qubits), plus a teensy bit of breathing room (2 qubits), so call it a dozen logical qubits. If all compilation tricks are banned, even the ones that are reasonable for huge numbers but work a bit too well for small numbers such as using small lookup tables to fuse some of the multiplications together, the overall computation takes on the order of 10000 gates. If you require it to work in one or two shots (otherwise even random coin flipping will work), then those gates need to have error rates below one in a hundred thousand and your storage needs error rates per round below one in ten million.
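A rough sanity check of the error budget above (a sketch, not the parent's exact model): if a run uses G gates and each gate fails independently with probability e, the whole run survives with probability roughly (1 - e)**G. For G on the order of 10,000, the per-gate error rate must be well below 1e-4 to succeed in one or two shots.

```python
# Back-of-envelope success probability for a G-gate run, assuming
# independent per-gate failures (an illustrative simplification).
G = 10_000  # order-of-magnitude gate count from the comment above
for e in (1e-3, 1e-4, 1e-5):
    p_clean = (1 - e) ** G
    print(f"gate error {e:.0e}: P(no error in {G} gates) ~ {p_clean:.3f}")
```

At e = 1e-5 the clean-run probability is around 90%, which is consistent with the "error rates below one in a hundred thousand" figure for a one-or-two-shot demonstration.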
The experiment being announced here tests different ways of storing 1 error-corrected qubit, to show that making it bigger can make it better. On an absolute scale, that logical qubit is still not good enough. It needs to be made even bigger. And there needs to be a dozen of them instead of one. And it's barely breaking even; you want strong gains in quality from adding quantity, not just minor gains. This means the underlying physical qubits still need more improvement. There's a lot to do!
Disclaimer: am on google quantum team, opinions are my own.
Sounds like you are just talking about Shor's algorithm. Presumably if you wanted to factorize 21 naively you could do it with fewer qubits; you just wouldn't demonstrate any kind of quantum speedup.
A topic I've been interested in lately is quantum machine learning[0]. Qubits as neurons make sense as a natural architecture to me. Though, as I understand we are still somewhat early in terms of the number of available qubits being useful.
While reading about the actual advantages of quantum machine learning over classical machine learning, one thing that came up was a type of error correction in quantum computing that could make backpropagation faster.
Does anyone who understands this better know whether this breakthrough might theoretically apply to that application (in the future, with more qubits, of course)?
Maybe I missed it, but the FT article claimed the improved error rate was due to improved cooling and better components rather than through better error correction. That’s not what the title says…
Nature is the premier science journal on the planet. The purpose of publishing in a journal like Nature is to share progress and methods with your peers, and to get credit for an idea or an observation.
The audience is scientists in the field, not consumers, not the public, and not investors.
Articles in Nature tend to get hype for two reasons: 1) it's the most prestigious journal in the world, and 2) it tends to accept only papers about important, novel progress in a field.
If you can't contextualize the information in Nature, I suggest you wait for the popular press to digest it for you.
(There are admittedly some critiques to be made of Nature, e.g. that it biases toward flashy results, but no one in their right mind would prefer to be published by PNAS.)
There are lots of things to criticize Google for; doing fundamental research isn't one of them. I hope the recent chatbot hype + market pressure won't force them to change strategy and drop more R&D in favor of products, though that seems to be what has happened historically.
This result is notably more open than most papers. The circuits executed and measurement data collected are available on Zenodo: https://zenodo.org/record/6804040 . You can do your own analysis of the claim.
https://www.nature.com/articles/d41586-022-04532-4
The FT article is a bit fluffy. Here are links to the paper and briefing in Nature.