I find this an interesting case because most of us probably can't rely on very much of our own reasoning to evaluate, and rely on beliefs about Schneier or Aaronson or other posters here.
At least I noticed I was less skeptical than I otherwise would have been because the earlier post came from Schneier, despite some very critical comments about the paper here. And I noticed that this slight hesitation to be skeptical went away completely with Scott's post.
It looks like the paper specifically claims to create an optimized quantum algorithm for SR-pair finding, which is the most time-consuming part of Schnorr's factoring algorithm. It starts with a classical lattice problem, defines the optimization problem in a specific form (3), and then maps it to a Hamiltonian built from a lattice matrix and the Pauli-Z matrix ((1, 0), (0, -1)) (4); and if we have a Hamiltonian, we can build a quantum circuit for it.
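As a minimal sketch of the generic trick being described (encoding a binary optimization problem in a diagonal Hamiltonian built from Pauli-Z operators), here is a toy example. The cost function below is made up for illustration and is not the paper's actual lattice problem; only the substitution x_i = (1 - Z_i)/2 and the Kronecker-product construction reflect the standard technique.

```python
import numpy as np

# Pauli-Z and identity, the building blocks mentioned above.
Z = np.array([[1, 0], [0, -1]], dtype=float)
I = np.eye(2)

def z_on(qubit, n):
    """Pauli-Z acting on `qubit` of an n-qubit register (Kronecker product)."""
    out = Z if qubit == 0 else I
    for k in range(1, n):
        out = np.kron(out, Z if k == qubit else I)
    return out

# Toy cost function f(x0, x1) = 3*x0 + x1 - 5*x0*x1 over bits x in {0, 1},
# encoded with the standard substitution x_i = (1 - Z_i) / 2.
n = 2
x0 = (np.eye(2**n) - z_on(0, n)) / 2
x1 = (np.eye(2**n) - z_on(1, n)) / 2
H = 3 * x0 + x1 - 5 * (x0 @ x1)

# H is diagonal, so its smallest eigenvalue equals the classical minimum of f,
# and the minimizing index, read in binary, is the optimal bit string.
energies = np.diag(H)
best = int(np.argmin(energies))
print(best, energies[best])  # index 3 = bits (1, 1), energy -1.0
```

Once a cost function is in this Hamiltonian form, finding its ground state is exactly the kind of task variational quantum algorithms (like the QAOA variant the paper uses) are aimed at.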
The proof of the reduced memory complexity is linked in section 3.1, but it is quite long-winded. It seems the classical Schnorr lattice-reduction approach to factoring integers has a space complexity of roughly O((log N)^α / (α log log N)) in the paper's notation, and according to the authors, since SR-pair finding (the time-consuming piece of Schnorr's algorithm) can be mapped to a Hamiltonian, we then get a quantum factoring algorithm of the same complexity.
I don't know why you're being downvoted. I think it's not unusual given the field; certainly some fields have a much larger average number of authors than others.
This is not experimental particle physics, where CERN just publishes everyone in CMS or ATLAS.
Apparently the most recent ones just credit the * Collaboration, but I distinctly remember seeing papers with hundreds of names in the author list.
I did graduate work in Materials Science and the understanding in our lab was to be very skeptical of papers from China, particularly from the State Key Labs as they were very often impossible to replicate and ended up being useless. Seems relevant to this paper as well.
There are tons of Chinese-Americans, Chinese-Europeans, and Chinese immigrants living all over the world who do research, and their work is not at all suspect. The reason Chinese papers are suspect has to do with the academic and financial incentives that exist in the modern-day People's Republic of China. That said, it's definitely possible that Chinese papers are also perceived as suspect for racist reasons.
Can you explain, in detail, why you believe that making a statement about the scientific output of a country, as measured by paper quality, is necessarily racist? In particular, I'm going to challenge you on this because the "racism" argument has been used as an easy cop-out to denigrate an opinion all-too-commonly.
Please just explain your thinking of why it's "racist". Assume your reader is aware of the effects of socioeconomic status, national ambitions to become a science powerhouse, and the incentives to publish groundbreaking work.
The criticism was on the country, representing its system of academic publications, not on Chinese people and characteristics unique to their genetics or similar.
Your argument is similar to calling anybody who criticizes Israel's policies an antisemite. You're rightfully being downvoted.
One of the most fun (?) parts of academia is the unique blend of frustration and satisfaction that results when a shoddy paper somehow clears peer review only to get eviscerated when it lands on the desk of an actual expert.
"the detailed exploration of irrelevancies" LOL, if that isn't a "time-tested" method for making a paper sound academic, I don't know what is.
It hasn't cleared peer review, it's a preprint (which is pretty common in cryptography, peer review usually happens when results are already old news).
This is not always the case. SA and Sabine are the hyper-rare outliers in this regard. Fields like biology have no such figures, for a variety of reasons (I'm excluding some celebrities involved in pointing out actual misconduct rather than sloppy work).
As a former researcher in the sciences, I get the feeling that "cargo cult" research is not all that uncommon, especially when an idea gets embedded in the popular stream of thought. There is a whole cluster of cargo-cult publishing around evolutionary theory because of misunderstandings of natural selection. In mathematics there are Gödel's incompleteness theorems, and in computer science we have quantum computing.
With regard to quantum computing, I find it as irritating as the author of the blog. And I honestly think the internet is to blame, because it has increased the reward for being notorious, at least in the sense that there is more attention bestowed upon you for making spurious claims. And I believe the internet is also to blame for people desiring such attention, because the technology of the internet has displaced genuine, social interaction with superficial fluff that bears little resemblance to what humans actually need to feel socially fulfilled.
Why would they need to have done so? The blog post argues there's no evidence this is any faster than classical Schnorr's algorithm, and classical Schnorr's algorithm can easily factor a 48-bit number.
And as the post also makes very sure to point out, Schnorr != Shor.
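To give a sense of why a 48-bit demonstration is unimpressive classically: even a generic method like Pollard's rho (not Schnorr's lattice algorithm, just a textbook factoring routine used here for illustration) dispatches numbers of that size almost instantly. The semiprime below is 49 bits, the same ballpark as the paper's demonstration.

```python
import math
import random

def pollard_rho(n):
    """Pollard's rho: returns a nontrivial factor of a composite n."""
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        y, c, d = x, random.randrange(1, n), 1
        while d == 1:
            x = (x * x + c) % n        # tortoise: one step
            y = (y * y + c) % n        # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                     # gcd == n means a bad cycle; retry
            return d

# The millionth and two-millionth primes; their product is a 49-bit semiprime.
n = 15485863 * 32452843
p = pollard_rho(n)
print(p, n // p)
```

Pollard's rho runs in roughly n^(1/4) steps, so a ~50-bit composite takes only a few thousand iterations; genuinely hard factoring instances (like RSA moduli) are hundreds of bits larger.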
<< If I’m slow to answer comments, it’ll be because I’m dealing with two screaming kids.
This is about the only downside (and simultaneous benefit) of staying at home. I am genuinely glad he weighed in. I do not have enough background knowledge to evaluate the initial claims.
While it's indeed true that the top two threads' root comments are sceptical, replies to them, and further threads, present counter-arguments — one could easily come away from reading the HN discussion thinking that there's something to this paper. (And indeed, see https://news.ycombinator.com/item?id=34261645)
Scott Aaronson is not a quantum-computing skeptic; quite the opposite, he has always seemed optimistic about its prospects, and his work has been fundamental to progress in the field. He just does not like hyped-up work that has little substance. If anything, Bruce Schneier should have been a bit more skeptical: this type of hyped-up, unreviewed preprint appears very frequently these days.
I think he was fairly skeptical but said the proof would come when someone tries to implement it; he now links to Aaronson's take in his original blog post.
Just wanted to point out that Scott Aaronson is not one of those anti-quantum curmudgeons. Besides bashing companies with questionable claims, I think he's been pretty open-minded about looking for problems and algorithms with actual quantum advantages.
As someone who lived through one AI winter and seriously fears another, I view fighting hype as important to allowing useful research to happen.
'Superposition' and 'entanglement' have been overly mystified in QM in general, and the difference between probabilistic Turing machines and the typical non-deterministic Turing machine examples is important.
While probabilistic Turing machines are a form of non-deterministic machine, they don't behave the way most textbook examples suggest.
Same problem with elementary descriptions of entropy so it isn't just the quantum fields problem.
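A minimal sketch of the distinction being drawn (a toy model, not the formal Turing-machine definitions): represent one computation as the set of leaves of a coin-flip tree, then compare the two acceptance rules. The `outcome` function here is a made-up example computation.

```python
import itertools

def outcome(flips):
    """Hypothetical branch result: accept iff at least two of the flips are heads."""
    return flips.count(1) >= 2

# Enumerate all branches of a machine that flips k fair coins.
k = 3
leaves = [outcome(f) for f in itertools.product([0, 1], repeat=k)]

# Nondeterministic acceptance: accept iff ANY branch accepts.
ntm_accepts = any(leaves)

# Probabilistic acceptance: the machine accepts with this probability.
# Bounded-error classes like BPP additionally require the probability to be
# >= 2/3 on yes-instances and <= 1/3 on no-instances, not merely nonzero.
accept_prob = sum(leaves) / len(leaves)

print(ntm_accepts, accept_prob)
```

The point of the contrast: a nondeterministic machine "magically" finds a single accepting branch, while a probabilistic machine must make the accepting branches a reliable majority, which is a much weaker (and physically realizable) resource.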
We are perhaps in something of a "quantum winter" at the moment, given the activity on the financial side of the QC industry as far as reverse listings, SPACs, and mergers have gone.
I work in this field because I find it compelling, and the challenge of finding a worthy business case beyond the fantastical potential is one I feel is worth my efforts and years. But I am equally open to failure, where such failure yields an advance in our learning.
Aaronson is always an entertaining voice in the industry, although his focus on AI means he does less than I would hope to nudge us along at times. But he was in fine form at the Q2B conference in Santa Clara recently, and my own contributions to the industry are nowhere near enough for me to dismiss his thoughts as valuable to the discourse, especially when he pushes back on the emotional velocity we sometimes have.
Scott Aaronson is more than willing to praise research whenever there has been actual progress made. There is just a lot of bullshit in his field (or maybe there is a normal amount of bullshit but it is amplified more by the media).
The bullshit ratio in quantum computing is the highest I've seen in any field, followed only by ML (the difference being that there is ML non-bullshit, while quantum computing hasn't been shown to be useful for any real problem, but there have been huge advances in the underlying technologies).
To my knowledge the only "practical" application of quantum sensing so far has been the use of squeezed light in some gravitational wave detectors. However, this has absolutely nothing to do with quantum information science. Getting an actual advantage out of quantum effects has so far been remarkably difficult.
Answering here, but this really covers the sibling comment from dekhn.
To me it seems quite wild to have quantum sensing separated from the rest of quantum information science. It would be like saying that classical SNR considerations are unrelated to Shannon's introduction of error-correcting codes (the birth of information theory). But if that is your preference, it does not make much sense to argue. Either way, most scientists who work on quantum information science also see their work applying specifically to sensing.
Similarly, it seems strange to me to insist on specifically focusing on quantum computing, when the majority of technology developments in quantum information science apply both to sensing and to computing (one of which is simply easier thanks to its analog nature).
I think it's a stretch to relate that to quantum computing, which normally describes making systems that can... well, compute! Quantum sensing is more an application of quantum theory and exploitation of quantum effects to improve photonics applications.