> It is unclear how exactly one can verify that a "quantum code" actually runs on a quantum computer (instead of a classical node inserted between the cloud and the QC provider), and there is a huge window for fraud there.
I would like to know what the article's author would take as proof of this. All I can offer at present is my personal assurance as a team member helping to keep D-Wave Leap running. Every day, I work with a team of talented scientists, engineers, devops, and developers on everything from pipeline performance to cryogenics monitoring, and if there's one thing I am certain of, it's that our end-user submissions run on real hardware at a few millikelvin above absolute zero.
I am certain beyond any shadow of a doubt that we are using quantum effects to achieve low-energy solutions to difficult problems using annealing. I'm also certain that we're making huge progress; from our massive lead in terms of raw qubit count (5000+) to our work making each of those qubits connect to more and more of their neighbours with less noise over time. There are exciting things coming....
If other companies are getting away with anything less and promising they're doing real-time quantum computing in the cloud, (1) it would be a huge surprise, and (2) their lives must be a lot cushier than ours, because it is a lot of work keeping something like this running. You want to talk about the woes of having to manage on-prem and hybrid cloud workloads, well, does your datacenter have plumbing for liquid helium? You monitor the temperature on a few server racks, but do you have to measure ten thousand different datapoints about air and fluid temperature and pressure?
Honestly, it's a lot of fun, it's an exciting thing to be working on, and I don't agree with the author when he complains about brain drain. You want brain drain, go and look at the infinitude of startups hawking SaaS grift-ware like it's the best thing since sliced bread. Sorry if we find it more interesting to work on this than on the next B2B way of slicing off a chunk of someone else's revenue for providing something obvious. "We do these things because they are hard", as the saying goes.
Obviously it’s in the realm of possibility that a company could fake it, but I think if anybody was caught doing that, they’d tank their reputation within the community extraordinarily quickly.
I haven’t heard of any serious or reputable company doing this.
As for other things you’ve said, I definitely disagree with you and agree with the article that there is brain drain. That’s not to say every commercial entity is fully or continuously responsible for it, but D-Wave, IBM, Google, and every other company that over-promises or outright lies, now or in the past, has drawn people out of academia into frequently senseless industrial positions.
In those industrial positions, these people are afforded a place to do scientific research that is outside the university proper, but where they can still publish papers, collaborate, release the results of their research to the world, etc.
I think it's valuable, as not every researcher wants to work in the confines of academia. I worked in academia myself, albeit doing far more HN-poster typical things than a researcher would do, and while it can be an amazing experience, it's also a much lower salary, and you're not at all surrounded by the same energy, sense of purpose, and rapid pace of change. You're also dealing with lower budgets and lower expectations when the goal is purely to publish papers instead of creating actual, functioning devices that are not only workable, but eventually useful (which is a pre-requisite to being profitable in the long term).
As someone totally unfamiliar with this world, I'm wondering why it's painfully obvious? Slow?
The noise characteristics are pretty signature-like. It’d be an engineering effort unto itself to produce realistic-looking noise models and simulations.
Optimizers may also change which operations are occurring at the same time, or insert spin echo, or add other unexpected error mitigations. All of these things can make the circuit work better, so on the one hand they're nice wins. But on the other hand they can violate user expectations, and make debugging and modelling nightmares.
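To make "signature-like noise" concrete, here is a minimal sketch (a toy model of my own, not any vendor's actual noise characterization): a single-qubit depolarizing channel shrinks off-diagonal coherences in a characteristic way that a faked backend would have to reproduce consistently across circuits.

```python
import numpy as np

# Toy illustration: a depolarizing channel acting on a density matrix,
# rho' = (1 - p) * rho + p * I/2 for a single qubit.

def depolarize(rho: np.ndarray, p: float) -> np.ndarray:
    """Apply a single-qubit depolarizing channel with error probability p."""
    return (1 - p) * rho + p * np.eye(2) / 2

# Start in |+> = H|0>, the state an ideal Hadamard would produce.
plus = np.array([1, 1]) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

noisy = depolarize(rho, 0.1)
# The off-diagonal coherence shrinks from 0.5 to 0.45 -- the kind of
# systematic deviation from textbook values that gives real hardware
# its statistical fingerprint.
print(noisy.real)
```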
At the moment, a state-of-the-art supercomputer can simulate about 47 qubits; for 46 qubits the necessary resources are roughly four times smaller. So if by "a handful" you meant on the order of 30, then yes: only a handful of qubits.
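A quick back-of-the-envelope check of that scaling (my own arithmetic, not a benchmark): a full state vector of n qubits holds 2^n complex amplitudes, so memory alone doubles with every extra qubit, and runtime typically scales worse.

```python
# Memory needed to hold a full n-qubit state vector in complex128.

def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory for 2**n complex amplitudes at 16 bytes each."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 46, 47):
    print(n, "qubits:", statevector_bytes(n) / 2**40, "TiB")
```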
A density operator representing a mixed state of n qubits has 2^n × 2^n = 4^n entries, and a system with leakage into the second and third excited states (four levels per site) grows as 16^n.
For just 16 qubits, the density matrix alone would be about 4 billion complex numbers, so 8 billion floating-point numbers.
Even this isn’t a time-domain solution, where you might in practice need to solve the Lindblad master equation.
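For scale (my own arithmetic): a density matrix over n d-level systems has (d^n)^2 entries, so qubits (d=2) grow as 4^n and leaky four-level systems as 16^n.

```python
# Count the entries in a density matrix over n sites with `levels` levels each.

def density_matrix_entries(n_sites: int, levels: int = 2) -> int:
    dim = levels ** n_sites   # Hilbert-space dimension
    return dim * dim          # entries in the dim x dim density matrix

print(density_matrix_entries(16, levels=2))  # ~4.3 billion complex numbers
```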
This is all assuming we have a model for the noise, which in practice is highly non-trivial and very dependent on both the implementation of the qubits as well as their geometrical and material construction, and would take a good deal of science and engineering work to do accurately in a way that it reflects both control dynamics and a specific manufactured sample.
Yes there are solvers on the market in Python and Julia, but they don’t give you realistic noise computations for “free”.
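To make the point concrete, here is a minimal hand-rolled sketch (my own toy example, not one of those packaged solvers): forward-Euler integration of the Lindblad equation for a single qubit with amplitude damping and no Hamiltonian. Real noise models are vastly more involved.

```python
import numpy as np

# One qubit, amplitude damping at rate gamma, no Hamiltonian term,
# integrated with naive forward-Euler time stepping.

sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator sigma_-

def lindblad_step(rho, gamma, dt):
    """One Euler step of drho/dt = gamma*(L rho L+ - {L+L, rho}/2)."""
    diss = gamma * (sm @ rho @ sm.conj().T
                    - 0.5 * (sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm))
    return rho + dt * diss

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in excited state |1>
gamma, dt = 1.0, 1e-3
for _ in range(1000):                            # evolve to t = 1
    rho = lindblad_step(rho, gamma, dt)

# Excited-state population should have decayed to roughly exp(-1) ~ 0.37.
print(rho[1, 1].real)
```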
I also don't understand the point of simulating noise correctly. If you get a result from a computation on some vendor's black-box quantum computer, can you really know what type of noise produced that result?
Another source of skepticism about the hype: the result of a quantum computation is just a matrix product of predefined gates applied to an initial state. It is just linear algebra on large unitary matrices. Nothing quantum about it.
However, actual quantum computers propagate state vectors, not density matrices, as far as I know. You'd need to run the computation a large number of times, or have a large ensemble of identical sets of qubits propagated with the same algorithm, before a density matrix is needed to describe it, I would have thought. So saying you'd need to simulate a density matrix to simulate a quantum computer confuses me.
I know quantum physics but not quantum computing specifically, so I could be wrong - the grandparent comment sounds like they know what they're talking about otherwise.
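For what it's worth, the distinction fits in a few lines (a toy circuit of my own choosing): an ideal run is just a state vector pushed through a product of unitaries; a density matrix only enters when describing noise or an ensemble of runs.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]])                 # Pauli-X gate

psi = np.array([1, 0], dtype=complex)          # |0>
for gate in (H, X, H):                         # H X H acts like Z on |0>
    psi = gate @ psi

print(np.round(psi, 6))                        # amplitudes of the final state
```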
This is a wonderful time to be a younger person in the industry; there are tons of job opportunities and every hiring manager is hungry for talented, dedicated people.
That said, we do tend to be hiring for more senior roles, considering the small size of the company. If your goal is to land in QC eventually, make sure to have a really broad skill set, get some experience working on hard projects, and apply every year or so until you find a role in the field. No matter which firm you end up at, it's an exciting industry where you can feel like you're actually helping with a grand human effort to push the state of the art.
The organizers don't seem to know specifics about D-Wave when I've asked, but do you think these kinds of simulations will ever be able to run on D-Wave hardware?
From my reading the article's point was really to note the risk of fraud. It did not claim that this kind of fraud is actually happening now.
I also think that the article is more nuanced on the topic of brain drain than you make it out to be. Is your argument not just whataboutery? And what do you think of the article's claim that "it may not be a zero-sum game"?
In my post, I literally said: "I would like to know what the article's author would take as proof of this."
What, honestly, would it take? I have been thinking about what evidence I personally would want if I were in your position of incredulity. It's different when you sit next to the things and see the people passionately building them and keeping them working day in and day out, I suppose; I can see with my own eyes that there is no fraud taking place, but I can't exactly bring everyone in the world in for a lab tour.
This is not about you or D-wave. Instead it is like there is a shop with so little supervision that every customer has to be trusted to not steal things. So if a customer asks: "What can I do to convince the world that I am not stealing?" then the answer is clear: you either show your shopping bag to everyone in the world, or we need an entirely different kind of shop.
The point of this part of the article is that, for quantum computing, there is an equivalent structural problem. Until it is absolutely manifest that quantum computers do useful things that classical computers cannot, the potential for fraud remains real.
It isn't as if we've never seen bugs show up in hardware and software pertaining to RNGs or other math-related issues. Why should we not give the same benefit of the doubt to quantum compute services from credible providers? You should know you are getting an "as correct as we currently understand it" system; that has been the case for decades. This all reminds me of when the Pentium FDIV defect showed up. Today, however, we've abstracted that away to compute hosting providers, and we have less tooling to check our results against.
Well, that's what patents are for, right?
You can also try to fake good results (or even have truly good results!), and trust me, the scientific community will require unambiguous proof. D-Wave went through the wringer pretty thoroughly some years ago for its claims.
There’s another angle too: If the service actually does something commercially useful or better, in some sense, it might not matter what the specifics of the implementation are. Ultimately customers are going to look at price and performance and make decisions that way.
If a scientist or company that purportedly does science doesn't do that, they’re not taken seriously by other members of the scientific community. No one is truly believed at face value. I don’t see any significant probability of bamboozling the community of scientists through abject fraud. And there hasn’t been any such issue yet. (There have been retracted published claims, but the retractions happened as a result of scientific scrutiny.)
There is such a push to use AI and quantum that, in order to get funding or publish papers, you need to say you're applying XYZ AI technique to solve a well-known engineering problem.
Because funding agencies want to sell to their investors or government managers that they are in the new hot trend, if you want to get funding money you need to have something related in your proposal. Of course, having previously published papers on the topic helps so that motivates people to send papers on the topic. The journal editors know that the topic is hot so they prioritize papers on this topic as their metrics will increase.
The result is tons of rushed papers saying "Applying XYZ AI technique to well-known engineering problem", usually without examining previous research methods or providing proper benchmarks.
In the end, the only barrier to this is the individual moral standing of each researcher. Unfortunately, careerism usually trumps it.
Sorry if this was too bleak.
How different is AI from good old fashioned stats again?
As a physicist who knows a little bit about quantum computing, my understanding is that we’re far far away from building usable quantum computers (it’s still at an applied research stage, and nowhere close to “just an engineering/design problem”) — all hype be damned.
Is the issue with quantum computers somewhat similar? I know next to nothing about the mechanical aspects of them, but based on the what I've read it is considered a breakthrough whenever another qubit is added.
While analog computers are almost entirely forgotten now, they were widely used even into the 1970s. They could solve differential equations almost instantaneously, while digital computers needed to chug through calculations before producing an answer. But digital computers steadily became faster until they could generate answers in real time, while being more accurate and easier to program.
One difference is that (in principle) it is possible to do quantum error correction. Essentially this turns a number of imperfect "physical" qubits into a perfect "logical" qubit. However, this requires extremely low error rates of the physical qubits to begin with and creates a lot of overhead. All existing quantum computers are much too small and noisy to implement quantum error correction except for some proof-of-principle experiments. I am somewhat pessimistic that any of the current technologies can be improved enough to make it possible in practice.
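The flavor of the idea can be shown with the classical ancestor of QEC (a toy sketch of my own, not a quantum code): a 3-bit repetition code with majority-vote decoding turns three noisy physical bits into one more reliable logical bit, but only if the physical error rate is already low.

```python
from collections import Counter
import random

def encode(bit: int) -> list[int]:
    """Repetition code: copy the logical bit onto three physical bits."""
    return [bit] * 3

def noisy_channel(bits: list[int], p_flip: float) -> list[int]:
    """Independently flip each physical bit with probability p_flip."""
    return [b ^ (random.random() < p_flip) for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote corrects any single bit-flip."""
    return Counter(bits).most_common(1)[0][0]

random.seed(0)
p, trials = 0.05, 10_000
errors = sum(decode(noisy_channel(encode(0), p)) != 0 for _ in range(trials))
# Logical error rate ~ 3*p**2 ~ 0.007, well below the physical rate 0.05 --
# the overhead-for-reliability trade described above.
print(errors / trials)
```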
if you don't think these are computers then you just don't know what a computer really is
they're not useful at all but they're still real actual unadulterated computers.
It’s limited because it has only 3 bits, but if you play with the input bits, the output bits change!
btw the circuits in quantum circuits clearly aren't just combinational since they evolve in time.
“It’s not very useful but it can make computations” is a very low bar to pass, and very basic discrete (classical) logic can clear that without problems.
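A concrete version of that low bar (my own choice of circuit): a classical full adder, pure combinational logic, maps three input bits to two output bits, and the outputs change when you play with the inputs.

```python
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Return (sum, carry_out) for three input bits."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

for bits in [(0, 0, 0), (1, 0, 1), (1, 1, 1)]:
    print(bits, "->", full_adder(*bits))
```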
wut? Different QEC codes produce differently sized logical qubits, and there are absolutely machines with enough physical qubits to amount to a logical qubit:
and realized QEC is definitely not far off
>for quantum computer to be of any use it should have >10k logical qubits
i'm aware and yet it's false to claim that these things don't compute. for the time being they're noisy computers but they're still computers.
10 qubits is enough to simulate a pair of particles interacting already.
No squinting required.
There has also been very little, if any, actual research in other fields powered by quantum computing.
We can keep moving the goal posts and claim that we have made it, but the fact is that QC keeps overpromising and under delivering.
Well, great; that's an opinion. The thing is, if a device uses the quantum-mechanical properties of the universe to do calculation, then it is a quantum computer; asserting that it isn't one is a matter of semantics and categorization, since what you're really doing is redefining the term "quantum computer" to inherently include "gate model" as part of it, which is not a foregone conclusion yet.
I believe, from what I've read, that at this point current and projected gate-model quantum computers will not be competitive on optimization problems where quantum annealing will be, so there is definitely utility in continuing to pursue this research and development exercise.
> There has been also very little if any actual research in other fields powered with quantum computing.
Here's our latest:
As a physics playground, annealers are very compelling, and materials research will definitely benefit from these devices.
Except this definition includes classical computers as well. At the scales of current transistors, quantum effects are required to explain their inner workings, and they are used to perform computations.
(To be clear, there’s a ton to be excited about in quantum computing, and there are truly legitimate careers to be had both as a scientist and as an engineer. But what’s exciting currently isn’t very marketable or fashionable!)
Science is about asking questions and forming hypotheses to answer those questions and trying to falsify these. An influx of money is good for that process as the worst case here is that it will pile up a lot of documented falsification and thus lead to better questions. Which is still a good outcome. Once you have people asking the right questions and forming better hypotheses, everybody wins.
Just because there are a lot of bad quantum computing startups doesn't mean that there aren't some more serious ones that are actually making progress. We've seen the same with all the smoke and mirrors AI BS coming out of silicon valley. Lots of glorified if .. else .. logic that gets peddled as deep learning. But in between all the BS, there are a few companies actually doing cool stuff and making some genuine progress.
The AI hype didn't lead to documented falsification or better questions, but a mountain of bullshit literature that was designed to get grants and satisfy the "publish or perish" imperative.
I've heard claims that quantum computers "connect to alternate timeline versions of themselves" and would allow us to communicate with people from parallel universes. I've heard that they'll let you bypass traditional cryptography with such ease that you could steal all of the bitcoins in circulation in an afternoon. I've heard that it could guarantee a lottery win with only 100 picks.
A bunch of high-concept nonsense that is simply not what Quantum computing is going to enable.
I don't think China has the same problem we do, where people worry about "hype" and all this nonsense.
Jian-Wei Pan said they want to use their photon QC device and boson sampling to solve graph theory problems.
I don't understand boson sampling enough to know if it makes sense for graph theory but obviously they have some ideas in mind. Quantum graph theory, quantum network science ideas I suppose.
The video I watched also has only 600 views, and the sole comment asks whether you can use a photonic quantum device to mine bitcoin.
That is a cultural problem we have, not a problem with science that humans have.
There is plenty of experimental research, and early practical results, being achieved in quantum computing. There is also lots of snake oil being peddled by sleazy entrepreneurs. This is true for all developing fields.
Hype causes people who are not familiar with or well informed about the matter to get into it, hoping for big returns. Of course, they'll come out severely disappointed, but science and technology as a whole will have advanced thanks to their efforts.
Going blind into something with a lot of hype is often an "ice-breaker" for humanity into new areas of study.
1. Brain drain of talent
2. Ponzi schemes
3. Damage to the reputation of science
Another case of "for the love of money is the root of all kinds of evil".
ML already has a massive impact on industry and society as a whole. The future of many careers will forever be altered even by current ML application, let alone future developments.
From automated face recognition, to customer service, job interviews, risk assessment and protein folding, ML has become part of our daily lives already to varying degrees (of both impact and success).
It's a field that won't go away and will only grow and probably change quite a bit in the next decades. Admittedly we're far from a Butlerian Jihad-situation, but there's no denying that ML is much more than just hype.
AGI, now that's a different story.
Nowadays even language skills are assessed automatically:
The products have problems, though:
 Those aeroplanes that can fly from London to Australia in 2 hours
 Cold Fusion
 Quantum computing
Quantum Computing though is already here, it's just not practical for much outside of a lab setting.
There is perhaps some more debate about D-Wave's device, both its status as a QC and its usefulness.
"U.K.-based Reaction Engines is developing technology for Synergetic Air-Breathing Rocket Engines (SABRE), which could one day allow aircraft to fly up to five times faster than the speed of sound — that’s Mach 5 or 3,836 miles per hour.
At that speed, hypersonic flights between London and Australia could be over in just four-and-a-half hours."
"FLIGHTS FROM UK-AUSTRALIA COULD TAKE JUST FOUR HOURS BY 2030"
If you downvote, please also include a link to something that proves a quantum computer exists (outside of theoretical papers); I'm genuinely interested in being proven wrong.
Maybe your definition of “quantum computer” doesn’t agree with the field at large. What’s your definition?
What do you think about Google’s supremacy experiment? Do you have objections to their results? 
This is one of many papers by Google, IBM, Rigetti, and many other quantum computer manufacturers.
A linear congruential generator from Knuth programmed on a classical computer produces controlled pseudorandom numbers. So what? Whether a LCG or a program to produce controlled samples from a goofy Porter-Thomas distribution, they’re both coming from machines that were programmed to do a job. If the machine was neither a computer nor programmable, then the job could not be done.
You haven’t refuted the point of the published existence of a computer. The paper includes both the results of a program running on a quantum computer and a comparison against the same results simulated on a classical computer, the latter taking several orders of magnitude longer to complete at several orders of magnitude higher cost.
The randomness is a total red herring and doesn’t contribute to the discussion as to whether a programmable computer performed a computation or not. It did, and it was verified as such.
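For reference, the LCG idea mentioned above fits in a few lines (the constants below are those commonly attributed to Knuth's MMIX generator; treat the specifics as illustrative): x_{k+1} = (a·x_k + c) mod 2^64, entirely deterministic pseudorandomness from a programmed classical machine.

```python
# Linear congruential generator with MMIX-style constants.
A = 6364136223846793005
C = 1442695040888963407
M = 2 ** 64

def lcg(seed: int):
    """Yield an endless stream of pseudorandom 64-bit integers."""
    x = seed
    while True:
        x = (A * x + C) % M
        yield x

gen = lcg(42)
print([next(gen) for _ in range(3)])
```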
Google’s experiment isn’t “the state of quantum computing”. It’s a single scientific experiment among thousands. That particular experiment demonstrated a hitherto unconfirmed claim about whether a quantum computer can do a computer-science problem more efficiently. The theory already said it was true, but it hadn’t yet been demonstrated experimentally.
It’s also an experiment that lay people, and even the HN tech crowd, don’t care about. Because it’s deep and complicated science, not a sales pitch.
Formulating problems into the Ising model is still a difficult task, further into the realm of mathematics than the average HN poster usually goes, but it is indeed proving useful for real-world applications, e.g. with logistics and optimizing the order things are done in.
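As a tiny illustration of such a formulation (my own toy example, not any production workflow): max-cut on a triangle maps directly to an Ising energy minimization, the form that annealers accept.

```python
import itertools

# Max-cut on a 3-cycle: give each vertex a spin s_i in {-1, +1};
# minimizing E = sum over edges of s_i * s_j maximizes the cut.
edges = [(0, 1), (1, 2), (0, 2)]

def ising_energy(spins):
    return sum(spins[i] * spins[j] for i, j in edges)

best = min(itertools.product([-1, 1], repeat=3), key=ising_energy)
cut_size = sum(best[i] != best[j] for i, j in edges)
print(best, "cuts", cut_size, "edges")   # a triangle's max cut is 2
```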
The article makes lots of very good points. QC is in a hype stage. That doesn't mean there is no valid research or no actual physical machines doing "computing" with a "small" number of bits; it means the distance between what is promised now, explicitly or implicitly, and what can realistically be achieved over the next 20 years is so huge that it can easily be considered a lie.
Current QCs are good for one thing and one thing only: "simulating" quantum systems. But in that same sense, a wind tunnel is a fantastic "fluid-dynamics" computer providing a "realistic simulation" of how air flows over a wing. Nobody is solving actual optimization problems with a QC in any meaningful sense, much less is there a single problem in OR that can currently be solved only by using a QC.
BTW the company you work for has a long history of making grandiose claims which are later totally refuted forcing you to backtrack, as anyone who has read Scott Aaronson's blog knows.
That's only true of gate-model machines; for example, Google's recent claims of supremacy seem to boil down to something like that. Premature, to say the least, though the work by their team is nonetheless impressive. The pressure to publish is very real! Investors (even when it's a company-internal effort like Google's) still expect some kind of progress even if ROI is not yet being delivered.
Quantum annealers are already solving optimization problems well. One of the goals here is to democratize access to high-quality, high-performance optimization capabilities.
> Nobody is solving any actual optimization problems with a QC in any meaningful sense
Not quite true. Check out what we did in Lisbon:
Clearly these are early days; but the way detractors talk about this effort is perhaps best addressed through a metaphor. Imagine you were interested in Babbage's or Turing's work early on, but your question for them was, "but how will I use this device to get dinner delivered?"
It's easy for us with a hundred years of progress to see how a connected world enables those kinds of questions to have very robust answers, but it would have been very difficult to see that when you're standing in front of an Analytical Engine full of inscrutable steampunk complexity. If the applications we enjoy so much on the modern Internet were the actual end goal of the technology, it would have been considered unfit for purpose for a very long time indeed! Intermediate, small-scale applications are what paved the way forward, and so too they will be what enables quantum computing research to continue for the likely decades that it will take before it is integrated into apps that consumers use on a day to day basis.
That said, the sort of applications that we talk about around here are often in the realms of science-fiction - imagine the kinds of scheduling and coordination problems that people and machines find difficult today being solved cheaply using open-source software and widely available cloud QPU capabilities. Imagine traffic routing where every single person's destination is calculated together as a tremendous optimization problem in near-realtime to maximize the throughput of a city's streets, or where large-scale resource distribution problems can be solved in maximally equitable ways to help deal with the kinds of challenges we all know are looming on the horizon.
> the company you work for has a long history of making grandiose claims
The company I work for has a long history of delivering tremendous technological advancements, from fabrication to algorithms to production-grade online systems that anyone can access for free. We haven't let Aaronson's opinion stop us from working on that.
This article has a very elitist tone, which I can mostly forgive because of the subject matter. Hell, I even understand where they're coming from with regards to how poorly AI was marketed and integrated into our modern workflows. However, I think the conclusion you're reaching is that "transparency matters", which is true (albeit not particularly profound). The best solution I can imagine is ensuring that the next generation of programmers has access to quantum runtimes.
Why do we need to ensure access to quantum computers to programmers?
Seriously, any programmer can fire up a quantum simulator for any number of the quantum instruction languages and be more productive than with a real quantum computer.
Because of the hype, we’ve all been led to believe that we are “ready” to program quantum machines and we just need to train more people through boot camps, hackathons, and summer schools. It’s simply not true.
The quantum computers of today are programmable (barely), and the programs do run (though you can only run a dozen or so “statements” before you get junk results), but they’re so wildly bad compared to what you’d expect out of a textbook that you easily conclude “the scientists have work to do”.
Scientists do have more work to do, but it seems like every month there’s a perfectly respected scientist who gets a $15MM series A and starts spouting the same misinforming junk that quantum computing is going to help FedEx with logistics, or steel mills with operations planning. Then they hire a bunch of good academic people, pay them software-engineer salaries, and string them along to help them perpetuate the fundraising machine—not by actually doing science of course—hoping to also have a quantum computer/software/applications/algorithms be built as a by-product.
Money is very attractive to people, especially physicists who frequently find themselves jumping ship for an alternative, higher-paying career. There must be around 100 quantum companies now, most of them startups, and—to my knowledge—zero of them providing anything demonstrated to be a valuable commercial product. Some of them are definitely doing good work here and there, but in the bigger picture, the profit motive—whether shareholder value or venture capital returns—consistently undermines their ability to do research.
There is also a good chance that QC will remain a small niche in the computing landscape even with fully functional QCs, similar to DSP programming, hardware, or real-time code. QC algorithms have classical parts that run on classical computers; very little of a program's actual logic is related to quantum effects, even for something like Shor's algorithm.
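To illustrate how much of such an algorithm is classical (my own sketch; the quantum order-finding subroutine is replaced here by brute force): given the period r of a^x mod N, the factors of N fall out with nothing but modular arithmetic and gcd.

```python
from math import gcd

def find_period(a: int, N: int) -> int:
    """Classical stand-in for the quantum order-finding step of Shor."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N: int, a: int) -> tuple[int, int]:
    """Everything except period-finding: the post-processing is classical."""
    r = find_period(a, N)
    if r % 2:
        raise ValueError("odd period; pick another a")
    y = pow(a, r // 2, N)
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical_part(15, 7))   # factors 15 using base a = 7
```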