Hacker News new | past | comments | ask | show | jobs | submit login
Quantum computing hype is bad for science (linkedin.com)
136 points by nkurz on July 22, 2021 | hide | past | favorite | 126 comments

I take issue with one part of this. Disclaimer, I work for D-Wave....

> It is unclear how exactly one can verify that a "quantum code" actually runs on a quantum computer (instead of a classical node inserted between the cloud and the QC provider), and there is a huge window for fraud there.

I would like to know what the article's author would take as proof of this. All I can offer presently is my personal assurance as a team member helping to keep D-Wave Leap running. Every day, I work with a team of talented scientists, engineers, devops, and developers to help ensure everything from pipeline performance to monitoring cryogenics, and if there's one thing that I am certain of, it's that our end-user submissions run on real hardware at a few millikelvin above absolute zero.

I am certain beyond any shadow of a doubt that we are using quantum effects to achieve low-energy solutions to difficult problems using annealing. I'm also certain that we're making huge progress; from our massive lead in terms of raw qubit count (5000+) to our work making each of those qubits connect to more and more of their neighbours with less noise over time. There are exciting things coming....

If other companies are getting away with anything less and promising they're doing real-time quantum computing in the cloud, (1) it would be a huge surprise, and (2) their lives must be a lot cushier than ours, because it is a lot of work keeping something like this running. You want to talk about the woes of having to manage on-prem and hybrid cloud workloads, well, does your datacenter have plumbing for liquid helium? You monitor the temperature on a few server racks, but do you have to measure ten thousand different datapoints about air and fluid temperature and pressure?

Honestly, it's a lot of fun, it's an exciting thing to be working on, and I don't agree with the author when he complains about brain drain. You want brain drain, go and look at the endless parade of startups hawking SaaS grift-ware like it's the best thing since sliced bread. Sorry if we find it more interesting to work on this than on the next B2B way of slicing off a chunk of someone else's revenue for providing something obvious. "We do these things because they are hard", as the saying goes.

To add to this as a former employee of a different quantum company that provides cloud services, there was absolutely no funny business about faking results. If you asked for quantum computer results, you got them, and it’s painfully obvious too.

Obviously it’s in the realm of possibility that a company could fake, but I think if anybody was caught doing that, they’d tank their reputation within the community extraordinarily quickly.

I haven’t heard of any serious or reputable company doing this.

As for other things you’ve said, I definitely disagree with you and agree with the article that there is brain drain. That’s not to say every commercial entity is fully or continuously responsible for it, but DWave, IBM, Google, and every other company that currently or formerly over-promises or outright lies has drawn people out of academia into frequently senseless industrial positions.

> but DWave, IBM, Google, and every other company that currently or formerly over-promises or outright lies has drawn people out of academia into frequently senseless industrial positions.

In those industrial positions, these people are afforded a place to do scientific research that is outside the university proper, but where they can still publish papers, collaborate, release the results of their research to the world, etc.

I think it's valuable, as not every researcher wants to work in the confines of academia. I worked in academia myself, albeit doing far more HN-poster typical things than a researcher would do, and while it can be an amazing experience, it's also a much lower salary, and you're not at all surrounded by the same energy, sense of purpose, and rapid pace of change. You're also dealing with lower budgets and lower expectations when the goal is purely to publish papers instead of creating actual, functioning devices that are not only workable, but eventually useful (which is a pre-requisite to being profitable in the long term).

There’s some truth to what you say but I think understanding that truth requires a lot of context and nuance, which I don’t think is easy to do here on a comment forum. On the flip side, it’s not all roses in the quantum industry. There’s more pressure (implicitly and often explicitly) to demo, dazzle, and deliver on “innovation”. Mass layoffs, re-orgs, change of management, hyper-pressurized funding rounds, investor and board pressure, CEO “realignment”, etc. all happen in this sector of the quantum biz. Entire departments—even whole companies—of scientists are funded to work with “industry partners” to solve “industrial problems”, problems for which there’s no published evidence of advantage. A scientist in such a department squeezes out a paper that’s of little, and dare I say completely inconsequential value from time to time.

> To add to this as a former employee of a different quantum company that provides cloud services, there was absolutely no funny business about faking results. If you asked for quantum computer results, you got them, and it’s painfully obvious too.

As someone totally unfamiliar with this world, I'm wondering why it's painfully obvious? Slow?

The results look terrible and incredibly noisy. Throw more than a handful of instructions at any quantum computer that’s publicly accessible today, and you’ll get noisy mumbo jumbo.

The noise characteristics are pretty signature-like. It’d be an engineering effort unto itself to produce realistic-looking noise models and simulations.

Sometimes the line can get a bit blurry. For example, suppose that as a convenience feature a service will automatically decompose sequences of unsupported gates into supported gates. And suppose the user happened to be providing sequences of gates equal to the identity (e.g. as part of doing randomized benchmarking to determine how noisy the processor is). Then the optimization pass would happily replace every single test, all of which are made up of equivalent-to-identity sequence, with the empty sequence. Fortunately it's kind of blatantly obvious when something like that happens, because the benchmarking results end up making no sense.
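A toy sketch of that failure mode, assuming nothing about any real vendor's compiler: an optimizer that cancels adjacent self-inverse gates will happily reduce an equals-identity benchmarking sequence to nothing. The gate names and the single cancellation rule here are purely illustrative.

```python
def cancel_self_inverse(circuit):
    """Repeatedly remove adjacent identical self-inverse gates (X, Y, Z, H)."""
    self_inverse = {"X", "Y", "Z", "H"}
    out = []
    for gate in circuit:
        if out and gate in self_inverse and out[-1] == gate:
            out.pop()          # G followed by G equals the identity: drop both
        else:
            out.append(gate)
    return out

# A benchmarking-style sequence that equals the identity overall...
rb_sequence = ["X", "H", "H", "X", "Z", "Z"]
# ...vanishes entirely, silently removing the very noise being measured.
print(cancel_self_inverse(rb_sequence))  # -> []
```

A real transpiler has many more rewrite rules than this, which only makes the pitfall easier to trigger.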

Optimizers may also change which operations are occurring at the same time, or insert spin echo, or add other unexpected error mitigations. All of these things can make the circuit work better, so on the one hand they're nice wins. But on the other hand they can violate user expectations, and make debugging and modelling nightmares.

That's fine, but until the results outshine the conventional solutions there is no way for an outsider to tell how they were obtained.

Even this is becoming less true with access to deeper levels of various vendors’ stacks. It’s possible to actually do pulse-level experiments on various platforms, where the results will match theoretical predictions. Again, to fake that has no benefit to anyone and in fact would take an enormous amount of work to create a time-domain solver. For just a handful of qubits, it’s not even feasible, at least to do it accurately.

There is no enormous amount of work needed to create a time-domain solver. Libraries like `QuantumOptics.jl`, and analogous ones in MATLAB, Python, and Mathematica, let you define the Hamiltonian of an idealized system and solve it. For 16 qubits the matrix dimension is 2^16 = 65536, which can be solved very quickly on a local machine. Furthermore, the Hamiltonian matrix is sparse, enabling further optimizations.
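For a sense of scale, here is a minimal sketch in plain Python with NumPy/SciPy (QuantumOptics.jl, QuTiP, and friends wrap the same idea with far more physics built in): exact time evolution of a small idealized transverse-field Ising system using sparse matrices. The model and parameters are arbitrary choices for illustration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

n = 10  # qubits; the state vector has 2**10 = 1024 amplitudes

I = sp.identity(2, format="csr", dtype=complex)
X = sp.csr_matrix([[0, 1], [1, 0]], dtype=complex)
Z = sp.csr_matrix([[1, 0], [0, -1]], dtype=complex)

def embed(op, site):
    """Place a single-qubit operator at `site` in an n-qubit tensor product."""
    mats = [op if k == site else I for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = sp.kron(out, m, format="csr")
    return out

# Transverse-field Ising Hamiltonian: H = -sum_i Z_i Z_{i+1} - sum_i X_i
H = sp.csr_matrix((2**n, 2**n), dtype=complex)
for i in range(n - 1):
    H = H - embed(Z, i) @ embed(Z, i + 1)
for i in range(n):
    H = H - embed(X, i)

psi0 = np.zeros(2**n, dtype=complex)
psi0[0] = 1.0  # |00...0>

# Idealized (noiseless) Schrodinger evolution: psi(t) = exp(-i H t) psi(0)
psi_t = expm_multiply(-1j * H * 0.5, psi0)
print(abs(np.vdot(psi_t, psi_t)))  # norm is preserved by unitary evolution
```

This is exactly the "idealized system" case; the parent comments' dispute is about how much extra work a realistic noise model on top of this would be.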

At the moment a state-of-the-art supercomputer can simulate about 47 qubits, and each qubit you drop roughly halves the necessary resources. So if by "handful" you meant on the order of 30, then yes: only a handful of qubits.
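The back-of-envelope behind those numbers, assuming 16 bytes per complex double amplitude:

```python
def state_vector_gib(n):
    """Memory (GiB) for a full n-qubit state vector: 2**n complex doubles."""
    return 2**n * 16 / 2**30

for n in (30, 40, 46, 47):
    print(n, state_vector_gib(n), "GiB")
# 30 qubits fit in a laptop's RAM (16 GiB); 47 needs roughly 2 PiB,
# and every additional qubit doubles the requirement.
```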

A supercomputer cannot simulate a realistic noise model of 30-40 qubits. It can simulate stochastic Pauli channel noise, but that’s a toy, and it would be ungodly slow anyway.

A density operator representing a mixed state of n qubits grows as 2^(n^2), or a system with leakage into the second and third excited states grows as 4^(n^2).

For just 4 qubits, this would be 4 billion complex numbers, so 8 billion floating-point numbers.

Even this isn’t a time-domain solution, where you might in practice need to solve the Lindblad master equation.

This is all assuming we have a model for the noise, which in practice is highly non-trivial and very dependent on both the implementation of the qubits as well as their geometrical and material construction, and would take a good deal of science and engineering work to do accurately in a way that it reflects both control dynamics and a specific manufactured sample.

Yes there are solvers on the market in Python and Julia, but they don’t give you realistic noise computations for “free”.

I had never seen the scaling 2^(n^2) before. Can you provide a reference for that?

I also don't understand the point of simulating the noise correctly. If you get a result from a computation on some vendor's black-box quantum computer, can you really know what type of noise you obtained the result with?

Another source of skepticism about the hype is that the result of a quantum computation is just a product of predefined gate matrices applied to an initial state. It is just linear algebra on large unitary matrices. Nothing quantum about it.
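Concretely, the linear algebra the commenter describes fits in a few lines: a Hadamard followed by a CNOT on |00> yields a Bell state. (The catch, of course, is that the vectors and matrices grow as 2^n.)

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

psi = np.array([1, 0, 0, 0], dtype=complex)    # |00>
psi = CNOT @ np.kron(H, I2) @ psi              # H on the first qubit, then CNOT
print(np.round(psi.real, 3))                   # Bell-state amplitudes
```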

They're talking about propagating a density matrix instead of a state vector. If you're not familiar, a density matrix is an n x n matrix where n is the number of elements in the corresponding state vector, and contains information akin to a probability distribution over state vectors.

However, actual quantum computers propagate state vectors, not density matrices, as far as I know. You'd need to run the computation a large number of times, or have a large ensemble of identical sets of qubits being propagated with the same algorithm, before you'd need a density matrix to describe it, I would have thought. So saying you'd need to simulate a density matrix to simulate a quantum computer confuses me.

I know quantum physics but not quantum computing specifically, so I could be wrong - the grandparent comment sounds like they know what they're talking about otherwise.
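For anyone following along, the pure-versus-mixed distinction the thread is circling can be made concrete in a few lines of NumPy: the purity tr(rho^2) separates a state that a single state vector can describe from one that only a density matrix can.

```python
import numpy as np

# Pure state |+> = (|0> + |1>)/sqrt(2): fully described by a state vector.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())         # rho = |psi><psi|

# A 50/50 classical mixture of |0> and |1>: no single state vector
# describes it; only a density matrix can.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
rho_mixed = 0.5 * np.outer(zero, zero.conj()) + 0.5 * np.outer(one, one.conj())

# Purity tr(rho^2): 1 for a pure state, strictly less than 1 for a mixed one.
print(np.trace(rho_pure @ rho_pure).real)      # 1.0
print(np.trace(rho_mixed @ rho_mixed).real)    # 0.5
```

Noise is what turns pure states into mixed ones, which is why noisy simulations reach for density matrices.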

Sounds like Theranos.

Unrelated to the article, but as a young SWE (finished undergrad in December 2020) who’s always had a side-interest in physics, how do I end up working at a QC company like DWave? Right now I’m working in embedded systems programming at a semiconductor manufacturing equipment company, and I assume at a company like DWave that also makes hardware the possible SW roles are somewhat similar (embedded, metrology, monitoring, algorithms, etc.). Do you look for physics knowledge in your SWE applicants?

Apply! We have plenty of job openings. Physics and electrical engineering backgrounds are critical for some teams, but for other teams it's not as important.

This is a wonderful time to be a younger person in the industry; there are tons of job opportunities and every hiring manager is hungry for talented, dedicated people.

That said, we do tend to be hiring for more senior roles, considering the small size of the company. If your goal is to land in QC eventually, make sure to have a really broad skill set, get some experience working on hard projects, and apply every year or so until you find a role in the field. No matter which firm you end up at, it's an exciting industry where you can feel like you're actually helping with a grand human effort to push the state of the art.

Please clarify one question for me. From other sources I've understood that what we today call a quantum computer basically does something random which can't be reasonably reproduced on conventional computers, and because of that we declare that this process was quantum computing. But you can't do specific calculations on them: take some input data and some algorithm, and receive a usable, verifiable result faster than on a conventional computer. Is this in general correct or not?

We're currently doing lattice calculations for a class I'm in, using "big cloud company"'s quantum computers. That eats qubits quickly, so 5000 qubits sounds luxurious!

The organizers don't seem to know specifics about D-Wave when I've asked, but do you think these kinds of simulations will ever be able to run on D-Wave hardware?

How close are you to running programs that have exponential speedups with quantum computing? Wouldn't it be trivial to verify that the algorithm ran on real quantum hardware in that case?
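The verification idea is worth making concrete: for some hoped-for exponential speedups, such as factoring, checking a claimed answer is trivial even when finding it is not. The numbers below are purely illustrative, not output from any real device.

```python
# Hypothetical claimed output from a factoring run; values are made up.
N = 3233
p, q = 53, 61

# Verification is a single multiplication, regardless of how hard
# finding p and q was.
assert p * q == N and 1 < p < N and 1 < q < N
print("factorization verified")
```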

That’s a good mentality to have, working on hard things because they are hard. Have there been any major roadblocks that required a change in direction, or at least a re-evaluation of goals?

If all you can offer is your "personal assurance", then that seems to exactly confirm the author's point.

From my reading the article's point was really to note the risk of fraud. It did not claim that this kind of fraud is actually happening now.

I also think that the article is more nuanced on the topic of brain drain than you make it out to be. Is your argument not just whataboutery? And what do you think of the article's claim that "it may not be a zero-sum game"?

> If all you can offer is your "personal assurance" then that seems to exactly confirms the author's point.

In my post, I literally said: "I would like to know what the article's author would take as proof of this."

What, honestly, would it take? I have been thinking about what evidence I personally would want if I were in your position of incredulity. It's different when you sit next to the things and see the people passionately building them and keeping them working day in and day out, I suppose; I can see with my own eyes that there is no fraud taking place, but I can't exactly bring everyone in the world in for a lab tour.

I am not incredulous and neither, I believe, is the article's author. You are arguing beside the point.

This is not about you or D-wave. Instead it is like there is a shop with so little supervision that every customer has to be trusted to not steal things. So if a customer asks: "What can I do to convince the world that I am not stealing?" then the answer is clear: you either show your shopping bag to everyone in the world, or we need an entirely different kind of shop.

The point of this part of the article is that, for quantum computing, there is an equivalent structural problem. Until it is absolutely manifest that quantum computers do useful things that classical computers cannot, the potential for fraud remains real.

Couldn't you say the same thing about conventional CPUs and their hardware RNGs, before errata are published by the vendor or found by groups doing deep dives into how correct the hardware RNG really is? Or about any other issue in a mainstream CPU?

It isn't like we haven't seen bugs show up in hardware and software pertaining to RNGs or other math-related issues. Why should we not give the same benefit of the doubt to quantum compute services from credible providers? You should know you are in for an "as correct as we currently understand it" system; that has been the case for decades. This all reminds me of when the original Pentium's FDIV defect showed up. Today, however, we've abstracted that away to compute hosting providers, and we have less tech to qualify our results against.

> you either show your shopping bag to everyone in the world

Well, that's what patents are for, right?


The thing is, the hardware doesn’t do anything useful. So you can in theory fake bad results… but that doesn’t seem so dangerous. If it’s bad, there are few people to defraud, except maybe investors that are bad at due diligence.

You can also try to fake good results (or even have truly good results!), and trust me, the scientific community will require unambiguous proof. DWave went through the wringer pretty thoroughly some years ago for their claims.

There’s another angle too: If the service actually does something commercially useful or better, in some sense, it might not matter what the specifics of the implementation are. Ultimately customers are going to look at price and performance and make decisions that way.

It might not matter commercially, but it certainly matters from a scientific POV.

The scientific stakeholders aren’t proving themselves with an opaque public cloud API. They’re writing detailed research papers with data that go into peer reviewed journals. The data is pretty profoundly scrutinized by the community.

If a scientist or company that purportedly does science doesn't do that, they’re not taken seriously by other members of the scientific community. No one is truly believed at face value. I don’t see any significant probability of bamboozling the community of scientists through abject fraud. And there hasn’t been any such issue yet. (There have been retracted published claims, but the retractions happened as a result of scientific scrutiny.)

All I have to say is ... /THREAD!!!

This is also harming other fields. I work as a researcher in a traditional engineering field with a very long tradition and well-established methods.

There is a push to use AI and Quantum such that in order to get funding or publish papers you need to say that you’re applying XYZ AI technique to solve a well known engineering problem.

Because funding agencies want to sell to their investors or government managers that they are in the new hot trend, if you want to get funding money you need to have something related in your proposal. Of course, having previously published papers on the topic helps so that motivates people to send papers on the topic. The journal editors know that the topic is hot so they prioritize papers on this topic as their metrics will increase.

The result is tons of rushed papers saying “Applying XYZ AI technique to well known engineering problem” usually without examining previous research methods or proper benchmarks.

In the end, the only barrier to this is the individual moral standards of each researcher. Unfortunately, careerism usually trumps them.

Sorry if this was too bleak.

> There is a push to use AI and Quantum such that in order to get funding or publish papers you need to say that you’re applying XYZ AI technique to solve a well known engineering problem.

How different is AI from good old fashioned stats again?

It's clearly an extension, but in the same way that computer science is an extension of boolean logic.

AI is what you do when you don't know stats

That this is a post on LinkedIn shouldn’t take away from the important points it is making :-)

As a physicist who knows a little bit about quantum computing, my understanding is that we’re far far away from building usable quantum computers (it’s still at an applied research stage, and nowhere close to “just an engineering/design problem”) — all hype be damned.

I like to draw an analogy to slide rules. A typical slide rule has a precision of about 3-4 significant digits (it gets worse at the higher end of the scale, better at the lower end). To get another digit out of it, you need one ten times as long, or a way to make the markings ten times more accurate. So effectively you have an intractable problem trying to build a slide rule that is precise to a large number of significant digits.

Is the issue with quantum computers somewhat similar? I know next to nothing about the mechanical aspects of them, but based on what I've read it is considered a breakthrough whenever another qubit is added.

As a historical note, this is a primary reason why digital computers replaced analog computers. If you want another digit of accuracy out of an analog computer, you need components that are 10 times as accurate, requiring expensive precision resistors and capacitors. But if you want another digit of accuracy out of a digital computer, you just process four more bits and you can still use cheap, inaccurate components.

While analog computers are almost entirely forgotten now, they were widely used even into the 1970s. They could solve differential equations almost instantaneously, while digital computers needed to chug through calculations before producing an answer. But digital computers steadily became faster until they could generate answers in real time, while being more accurate and easier to program.

This is a pretty good analogy for the state of quantum computers right now!

One difference is that (in principle) it is possible to do quantum error correction. Essentially this turns a number of imperfect "physical" qubits into a perfect "logical" qubit. However, this requires extremely low error rates of the physical qubits to begin with and creates a lot of overhead. All existing quantum computers are much too small and noisy to implement quantum error correction except for some proof-of-principle experiments. I am somewhat pessimistic that any of the current technologies can be improved enough to make it possible in practice.
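The physical-to-logical idea can be illustrated with the classical analogue of the quantum bit-flip code, a 3-bit repetition code. (Real quantum codes must do this without directly measuring the data, which is where the difficulty and the overhead come from.)

```python
def encode(bit):
    """Store one logical bit as three physical bits."""
    return [bit, bit, bit]

def correct(bits):
    """Majority vote recovers the logical bit despite any single flip."""
    return max(set(bits), key=bits.count)

codeword = encode(1)
codeword[0] ^= 1            # a single physical-bit error
print(correct(codeword))    # the logical bit survives: 1
```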

It’s certainly not a breakthrough when another qubit is added. It’s currently typically a breakthrough when another 9 to the fidelity is added, or a 0 to the qubit lifetime is added.

Quantum computers are like that until you get to 1%-0.1% error rates, at which point error correction becomes possible [1].

1: https://en.wikipedia.org/wiki/Quantum_threshold_theorem

I think it's much too early to tell. We understand slide rules really well. We are still learning what the best ways to build quantum computers are, and how different methods scale.

So far the empirical evidence is pretty unanimous: None of the methods scale.

I like to tell people that actual, programmable quantum computers do exist (which is a very important point—they’re not vaporware), but exactly like you say, in order to make them useful and scalable, more “actual science” needs to happen.

They really don’t though. We have some things that if you squint a little, and are willing to stretch the words, look like a quantum computer.

They really do, and there are mountains of published experiments unambiguously verifying such.

They "exist" in the same sense that a hydraulic computer exists because I plumbed 5 valves together at my house.

they do and you can get time on one (5 qubits) right now


if you don't think these are computers then you just don't know what a computer really is


they're not useful at all but they're still real actual unadulterated computers.

By this logic a 74LS138 is a “digital computer”.

It’s limited because it has only 3 bits, but if you play with the input bits, the output bits change!

you're trying to say that because a decoder maps n -> 2^n, it's comparable to a QC. lol. my friend, you clearly don't understand interference and entanglement.

btw the circuits in quantum circuits clearly aren't just combinational since they evolve in time.

No I’m not, all I’m saying is that by OPs logic, I can claim to have a computer with only a part of it.

“It’s not very useful but it can make computations” is a very low bar to pass, and very basic discrete (classical) logic can clear that without problems.

you do realize that ENIAC, i.e. the computer that arguably helped the US win WW2, only had about 15 bits, right?

please note the difference between a 'logical qubit' and a 'physical qubit'.. currently they don't have even 1 logical qubit, and for a quantum computer to be of any use it should have >10k logical qubits...

>currently they don't have even 1 logical qubit

wut? different QEC codes produce differently sized logical qubits, and there are absolutely machines with enough physical qubits to amount to a logical qubit:


and realized QEC is definitely not far off


>for quantum computer to be of any use it should have >10k logical qubits

i'm aware and yet it's false to claim that these things don't compute. for the time being they're noisy computers but they're still computers.

My classmates and I were running code on a real quantum computer literally earlier today.

10 qubits is enough to simulate a pair of particles interacting already.

You can access a quantum computer - to be precise, a quantum _annealer_ right now, for free, via D-Wave Leap. It may not be gate model, but it does compute using quantum effects, and it is useful for optimization problems, materials research, and other applications.

No squinting required.
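For the curious, the kind of objective an annealer minimizes is easy to state classically: an Ising energy E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j over spins s_i = +/-1. The sketch below brute-forces a toy instance with made-up coefficients; the annealer's job is to find low-energy states when n is far too large for that.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                                   # small enough to brute-force
h = rng.normal(size=n)                   # local fields (made up)
J = np.triu(rng.normal(size=(n, n)), 1)  # couplings, upper triangle (made up)

def energy(s):
    """Ising energy E(s) = h.s + s.J.s for spins s_i = +/-1."""
    return h @ s + s @ J @ s

# Exhaustively check all 2**12 = 4096 spin configurations.
best = min(
    (np.array([1 if (k >> i) & 1 else -1 for i in range(n)]) for k in range(2**n)),
    key=energy,
)
print(energy(best))  # ground-state energy of this toy instance
```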

Except there’s a lot of people (myself included) who don’t consider a quantum annealer to be a quantum computer.

There has been also very little if any actual research in other fields powered with quantum computing.

We can keep moving the goal posts and claim that we have made it, but the fact is that QC keeps overpromising and under delivering.

> Except there’s a lot of people (myself included) who don’t consider a quantum annealer to be a quantum computer.

Well, great; that's an opinion. The thing is, if a device uses the quantum-mechanical properties of the universe to do calculation, then it is a quantum computer. Asserting that it isn't one is a matter of semantics and categorization: what you're really doing is redefining the term "quantum computer" to inherently include "gate model", which is not a foregone conclusion yet.

I believe, from what I've read, that at this point current and projected gate-model quantum computers will not be competitive on optimization problems where quantum annealing will be, so there is definitely utility in continuing to pursue this research and development exercise.

> There has been also very little if any actual research in other fields powered with quantum computing.

Here's our latest:


As a physics playground, annealers are very compelling, and materials research will definitely benefit from these devices.

> The thing is, if a device uses the quantum-mechanical properties of the universe to do calculation, then it is a quantum computer;

Except this definition includes classical computers as well. At the scales of current transistors, quantum effects are required to explain their inner workings, and they are used to perform computations.

this is misleading. we use quantum mechanics to explain transistors, but classical computers don't exploit quantum phenomena such as coherence, tunnelling, or entanglement.

Is there some fair comparison of D-Wave annealer vs classical methods on optimization problems? I remember seeing papers where it was compared to some naive methods or the runtime of D-Wave approximation algorithm was compared to the runtime of classical exact algorithm -- obviously apples vs oranges.

My feeling is there's as much anti-hype as there is hype these days. I'll continue with the maxim "progress is always slower than you think in the short term, and faster than you think in the long term."

The quantum computing industry definitely had a lot more hype than anti-hype. There is a very small minority of scientists who actually speak up against false or misleading claims, but a majority are either silent (why gratuitously jeopardize your own career or your funding avenues?) or amplify the hype (because their newfound startup depends on lay investors being excited for any reason so they’ll continue to put tens of millions of dollars in).

(To be clear, there’s a ton to be excited about in quantum computing, and there are truly legitimate careers to be had both as a scientist and as an engineer. But what’s exciting currently isn’t very marketable or fashionable!)

Hmm, I guess I stand corrected. Going through these threads, I had no idea that there was so much hype about it. I thought everyone knew anything useful was still a long way off.

Dumb investors losing money because they buy into hype without doing their homework (i.e. their job) is a good thing. It's a form of Darwinism. Happens all the time. So, let nature take its course and all will be fine. This is just the normal hype cycle playing out.

Science is about asking questions and forming hypotheses to answer those questions and trying to falsify these. An influx of money is good for that process as the worst case here is that it will pile up a lot of documented falsification and thus lead to better questions. Which is still a good outcome. Once you have people asking the right questions and forming better hypotheses, everybody wins.

Just because there are a lot of bad quantum computing startups doesn't mean that there aren't some more serious ones that are actually making progress. We've seen the same with all the smoke and mirrors AI BS coming out of silicon valley. Lots of glorified if .. else .. logic that gets peddled as deep learning. But in between all the BS, there are a few companies actually doing cool stuff and making some genuine progress.

I studied a "hype" field a few years back (Machine Learning and its application to Brain-Computer Interfaces), and can't help but disagree with you.

The AI hype didn't lead to documented falsification or better questions, but a mountain of bullshit literature that was designed to get grants and satisfy the "publish or perish" imperative.

If you have an hour to spare and are interested in what quantum computers might be good for in the long run, I recommend this interview(podcast, Sean Carrol’s Mindscape):


The hype is bad in its own right, but it's a symptom of how science is being funded and rewarded, which is a much bigger problem.

Quantum computing deserves hype, but I'd like to see the stupid hype die down.

I've heard claims that quantum computers "connect to alternate timeline versions of themselves" and would allow us to communicate with people from parallel universes. I've heard that they'll let you bypass traditional cryptography with such ease that you could steal all of the bitcoins in circulation in an afternoon. I've heard that it could guarantee a lottery win with only 100 picks.

A bunch of high-concept nonsense that is simply not what Quantum computing is going to enable.

The AI winter didn't seem to hurt AI


The AI field pre-winter has been obliterated by the winter. What is now called AI are the parts of the field that got less attention and funding back in the day, mainly prediction by statistical analysis of past data.

The AI winter of the 1980s destroyed quite a few careers. Many of the TAs when I was an undergrad graduated into that winter.

It destroyed many people's careers, but it didn't destroy AI. There's a difference.

It destroyed a particular vision of AI - that of Knowledge Engineering.

I'd say "not so fast".

AI is very useful, even if the pioneers had to give up on the dream of human-level or human-like intelligence. There is quantum cryptography, but beyond that I find it hard to see practical value for quantum computers before they reach the point, maybe around 80 qubits fit for general computation, at which they start to be faster than classical computers. Faster for solving meaningful problems, that is, not contrived problems designed for the quantum computer to solve quickly.

From talks I have watched, China is dumping money into quantum computing because Jian-Wei Pan and his team are brilliant.

I don't think China has the same problems we do that people are worried about "hype" and all this nonsense.

Jian-Wei Pan said they want to use their photon QC device and boson sampling to solve graph theory problems.

I don't understand boson sampling enough to know if it makes sense for graph theory but obviously they have some ideas in mind. Quantum graph theory, quantum network science ideas I suppose.

The video I watched also has 600 views and the only comment is if you can use a photon quantum device to mine bitcoin.

That is a cultural problem we have, not a problem with science that humans have.

All hype is "bad" for science. Hype implies emotional investment or faith in an expectation, and science is specifically about challenging expectations (i.e. hypotheses) via practical experimentation.

There is plenty of experimental research, and early practical results, being achieved in quantum computing. There is also lots of snake oil being peddled by sleazy entrepreneurs. This is true for all developing fields.

Is it though?

Hype causes people who are not familiar with or well informed about the matter to get into it, hoping for big returns. Of course, they'll come out severely disappointed, but science and technology as a whole will have advanced thanks to their efforts.

Going blind into something with a lot of hype is often an "ice-breaker" for humanity into new areas of study.

The article talks about this in good detail and why hype leads to:

1. Brain drain of talent

2. Ponzi schemes

3. Damage to the reputation of science

The Science industry is built on using buzzwords to get research $$$. Probably the same with most industries that reach a certain institutional scale. Institutional involvement leads to politics which leads to buzzwords & right-speak.

Another case of "for the love of money is the root of all kinds of evil".

> hype is bad

> linkedin.com

*cough* machine learning *cough*

There's absolutely no comparing the two fields.

ML already has a massive impact on industry and society as a whole. The future of many careers will forever be altered even by current ML application, let alone future developments.

From automated face recognition, to customer service, job interviews, risk assessment and protein folding, ML has become part of our daily lives already to varying degrees (of both impact and success).

It's a field that won't go away and will only grow and probably change quite a bit in the next decades. Admittedly we're far from a Butlerian Jihad-situation, but there's no denying that ML is much more than just hype.

AGI, now that's a different story.

interviews? Oo

cv scanning?

*sneeze* AlphaFold2 *sneeze*

Things that are always 10 years away:

[1] Those aeroplanes that can fly from London to Australia in 2 hours

[2] Cold Fusion

[3] Quantum computing

Cold fusion is still a hypothesis. As for that airplane ride, I dunno who was selling you that. What kind of fuel would the jet even use?

Quantum Computing though is already here, it's just not practical for much outside of a lab setting.

QC is here in theory, but it is not practical for anything - the experiments so far were only intended to prove that the thing was an actual quantum computer in the complexity theory sense. But it is impossible to actually use Google's device or the one in China to compute anything at all, even something like factoring 4 using Shor's algorithm is beyond the current capabilities.

There is perhaps some more debate about D-Wave's device, both its status as a QC and its usefulness.

The distance from London to Sydney is 17,000 km. The current flight airspeed record is 3,500 km/h. I would doubt that anything close to 8,500 km/h is physically possible without using rockets. That's the speed of the fastest missile.

The X-15 has recorded airspeeds of 7,274 km/h. I'm no airspeed record expert, but I did some reading.

The X-15 was a rocket.

Just because it has a rocket engine, that doesn't mean it's a "rocket", it's a rocket-propelled aircraft.

I misspoke. 4.5 hours:

"U.K.-based Reaction Engines is developing technology for Synergetic Air-Breathing Rocket Engines (SABRE), which could one day allow aircraft to fly up to five times faster than the speed of sound — that’s Mach 5 or 3,836 miles per hour.

At that speed, hypersonic flights between London and Australia could be over in just four-and-a-half hours."


Exactly. That would be the point.

But I would claim that nobody has ever said "flight times from London to Sydney will go down to 2-3 hours in 10 years". So the above example does not really make sense.

Oh look: https://www.independent.co.uk/travel/news-and-advice/flights...


I really don't understand your list. Are you trying to lump QCs in with pseudo-science gobbledygook like cold fusion or Musk's surface-to-surface Starship rides?

Is anyone (not selling snake oil) claiming that large-scale QC is actually only 10 years away? I certainly haven't heard that.

I work in this field, and I would say anything between 20% and 70% of my colleagues believe that useful error-corrected qubits will exist in less than 10 years (depending on what exactly you ask). So I guess the answer is yes, there are respected scientists who would wager that we will have enough useful qubits for interesting chemistry simulations in 10 years.

Google just announced they plan to have a million qubits in less than 10 years. They’ve not yet demonstrated the ability to go past 100.

Well, Google's CEO also caused a furore by stating that in 10 years quantum computers would break currently used encryption. But guess what? In 10 years Sundar won't be with the company anyway ;)

Quantum computers don't exist; qubits as they exist today are simply sources of entropy. So every time someone does a big fanfare announcement of this many qubit "computer" I chuckle a bit; ok, you got a bigger "random number generator", cool cool cool :)

If you downvote, please also include a link to something that proves a quantum computer exists (outside of theoretical papers); I'm genuinely interested in being proven wrong.

Quantum computers do more than produce random numbers. They follow predictable statistics predicted by quantum mechanics. These computers also run programs.

Maybe your definition of “quantum computer” doesn’t agree with the field at large. What’s your definition?

What do you think about Google’s supremacy experiment? Do you have objections to their results? [0]

This is one of many papers by Google, IBM, Rigetti, and many other quantum computer manufacturers.

[0] https://www.nature.com/articles/s41586-019-1666-5

Just to be pedantic, that experiment literally generates certifiable random numbers; it is also unclear whether they have a real physical device; it appears their "Sycamore" design is theoretical and actually simulated on the Jülich (classical) supercomputer.

When I said they do more than "produce random numbers", I meant that they do more than produce output that is out of their control (i.e., due to noise). By the physics of a quantum computer, at their very foundation, they're random number generators. What you program on a quantum computer is, more or less, the shape of the distribution from which they sample.

A linear congruential generator from Knuth programmed on a classical computer produces controlled pseudorandom numbers. So what? Whether a LCG or a program to produce controlled samples from a goofy Porter-Thomas distribution, they’re both coming from machines that were programmed to do a job. If the machine was neither a computer nor programmable, then the job could not be done.
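To make the comparison concrete, here is a minimal LCG sketch (using Knuth's MMIX constants, which is my assumption about the variant meant above). It shows the "controlled" part of controlled pseudorandomness: the same seed always reproduces the same stream.

```python
def lcg(seed, a=6364136223846793005, c=1442695040888963407, m=2**64):
    """Linear congruential generator with Knuth's MMIX constants.

    Deterministic: the same seed always yields the same stream,
    which is the sense in which a programmed machine "controls"
    its random numbers.
    """
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen = lcg(seed=42)
samples = [next(gen) for _ in range(3)]

# Re-seeding reproduces the identical stream -- controlled pseudorandomness.
gen2 = lcg(seed=42)
assert samples == [next(gen2) for _ in range(3)]
```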

You haven’t refuted the point of the published existence of a computer. The paper includes both the results of a program running on a quantum computer, and a comparison of the results as simulated by a classical computer, the latter taking several orders of magnitude longer to complete at several orders of magnitude greater cost.

This is a pretty weak argument. The default state of matter is entropy: if "quantum supremacy" entails being less predictable than a traditional computer, of course it will win. But it's also competing with a vast amount of pseudorandom values that can be polled at will to create a more plausible, usable random number. If this is actually the current state of QC then consider me a little scared.

It’s not competing on being less predictable. It’s competing on performing a calculation better or more accurately via a program written for the quantum computer (as opposed to a specialized machine for that singular task). The quantum computer was able to do that.

The randomness is a total red herring and doesn’t contribute to the discussion as to whether a programmable computer performed a computation or not. It did, and it was verified as such.

Google’s experiment isn’t “the state of quantum computing”. It’s a single scientific experiment among thousands. The particular experiment was for demonstrating a then hitherto unconfirmed claim about whether a quantum computer can do a computer science problem more efficiently. The theory already said it was true, but the experiment wasn’t yet demonstrated.

It’s also an experiment lay people, and the HN tech crowd even, doesn’t care about. Because it’s deep and complicated science, not a sales pitch.

They've factored 15 into 5 × 3. It's a real computer, even if it's too small to do anything useful. https://arxiv.org/abs/quant-ph/0112176
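For context, the only quantum part of Shor's algorithm is the period finding; the rest is classical number theory. A sketch of that classical reduction, with the period found by brute force instead of a QPU:

```python
from math import gcd

def order(a, n):
    """Brute-force period finding: smallest r with a^r ≡ 1 (mod n).
    This is the one step Shor's algorithm delegates to the quantum
    computer; it is done classically here for illustration only."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical post-processing of Shor's reduction from factoring
    to order finding. Returns a nontrivial factor of n, or None."""
    if gcd(a, n) != 1:
        return gcd(a, n)          # the guess already shares a factor
    r = order(a, n)
    if r % 2 == 1:
        return None               # odd period: pick another a
    f = gcd(pow(a, r // 2) - 1, n)
    return f if f not in (1, n) else None

print(shor_classical(15, 7))  # 7^4 ≡ 1 (mod 15), so gcd(7^2 - 1, 15) = 3
```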

Thank you for including a link to that experiment, it's pretty cool. My naive definition of a quantum computer would be a general purpose machine that can execute quantum programs on controlled inputs and produce valid outputs (expanded from the definition of a classical computer: "a programmable electronic device designed to accept data, perform prescribed mathematical and logical operations at high speed, and display the results of these operations.").

I'm sitting thirty feet away from one, friend. It exists. The pulse tube cooler is making a comforting squelchy sound. There is more entropy in your post.

If there's more entropy in their post, then we may have circumvented quantum supremacy after all!

So, what does it actually do besides squelching and producing random numbers?

You'd be better served by looking at our introductory content. It can solve optimization problems, and while the hardware graph is still not billions of qubits, our hybrid systems enable you to solve much bigger problems while still heavily involving the QPU in the work.

Formulating problems into the Ising model is still a difficult task, further into the realm of mathematics than the average HN poster usually goes, but it is indeed proving useful for real-world applications, e.g. with logistics and optimizing the order things are done in.
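To make "formulating problems into the Ising model" concrete, here is a toy max-cut instance written as an Ising energy and minimized by brute force. The 4-node graph is invented for illustration; a real annealer's job is problems far beyond this exhaustive loop.

```python
from itertools import product

# Max-cut on a 4-node cycle as an Ising problem: each spin s_i ∈ {-1, +1}
# assigns node i to one side of the cut, and an edge (i, j) is cut when
# s_i != s_j. Minimizing E = sum of J_ij * s_i * s_j with J_ij = +1
# maximizes the number of cut edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

def energy(spins):
    return sum(spins[i] * spins[j] for i, j in edges)

# Brute force over all 2^4 spin configurations (the annealer's job at scale).
best = min(product([-1, 1], repeat=4), key=energy)
cut_edges = sum(1 for i, j in edges if best[i] != best[j])
print(best, energy(best), cut_edges)  # alternating spins cut all 4 edges
```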

I am sorry, but nothing about your posts gives me any reassurance, from your username (what's next, John-Facebook and Louis-Microsoft? There's an idea for you budding SF writers: usual surnames replaced by corporation names) to the condescension with which you treat outsiders in the comments.

The article makes lots of very good points. QC is in a hype stage. That does not mean there is no valid research or no actual physical machines doing "computing" with a "small" number of qubits; it means that the distance between what is promised now, explicitly or implicitly, and what can realistically be achieved over the next 20 years is so huge that it can easily be considered a lie.

Current QCs are only good for one thing and one thing only: "Simulating" quantum systems. But in that same sense, a wind tunnel is a fantastic "fluid-dynamics" computer providing a "realistic simulation" of how air flows over a wing. Nobody is solving any actual optimization problems with a QC in any meaningful sense, and there is not even a single problem in OR that can currently be solved only by using a QC.

BTW, the company you work for has a long history of making grandiose claims which are later totally refuted, forcing you to backtrack, as anyone who has read Scott Aaronson's blog knows.

> Current QCs are only good for one thing and one thing only: "Simulating" quantum systems.

That's only true of gate-model machines; for example, Google's recent claims of supremacy seem to boil down to something like that. Premature, to say the least, though the work by their team is nonetheless impressive. The pressure to publish is very real! Investors (even when it's a company-internal effort like Google's) still expect some kind of progress even if ROI is not yet being delivered.

Quantum annealers are already solving optimization problems well. One of the goals here is to democratize access to high-quality, high-performance optimization capabilities.

> Nobody is solving any actual optimization problems with a QC in any meaningful sense

Not quite true. Check out what we did in Lisbon:


Clearly these are early days, but the way detractors talk about this effort is perhaps best served by a metaphor. Imagine you were interested in Babbage's or Turing's work early on, but your question for them was, "but how will I use this device to get dinner delivered?"

It's easy for us with a hundred years of progress to see how a connected world enables those kinds of questions to have very robust answers, but it would have been very difficult to see that when you're standing in front of an Analytical Engine full of inscrutable steampunk complexity. If the applications we enjoy so much on the modern Internet were the actual end goal of the technology, it would have been considered unfit for purpose for a very long time indeed! Intermediate, small-scale applications are what paved the way forward, and so too they will be what enables quantum computing research to continue for the likely decades that it will take before it is integrated into apps that consumers use on a day to day basis.

That said, the sort of applications that we talk about around here are often in the realms of science-fiction - imagine the kinds of scheduling and coordination problems that people and machines find difficult today being solved cheaply using open-source software and widely available cloud QPU capabilities. Imagine traffic routing where every single person's destination is calculated together as a tremendous optimization problem in near-realtime to maximize the throughput of a city's streets, or where large-scale resource distribution problems can be solved in maximally equitable ways to help deal with the kinds of challenges we all know are looming on the horizon.

> the company you work for has a long history of making grandiose claims

The company I work for has a long history of delivering tremendous technological advancements, from fabrication to algorithms to production-grade online systems that anyone can access for free. We haven't let Aaronson's opinion stop us from working on that.

The hype around QC is going to be essential in getting more people interested. If we can encourage people to learn the concepts of entropy, state and other fundamentals while their minds are still plastic, I say go for it!

This article has a very elitist tone, which I can mostly forgive because of the subject matter. Hell, I even understand where they're coming from with regard to how poorly AI was marketed and integrated into our modern workflows. However, I think the conclusion you're reaching for is that "transparency matters", which is true (albeit not particularly profound). The best solution I can imagine is ensuring that the next generation of programmers has access to quantum runtimes.

Why do more people need to be interested?

Why do we need to ensure access to quantum computers to programmers?

Seriously, any programmer can fire up a quantum simulator for any number of the quantum instruction languages and be more productive than with a real quantum computer.

Because of the hype, we’ve all been led to believe that we are “ready” to program quantum machines and we just need to train more people through boot camps, hackathons, and summer schools. It’s simply not true.

The quantum computers of today are programmable (barely), and the programs do run (though you can only run a dozen or so “statements” before you get junk results), but they’re so wildly bad compared to what you’d expect out of a textbook that you easily conclude “the scientists have work to do”.

Scientists do have more work to do, but it seems like every month there’s a perfectly respected scientist who gets a $15MM series A and starts spouting the same misinforming junk that quantum computing is going to help FedEx with logistics, or steel mills with operations planning. Then they hire a bunch of good academic people, pay them software-engineer salaries, and string them along to help them perpetuate the fundraising machine—not by actually doing science of course—hoping to also have a quantum computer/software/applications/algorithms be built as a by-product.

Money is very attractive to people, especially physicists who frequently find themselves jumping ship for an alternative, higher-paying career. There must be around 100 quantum companies now, most of them startups, and—to my knowledge—zero of them providing anything demonstrated to be a valuable commercial product. Some of them are definitely doing good work here and there, but in the bigger picture, the profit motive—whether shareholder value or venture capital returns—consistently undermines their ability to do research.

QC represents a fresh start for computer scientists to make their mark, much like machine learning was the past decade.

Computer scientists had QC to “make their mark” for the past 30 years, and will continue to have it for the next 30. It’s available irrespective of an industry full of cash-grabbing and misleading marketing.

Why do they need a fresh start? Has the hype machine for machine learning started to come apart?

So with all that being said, your solution is to ostracise more people? That checks out.

I didn’t propose a solution. But right now, as it stands, money is motivating the perpetuation of misinformation. I’m OK with that ending.

Who is ostracizing anyone? The only solution I see suggested is to take a quick look at the teeth of the horse you are being sold.

There are many more decades of research to be done before there will be any kind of need for quantum computation programmers. Right now the field needs quantum physicists and computer science mathematicians to actually develop the physical computers and basic operation concepts, not programmers per se.

There is also a good chance that QC will remain a small niche in the computing landscape even with fully functional QCs, similarly to DSP programming or hardware or real-time code. QC algorithms have classical parts that run on classical computers; very little of the actual logic of the program is related to quantum effects, even for something like Shor's algorithm.

I think DSP programming is a good analogy. There will be a handful of codecs/waveforms/algorithms that you treat as a black box that you load into the QC, and then the other 99.9% of your system will be classical.
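A sketch of that split, with the quantum part reduced to a stubbed sampler. The names here (`sample_qpu`, the Gaussian stub, the cost function) are all hypothetical; a real hybrid system would dispatch a circuit or an anneal to actual hardware at that one call site.

```python
import random

def sample_qpu(params, shots=100):
    """Stand-in for the quantum black box. In a real hybrid algorithm
    this call would send a circuit or anneal to the QPU and return
    measurement samples; here it is stubbed with classical noise."""
    rng = random.Random(0)  # deterministic stub for illustration
    return [rng.gauss(params["theta"], 1.0) for _ in range(shots)]

def cost(samples, target=0.0):
    # Classical post-processing of the samples, as in any hybrid scheme.
    return sum((s - target) ** 2 for s in samples) / len(samples)

# The "other 99.9%": an ordinary classical optimization loop that treats
# the quantum sampler as one black-box subroutine among many.
params = {"theta": 3.0}
initial = cost(sample_qpu(params))
for _ in range(50):
    base = cost(sample_qpu(params))
    bumped = cost(sample_qpu({"theta": params["theta"] + 0.01}))
    params["theta"] -= 0.1 * (bumped - base) / 0.01  # finite-difference step
final = cost(sample_qpu(params))
```

The point of the sketch is structural: everything except one function call is plain classical code, which is why the DSP "black box codec" analogy fits.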

Interested in what, precisely? Why do you believe usable quantum computers will exist in those people's lifetimes?
