Google, D-Wave, and the case of the factor-10^8 speedup (scottaaronson.com)
312 points by apsec112 | 110 comments



I found this remark from the linked pdf[1] very interesting:

"...it has also been shown that for the class of problems used in initial benchmarks,the performance of the devices also correlates well with a classical(meanfield) approximation of quantum annealing [4],which raises questions about whether the devices can outperform classical optimizers."

In other words, if you ran a simulation of this device on a normal computer and gave it the same problem to solve, you would get basically the same efficiency.

[1] http://www.scottaaronson.com/troyer.pdf


Not only that, the classical simulation of the machine is faster than the real thing, because the simulation simulates an ideal error-free machine, which the real device is not.


The question is: can they still say that when D-Wave reaches, say, 4,000 qubits? What I'm asking is, will classical optimizers and classical computers always be able to be "roughly as fast" as D-Wave for that work, or will D-Wave inevitably surpass them by orders of magnitude in future iterations?


That is the central open question. That's what Scott means when he talks about asymptotics, and the conclusion of the Google paper is that we have suggestive evidence that, with enough qubits and a big enough problem, it will be faster for a very specific problem compared to a non-optimal classical algorithm (we have ones that are for sure better).

This probably sounds like a somewhat useless result (quantum computer beats B-team classical algorithm), but it is in fact interesting because D-Wave's computers are designed to perform quantum annealing and they are comparing it to simulated annealing (the somewhat analogous classical algorithm). However, they only found evidence of a constant speedup (i.e. one that 4,000 qubits wouldn't help with), albeit a large one, compared to a somewhat better algorithm (Quantum Monte Carlo, which is ironically not a quantum algorithm), and they still can't beat an even better classical algorithm (Selby's) at all, even in a way that won't scale.

Scott's central thesis is that although it is possible there could be a turning point past 2,000 qubits where the D-Wave will beat our best classical alternative, none of the data collected so far suggests that. So it's possible that a 4,000-qubit D-Wave machine will exhibit this trend, but there is no evidence of it (yet) from examining a 2,000-qubit machine. Scott's central gripe with D-Wave's approach is that they don't have even a pie-in-the-sky theoretical reason to expect this to happen, and scaling up quantum computers without breaking the entire process is much harder than for classical computers, so making them even bigger doesn't seem like a solution. My personal frustration with D-Wave's marketing is that the fact that their computers are super specialized often gets lost; these computers don't implement the quantum equivalent of a complete set of digital logic gates.


But that's the wrong comparison. The right one would be a special purpose FPGA or ASIC based computer built specifically for the simulated annealing problem.


We mostly care about asymptotic speedups, so that doesn't really make a difference, and using a more specialized classical computer has a similar kind of effect to just using a bigger one. In any case our best algorithms running on mediocre classical hardware can still beat the D-Wave, so we don't really need to stack the deck so much.


> The question is can they still say that when D-Wave reaches say 4,000 qubits?

Yes. See the 4-item list near the end of the post. It's not about the number of qubits. It might be possible to make the D-Wave approach effective. But that's going to involve either being lucky on an open research problem (whether it's even possible to get a quantum speedup using stoquastic Hamiltonians only), or fundamental improvements to their approach. It's not simply a case of scaling up what they already have.


So there are huge algorithmic speedups just sitting there for the taking? And no one has used these? Color me skeptical.


No, they're saying it's not actually faster. The purpose of this research is to experiment with problems and push the hardware to discover limitations that can inform their design of the next generation of hardware.

Quantum computing is a lot like fusion research. Nothing we know how to build will break even but we still need to build something so we can learn how to build something that could break even.


Why?

We have plenty of better energy technologies than the ones we currently use, but we don't implement those, so why bother building an even better technology that we also likely won't deploy?

It's like the dev team that won't refactor but thinks its new design will be perfect and also not need refactoring. The new project invariably needs refactoring which they don't do, leading to another ground up rewrite.

I'm pretty sure by the year 2200, we'll have a coffee cup sized zero point energy device but still get our power from coal.


> We have plenty of better energy technologies than what we have, but we don't implement those,

Well, why don't you do it then? The potential profits are absurd; don't you want to be the world's first trillionaire?

> I'm pretty sure by the year 2200, we'll have a coffee cup sized zero point energy device but still get our power from coal.

You're suggesting that people will choose to consume expensive goods and services over otherwise identical cheaper goods and services? This does not seem to mesh well with observed human behaviour.


I mean I don't really agree with fleitz necessarily, but any power source can be very cheap/free at the consumer/market level when the government is subsidizing all the costs.

The government is not really a rational actor in economic markets and is often guided by other factors like popular vote, corruption, etc.

It is hard to explain why we don't use more nuclear power from an economic standpoint, but it is easy to understand how politicians and lobbyists have capitalized on fear mongering to create our current energy markets.


> It is hard to explain why we don't use more nuclear power from an economic standpoint

It's very easy to explain why we don't use more nuclear power from an economic standpoint.

See, for example:

Government subsidies to the nuclear power industry over the past fifty years have been so large in proportion to the value of the energy produced that in some cases it would have cost taxpayers less to simply buy kilowatts on the open market and give them away, according to a February 2011 report by the Union of Concerned Scientists.

http://www.ucsusa.org/nuclear-power/cost-nuclear-power/nucle...

It's something of an anti-pattern on HN to think that nuclear power is somehow this secret source of cheap energy and it is only those damn anti-science greens who conspire to give it a bad name.

It's not. Nuclear power is expensive.

Some will blame unfair (?) government regulation for that. It is true that regulation adds to the costs, but regulation seems reasonable given the fairly large costs associated with accidents.

Some will claim that the dangers associated with Nuclear power are overstated, and that coal kills more people. I agree that coal should be replaced, but not with nuclear.

Some will claim that only nuclear power can provide base-load capability without greenhouse gases. I disagree, and point to the range of energy storage capabilities available now, the rapidly dropping costs of batteries, and the increasing decentralization of the power grid.


85% of my energy is renewable, and has been in my area since the 60s; there's no nuclear involved at all.


In my country (New Zealand), the number is around 75%; we're lucky to have some good geothermal sources, plus a lot of good rivers suitable for hydro, all serving a fairly small population.

In our nearest neighbour (Australia), with an extremely similar culture and history, it's about 17%; they don't have any good geothermal sites, and few good hydro sites, combined with a significantly larger population.

In the rich world, we've basically tapped every major viable hydro source (and learned a lot about the environmental costs of doing so, and sharply revised our definition of "viable" too). You claim that there's energy tech we're just not using, but hydro is absolutely not an example of that. The difference between your country and Australia is geography, not technology.

If your answer to climate change is "let's all be more like Norway", you may need to rethink your understanding of the problem.


What sources are being used? Hydro? Geothermal?

Unfortunately, in my area only about 23%-30% of the power generation is composed of renewable sources, primarily hydro.


What are the renewable sources?


Hydro; mostly dams, some run-of-river.


I am not an expert, so I ask that you read what's below and let me know how (and where, that's important too) I'm wrong. Honestly, I realize it might be ignorant/controversial/silly, but please help me learn if you know the field/explanation.

Admittedly, I'm making numbers up, but I can really only see a gov't skewing costs by 10-200% before it becomes unsustainable: in a free society, the lower bound; at the higher end, something like Venezuela.

That doesn't seem like the kind of range that would be a practical concern. Suppose we solve AGW, or face a petroleum shortage in the medium term: it's probably going to require a change in emissions and petroleum extraction on a different scale for that to matter.

Governments are inefficient, but ones as inefficient as the orders-of-magnitude difference we're talking about... those we call revolutions! [/dramatic effect]

Apologies for the comedy if that's not your style, but I'm curious to hear your thoughts.


I don't see a free market situation in which we'd ever have a shortage of petroleum... I mean we're well past 'peak oil' and we have the cheapest oil in a decade

Solving AGW doesn't solve the next ice age...


Depends on what you mean by cheap. Oil cost less than half as much in 1998; sure, it's down from the 9x spike, but it's much higher than inflation-adjusted prices from, say, 1940-1970, and it seems unlikely to stay even this low for the next 5 years, as some producers are flooding the market for political reasons.

We are not going to be extracting oil at the current rate for the next 10,000 years, or even the next 200. We could start manufacturing it, because from an energy standpoint refining, transport, and extraction cost so much that oil is basically just a battery at this point. (See: Canada's oil sands.)


That doesn't mean we'll face shortages... Oil will increase in price until we stop using it.


That's a pedantic definition of shortage. Abundance does not mean you can get something if you're willing to pay enough money.


Funny, I don't see any articles in the papers about whale oil shortages...

Did you know it's virtually impossible to get whale oil these days? Do you care? Or do you just use a whale oil substitute?

The ironic thing is increasing price actually increases supply.


Increasing prices did not bring an extinct species back.

ex: https://en.wikipedia.org/wiki/Passenger_pigeon

As to whale oil, the Inuit still have access to whale oil. And there was a clear shortage in 1861, but there were also existing substitutes before modern whaling became so efficient. 'Oil' showed up after the fact; it's only in hindsight that people think 'Oil' showed up in time to save people from the whale oil crash.


Barriers to entry, that said if I had the capital to construct power plants I'd probably invest in other more profitable enterprises.


> You're suggesting that people will choose to consume expense goods and services over otherwise identical cheaper goods and services? This does not seem to mesh well with observed human behaviour.

Well, there's this company called Apple that produces a range of products...


The difference is that even if we had the entire world put all production efforts towards building a quantum computer, we don't know what to build. It's not a matter of cost or resources; it's a matter of not even knowing exactly what to construct.


If I understand "annealing" techniques (1) correctly, they are fast as they combine randomness with hill climbing. So they are not exhaustive, and thus not guaranteed to find the globally best answer.

[1] https://en.wikipedia.org/wiki/Simulated_annealing


Correct, although it is argued that given "enough" time Simulated Annealing will have searched the entire solution space and thus return the global optimum.

Simulated Annealing is a great tool for optimization problems because it is almost trivial to implement: if you have a combinatorial problem on which you can evaluate the objective function (and preferably evaluate changes cheaply), then SA is a good starting point. It is a meta-heuristic that can be applied to almost any problem.
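For anyone who hasn't seen it, something like this toy Python sketch (made-up 1-D objective and cooling schedule, nothing from the paper) is really all the machinery SA needs:

    import math, random

    def energy(x):
        # Toy objective: a bumpy 1-D function we want to minimize.
        return x * x + 10 * math.sin(3 * x)

    x = random.uniform(-10, 10)            # random starting point
    best_x, best_e = x, energy(x)
    T = 10.0                               # "temperature"
    while T > 1e-3:
        candidate = x + random.gauss(0, 1)             # random neighbour move
        delta = energy(candidate) - energy(x)
        # Always accept improvements; accept worse moves with prob exp(-delta/T),
        # which shrinks as the temperature is lowered.
        if delta < 0 or random.random() < math.exp(-delta / T):
            x = candidate
            if energy(x) < best_e:
                best_x, best_e = x, energy(x)
        T *= 0.999                                     # cooling schedule

    print(best_x, best_e)

Swap in your own state representation, neighbour move, and objective, and that's the whole algorithm; everything else is tuning.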

However, Simulated Annealing is almost never the best way. Better methods (more efficient/better results) are often based on either complex meta-heuristics or custom heuristics for approximate results, or on branch and bound or dynamic programming for a guaranteed optimum.

If quantum computing enables us to perform a massive speed-up with a simple SA-like meta-heuristic, then that would make "solving" NP-hard problems a much more common task.

You could just insert your problem into this "black box", and a few seconds later you'll get the result.


So ... more proof that D-Wave is fluff.


Or proof that classical computers and algorithms have decades of optimizations behind them and quantum computers and algorithms only have years.


They're measuring asymptotic performance (scale), not wall clock time, and testing quantum algorithms with known better asymptotic performance than the equivalent classical ones.

They could run the classical algorithms a factor of 1,000,000× slower and reach the same conclusion, that the D-Wave's quantum annealing process is asymptotically equivalent (scales equivalently) to classical algorithms.
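To make the constant-vs-asymptotic distinction concrete, here's a toy illustration with made-up runtime formulas (not real data from the paper): a constant-factor speedup shows up as a runtime ratio that stays flat as the problem grows, while an asymptotic speedup is a ratio that keeps growing.

    # Toy runtimes in arbitrary units: a classical solver taking ~2^n steps,
    # a device that is a constant 1e8 times faster, and a device that is
    # asymptotically faster (~2^(n/2) steps, i.e. a quadratic speedup).
    for n in (20, 40, 60, 80):
        classical = 2.0 ** n
        constant_factor_device = classical / 1e8
        asymptotic_device = 2.0 ** (n / 2)
        print(n,
              classical / constant_factor_device,   # always 1e8, no matter how big n gets
              classical / asymptotic_device)        # 2^(n/2): grows without bound

That flat 1e8 column is what "saturates at a constant factor" means; only the growing column would let bigger problem sizes pay for slower hardware.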


I understand the difference between the two (yay, CS degree! and/or Wikipedia), but from what I could read of the two posts, there is no indication that the asymptotic performance was being measured to any significant extent.


That should not matter. Quantum computers promise to do things that classical computers will never be able to do. A 'proper' QC with on the order of a few thousand qubits can perform calculations that even a known-universe-sized classical computer will find intractable. Optimisation quickly becomes irrelevant when you scale these problems just a little bit more.


> Quantum computers promise to do things that classical computers will never be able to do. A 'proper' QC with on the order of a few thousand qubits can perform calculations that even a known-universe-sized classical computer will find intractable.

Your first statement is correct, but Shor's algorithm (the one I suspect you are referring to when you talk about a universe-sized computer) is not an example of one. We do not know that there is not a classical factoring algorithm that is as good as or better than Shor's, and we have no quantum algorithms as of yet that give an exponential speedup over any possible (but possibly yet unknown) classical algorithm (which is what the universe-sized-computer argument needs). We have some that give more modest speedups that are known to be unbeatable by classical algorithms; Grover's search algorithm is such an example, but it is only quadratic. This means that if you have a quantum computer of size N, you only need a classical computer of size N^2 to match it.

> Optimisation quickly becomes irrelevant when you scale these problems just a little bit more.

The problem with quantum computers isn't optimization of algorithms, but it is still optimization of strategies for dealing with errors (which sometimes look a lot like algorithms). Building reasonably accurate classical logic gates was figured out with a lot less effort once we started even thinking of building such things, and their reliability didn't fall off very quickly with increasing size. Our technologies for quantum computers, on the other hand, all have crippling flaws as of yet. The most common one (superconducting qubits and ion-trap based designs both suffer from this) is that when we try to make the computer bigger, it needs an unscalable amount of error correction and eventually stops working altogether. Some other approaches (the linear optical quantum computer, for example) can scale up without getting worse per se, but the gates we can build are already so unreliable that we still need too much error correction to scale. So optimizing our error correction strategies is one possible avenue that is being explored.


I do not believe there are any problems known to be solvable by QC, and unsolvable by classical computers.

edit: I was trying to be polite, but Scott Aaronson has spilled quite a lot of blog ink denouncing remarks like the parent post as utter nonsense.


> I do not believe there are any problems known to be solvable by QC, and unsolvable by classical computers.

The space between unsolvable by classical computers and solvable practically by classical computers is... significant.


You might have misunderstood my comment (or Scott's writings?)

Classical computers are limited in what they can compute because the universe has limits. A 10,000-qubit quantum computer can factor fairly large numbers. 10,000 qubits is pretty small; one day, hopefully, it will fit inside a small room.

To factor those same numbers using a classical computer you'd have to make a computer the size of the entire known universe and run it until the heat death of the universe. Obviously this is not possible, even in principle.

Theoretically any QC computation can be simulated by a classical computer but in our limited universe you quickly run into the wall where the classical computer just becomes too big (bigger than the entire universe) and too slow (slower than the life of the universe).


I definitely reread your comment several times and kept stewing on the word "intractable" since I'd seen it used somewhere else to talk about problems in NP-Complete. I assumed if I was misinterpreting what you were saying, it would hinge on that word.

Aside from factoring, what kinds of things do we think will meaningfully change if we get general QC? Cryptographers are already preparing for the post-quantum world. Who else needs to be preparing?

Everything I think I understand about QC suggests that a practical breakthrough will not cause any changes in society, other than the abandonment of RSA. Am I missing something?


Since you seem to already be familiar with integer factoring, isn't factoring large integers something that is "solvable by QC, and unsolvable by classical computers"?


Classical computers can factor large integers.

Before this thread, I knew that Shor's algorithm and Grover's algorithm were two very important QC results. I knew that Shor's algorithm means that a QC would be able to decrypt anything that was ever encrypted with RSA. (ECC schemes are likely just as vulnerable, so cryptographers are looking at purely hash-based schemes: http://pqcrypto.org/hash.html)

What I learned this morning based on hints in this thread:

1) Grover's algorithm is a far more modest speedup compared to Shor's algorithm

2) Shor's algorithm only factors integers, but Grover's algorithm can more generally invert a function (which still threatens many currently used cryptographic functions)

So I'd guess that Grover's algorithm is the sort of thing people are talking about as useful in machine learning. There are probably other QC results that are worth getting excited about for people working with neural networks. The Google/Microsoft workshop this weekend has a number of sessions on quantum annealing, as well.

A big reason "unsolvable by classical computers" is such a silly way to phrase things is (paraphrasing Dr. Aaronson here) that simulated annealing techniques on classical computers are already producing such good results without QC. In the previous Shtetl-Optimized post to this one, he posts a Powerpoint deck for a talk he gave at the same conference, and it is quite instructive (but still enough over my head that I'm sitting on HN asking dumb questions).


> Classical computers can factor large integers.

I mean, since the best-known quantum algorithm for factoring integers is asymptotically faster than the best-known classical algorithm for factoring integers, isn't there some definition of "large" for which this is no longer true?

(I'm assuming you mean "classical computers can factor large integers effectively", since the class of all problems solvable by a classical computer is exactly the same as the class of all problems solvable by a quantum computer)


> the class of all problems solvable by a classical computer is exactly the same as the class of all problems solvable by a quantum computer

I started this sub-thread because I'm pretty sure this statement is true. But the details are tricky.

> the best-known quantum algorithm for factoring integers is asymptotically faster than the best-known classical algorithm for factoring integers

The only reason that current commercial cryptography works is that classical computers can't factor large integers effectively. But a quantum speedup for factoring isn't particularly profound, since factoring is just a small part of computing.


> > the class of all problems solvable by a classical computer is exactly the same as the class of all problems solvable by a quantum computer

> I started this sub-thread because I'm pretty sure this statement is true. But the details are tricky.

Actually that is one of the first problems that was solved when we started looking into quantum computers. Our models of classical and quantum computation are both Turing complete and only Turing complete. Any classical computer can simulate a quantum computer, and any quantum computer can simulate a classical one. A quantum computer's only advantage is in speed, and we don't yet have proof that there is an exponential speed benefit that cannot be circumvented by finding a better classical algorithm.
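A quick sketch of both halves of that statement: simulating a quantum computer classically is easy in principle (just track the full state vector), but the vector has 2^n amplitudes, so the cost explodes with qubit count. A minimal numpy toy (dense matrices, tiny n, nothing optimized):

    import numpy as np

    n = 3                                            # number of qubits
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                   # start in |000>

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
    I = np.eye(2)

    def apply_single_qubit_gate(gate, qubit, state, n):
        # Build the full 2^n x 2^n operator as a Kronecker product and apply it.
        op = np.array([[1.0]])
        for q in range(n):
            op = np.kron(op, gate if q == qubit else I)
        return op @ state

    for q in range(n):                               # put every qubit into superposition
        state = apply_single_qubit_gate(H, q, state, n)

    print(np.round(np.abs(state) ** 2, 3))           # uniform over all 8 outcomes

This dense-matrix version already chokes around 14-15 qubits, and even the cleverest state-vector simulators give out somewhere around 45-50, simply because the state itself has 2^n amplitudes; that's the sense in which simulability says nothing about speed.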

Since asymmetric cryptography generally relies on an exponential barrier between the process of using the encryption and the process of breaking it, an exponential speedup is what we would really need to completely break modern asymmetric cryptography.


It might provide a rapid, general approach to non-convex optimization for neural nets.

And that changes everything, probably more than anything since (the iPhone|the internet|computers|penicillin|the industrial revolution|fire) depending on how optimistic you are. It'll change things a lot, anyway.

See http://research.microsoft.com/en-us/events/qml/


Imagine simulating a human brain. I'm not sure how massive a computer would need to be today to simulate the neurons, but an efficient implementation can be made to be the size of.. Well.. A brain.

I could see this causing massive changes in society: an artificially intelligent simulation of a super-smart human that can be tuned toward a specific problem area and made much more focused and efficient than a purely biological brain could be...


Is there a specific reason a quantum computer would be better at simulating a human brain than a classical computer?


Well, simulating the chemistry in the brain would, I think, involve simulating some quantum mechanical things, which a quantum computer might be better equipped to handle?

That's the argument I've heard, anyway.

I don't know how that works when the qubits are being used to deal with other sorts of variables than bits though?


lololomg replied to you about Grover's algorithm.

https://en.wikipedia.org/wiki/Grover%27s_algorithm

For some reason, his comment was killed.


Grover's algorithm is a quantum algorithm for searching an unsorted database with N entries in O(N^(1/2)) time, using O(log N) storage space.
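For anyone curious what that looks like, here's a tiny classical state-vector simulation of Grover's algorithm (numpy, toy "database" of 256 entries, marked index chosen arbitrarily). It's only an illustration of the ~ (pi/4)*sqrt(N) iteration count; simulating it classically obviously still costs O(N) work per step.

    import numpy as np

    n_qubits = 8
    N = 2 ** n_qubits                     # size of the unsorted "database"
    marked = 37                           # the index we're searching for

    state = np.full(N, 1 / np.sqrt(N))    # uniform superposition after Hadamards

    iterations = int(round(np.pi / 4 * np.sqrt(N)))     # ~13 for N = 256
    for _ in range(iterations):
        state[marked] *= -1               # oracle: flip the sign of the marked amplitude
        state = 2 * state.mean() - state  # diffusion: inversion about the mean

    print(iterations, int(np.argmax(state ** 2)), round(float(state[marked] ** 2), 4))
    # 13 iterations; argmax is 37; probability of measuring the marked item is ~0.99

A classical search needs ~N/2 lookups on average; Grover needs ~sqrt(N), which is exactly the quadratic (not exponential) speedup mentioned upthread.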


Not proven. The best currently known algorithms for factoring on a classical computer take asymptotically longer than the best known quantum ones. That doesn't necessarily mean it's a fundamental limit of the universe. More generally we have no proof that BQP =/= P.


Scott Aaronson is fucking awesome. Whenever there's a new paper claiming breakthroughs about quantum computing or P=NP, I go to his blog and wait for him to clearly explain it.

He is the very rare case of a top notch scientist with a gift for explaining things well.


It isn't that clear, but Scott Aaronson did make an admission in this article that I do not believe he has made before -- he admitted the D-Wave computer exhibited quantum effects:

"As far as I’m concerned, this completely nails down the case for computationally-relevant collective quantum tunneling in the D-Wave machine"

I believe many of his previous blog posts had argued strongly that there was no proof of any quantum effects in D-Wave computers, but I could be misremembering.


It's been clear that the D-Wave machine is exhibiting quantum effects for several years now. See Aaronson's blog (http://www.scottaaronson.com/blog/?p=1400) and this Nature Communications paper (http://www.nature.com/ncomms/journal/v4/n5/abs/ncomms2920.ht...).

On the other hand, it's still not clear that this results in computationally relevant speedups. In my mind, this is not much of a shift in the status quo.


Scott Aaronson's entire professional life is essentially to distinguish what could and couldn't be a quantum effect in computation.

So yes, your memory is accurate that his previous blog posts argue strongly that nothing D-Wave computers had done proved any quantum effects. His acknowledgement that they've achieved quantum computation is a compliment to their engineering prowess. But he is complimenting new engineering results.


It's not an "admission" - there wasn't proof before (Aaronson never claimed that the effects weren't computationally relevant, only that they hadn't been proven to be) and now there is.


Can someone explain this to those of use who have no clue what these calculations are good for? If this machine is faster to do something, what would you use that something for?


The YouTube channel "In a Nutshell" released a good summary on this recently: https://www.youtube.com/watch?v=JhHMJCUmq28


That video is about Gate Quantum Computers; DWave machines are NOT gate quantum computers; they call their machines quantum annealing machines. It is not known what complexity class of problems can be solved efficiently by quantum annealing machines, or whether that class is equivalent to that of classical machines.

The result shows that the DWave machine is asymptotically faster than the Simulated Annealing algorithm (yay!), which suggests that it is executing the Quantum Annealing algorithm. However, the paper also explicitly states that this does not mean that the Dwave machine is exhibiting a 'quantum speedup'. To show this, they would need to show it outperforming the best known classical algorithm, which, as the paper acknowledges, it does not.

What the paper does seem to be showing is that the machine in question is actually fundamentally quantum in nature; it's just not clear yet that the type of quantum computer it is amounts to an improvement over classical ones.


"If this machine is faster to do something..."

That's the point; is the machine faster? If the machine is actually doing quantum computing, it won't slow down as much for larger problems (for some problems). But they haven't been able to prove that it's doing quantum magic.


At the moment, it's only faster for the useless task of simulating itself. The hope is that that can be extended to more practical questions, but for now it's a pure research project.


D-Wave's computers work as intended and exhibit "computationally relevant quantum effects" but we still don't know whether the strategy they are using will ever yield practical advantages. Other groups (Google, University of Maryland) seem likely to achieve genuine quantum speedups over best-known classical algorithms within the next few years.


Yes

But that's the evolution of technology. The first cars were not a significant advantage over horses.

Sometimes a technology is much more efficient from the start (like transistors), but this is rare.


"Not efficient at the start" is a little different from "we have no reason to think it will ever be more efficient".


While Scott Aaronson is highly respected, I think we are rushing to put down the D-Wave guys. They seem very conservative in their claims. We should celebrate that we have come even this far: we are actually constructing machines that were just research fantasies a few decades ago, and are at a point where we can even think about comparing them with the best we have built. I don't think anyone in their right mind thinks D-Wave or other QCs will replace classical machines in the next 5-10 years. It's the progress that we need to be positive about, building on each other's work.


> They seem very conservative in their claims.

Hahaha!

D-Wave's reputation is one of exaggeration and putting marketing before science. That's how Scott ended up as de-facto "chief D-Wave skeptic": he kept complaining about their exaggerations and the exaggerations of journalists covering them.

(The paper being discussed now does actually seem conservative in its claims, but it's by a team at Google; not D-Wave.)


I'm wondering if this will allow for a huge speedup in 3D rendering? Specifically, unbiased Monte Carlo ray-tracing / path-tracing renderers like Maxwell Render. Does anyone know of any work being done in this area of quantum computing?


"Does anyone know of any work being done in this area of Quantum computing?"

Work in this area is still primarily focused on making it work at all. For instance, it isn't called out in the linked blog since by now Scott probably considers it basic background information, but D-Wave only solves a very particular problem, and it is not entirely clear that it obtains a better solution to that problem than a classical algorithm can, nor that encoding real problems into that form won't end up costing you all of the gains. Really pragmatic applications are still a ways into the future. It's hard to imagine what they might be when we're still so early in the process, and still have no good idea what either the practical or theoretical limits are.


Quantum annealing could "potentially" provide a speedup for any optimization problem that can be reduced to a spin Ising form, particularly quadratic unconstrained binary optimization (QUBO).

Notice the emphasis on potentially, though. This paper only shows that 1) for a particular class of problems the quantum annealer has a constant speedup over one current classical algorithm, and 2) the quantum annealer scales better for number partitioning than a few current classical algorithms.
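As a toy example of that reduction (my own illustration, not from the paper): number partitioning maps directly onto an Ising energy. Minimize E(s) = (sum_i a_i * s_i)^2 over spins s_i = +/-1, which expands into pairwise couplings J_ij = a_i * a_j, i.e. exactly the QUBO/Ising form an annealer accepts. Brute-forced here on a tiny made-up instance; the annealer's job would be the same minimization over far more spins.

    import itertools

    numbers = [4, 7, 11, 1, 9, 12, 3, 6]       # arbitrary toy instance (sum = 53)

    best_energy, best_spins = None, None
    for spins in itertools.product([-1, 1], repeat=len(numbers)):
        # Ising energy: squared difference between the two subset sums.
        energy = sum(a * s for a, s in zip(numbers, spins)) ** 2
        if best_energy is None or energy < best_energy:
            best_energy, best_spins = energy, spins

    set_a = [a for a, s in zip(numbers, best_spins) if s == +1]
    set_b = [a for a, s in zip(numbers, best_spins) if s == -1]
    print(best_energy, set_a, set_b)            # energy 1: the subset sums differ by 1 (27 vs 26)

The catch, as noted above, is that "potentially" carries a lot of weight: embedding your problem onto the machine's spin graph can eat the gains.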


For a particular class of problems the quantum annealer has a constant speedup over one current classical algorithm, running on a single-core CPU.

I think it would at least have been meaningful if they had compared these algorithms against known parallel solutions on both CPU and GPU (and perhaps on FPGA too, so we could see how it would potentially compare against a specialized ASIC solution).


It doesn't matter in this context.

Their conclusion is that in big O notation, quantum annealing and best classical algorithms are the same.

Parallelization would get you a factor 1/k (k being the number of cores) in favor of the classical algorithm at best, while specialized hardware would give you a constant-factor boost without affecting the big-O characteristics.

The problem is not "which is faster given a fixed n". Quantum computers are interesting for certain problems when n becomes larger and larger.

The real issue is whether there is an exponential difference between quantum annealing and classical algorithms or not, in big-O notation. Remember, that is the reason why people are so interested in quantum computation; not some constant speedup.

O(f(n)) vs O(log(f(n))) and O(f(n)/k) vs O(log(f(n))) are essentially the same in terms of their capabilities of solving NP problems.


> Parallelization would get you a factor 1/k (k being the number of cores) in favor of the classical algorithm at best.

Exactly. The question is what the actual real-life speedup would be. If it is not easily parallelizable, then it becomes a much more interesting finding.

I am not familiar with these algorithms, but the name QMC would suggest that this is an embarrassingly parallelizable problem, so my interest might just stem from my ignorance.

> The real issue is whether there is an exponential difference between quantum annealing and classical algorithms

Yes, I know and that was not the focus of my comment. Sorry.


If you give the classical algorithm k cores, then to be fair you should give the quantum side k D-Waves. If you are only making a scaling argument, then there is no need to include terms that cancel.

If you want to make a comparison between currently available hardware, then the metric should be some combination of speed, power usage, space, cost, etc.


That isn't a fair equivalence. A core costs about $100, while a D-Wave machine costs about $20 million.


For actual fairness, you'd need to consider all the research and development costs that got that core to $100.


People above are right about parallelization not being a useful benchmark.

On the other hand, benchmarking against other special purpose hardware (like an FPGA, ASIC, RQL, etc.) is definitely of interest.


Why is it not? It would show how easily this problem is actually parallelizable in practice.


Because both D-wave and the classical algorithms will benefit linearly with parallelization.

If they want to make a claim about the scaling ratio between D-wave and classical algorithms, then this linear term would cancel.


The question is what the actual real-life speedup would be. If it is not easily parallelizable, then it becomes a much more interesting finding.

I am not familiar with the algorithms used, but the name QMC would suggest that this is an embarrassingly parallelizable problem, so my interest might just stem from my ignorance, i.e. I am looking for assurance that my assumption actually holds.

But if a quantum computer were demonstrated to have a huge constant speedup on a problem that cannot be easily parallelized (i.e. not this case, I assume), then a cluster of classical computers could not make up the difference.


The amount of time taken to solve a given problem in "real life" is irrelevant.

This is for problems where a brute-force solution on a classical machine needs O(2^n) computational steps. This is an exponential relationship; as the problem size (measured by n) becomes large, the number of steps required becomes vast. If each step takes a nanosecond, and n=30, then finding a solution will only take about a second. Double the problem size to n=60, however, and now it will take 36 years.

A quantum algorithm for the same problem might be able to run in subexponential time. It might still be something horrible like O(n^7) and it would still scale better than O(2^n): at n=30 it would take 22 seconds, but at n=60 it would only take about 45 minutes.
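The arithmetic behind those numbers, if you want to check it (assuming one step per nanosecond, as above):

    NS = 1e-9                        # seconds per step
    for n in (30, 60):
        print(n,
              2 ** n * NS, "s exponential,",    # n=30: ~1 s;   n=60: ~1.2e9 s (~36 years)
              n ** 7 * NS, "s polynomial")      # n=30: ~22 s;  n=60: ~2800 s (~46 minutes)

Doubling n multiplies the exponential time by 2^30 (about a billion), but the polynomial time only by 2^7 = 128.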

This is why computer scientists use Big-O notation; the clock speed of the computer is irrelevant if the algorithm scales badly. You never use bubble sort because it scales badly; almost anything else you can come up with will be better. Likewise, if you had a real quantum computer that could run Shor's algorithm, then you'd never factor numbers using any other method; Shor's algorithm would always win.


> The amount of time taken to solve a given problem in "real life" is irrelevant.

I thank you for your thorough answer.

I am discussing not the importance of the quantum speedup (which was not demonstrated), but rather the constant speedup compared to a single-core CPU (and we know that in real life this difference does not even exist, but we can pretend that it does, ok?).

Google "google 100 million times faster than" and you can see already headlines poping up (for example this from Arstechica http://arstechnica.com/information-technology/2015/12/google... They even do not mention that actually the same problem can be solved on the single core CPU faster than on D-WAVE by using a different algorithm).

So I am dealing with a hypothetical situation where some sort of quantum annealing computer does in fact have a huge constant speedup compared to a single-core CPU in solving one very important problem.

Imagine that we live in a nation state that does not have access to quantum annealing technology (within a reasonable time frame) but has a state-of-the-art silicon fab.

Could we build a classical cluster of the same speed? How many CPU cores would we need? What if we use GPUs? What if we build a problem-specific chip (ASIC)?

What if there is no easily parallelizable solution? Could we then even match the problem-solving speed?

I hope that it was clear that even without an asymptotic speedup there are specific cases where a huge constant speedup would matter.


There's no way to answer that question without going into the details of every specific problem you might want to solve, but since they've only demonstrated a constant speedup then yes, you can always build a normal computer which has comparable speed. Or you can improve your software; that's happened often enough already.


A single Dwave is not faster than a cluster of CPUs.

Is a cluster of Dwaves faster than a cluster of CPUs? Maybe at the two problems Google looked at.


Why do you think D-Wave machines can be clustered at all? Unless you say its operation is not essentially quantum, that would mean demonstrating coherence between a bunch of machines and a scalable quantum network!


They are more concerned with the fundamental computational-complexity nature of the speedup (is it superlinear/exponential?), not the actual real-world hardware results.

All throwing more hardware at a problem does is (at best) give a linear increase.


10^8 is a lot.


If we compare single-core CPU performance against a heavily optimized specialized solution, then it is not (it does not follow that the same method can be applied in this case, but then again, a better classical algorithm on a single core beats the reported quantum result according to the paper (PS: this is in the paper, not on the graph)).

Here is comparison http://bitcoin.stackexchange.com/questions/36412/what-is-the... between GPU and ASIC bitcoin mining and here is https://en.bitcoin.it/wiki/Non-specialized_hardware_comparis... GPU and CPU comparison.

From this, the difference between a GPU and an ASIC solution is about 10^4.

Google tested against a single core of an Intel(R) Xeon(R) CPU E5-1650, so it would be roughly 10^2 slower than a GPU.

So one could say that an ASIC solution for bitcoin mining is about 10^6 times faster than a single-core CPU solution.

A million-times difference is of course not a 100-million-times difference, but it is still a lot. And then again, alternative classical algorithms would beat the current result: "Based on the results presented here, one cannot claim a quantum speedup for D-Wave 2X, as this would require that the quantum processor in question outperforms the best known classical algorithm. ... This is because a variety of heuristic classical algorithms can solve most instances of Chimera structured problems much faster than SA, QMC, and the D-Wave 2X" (from http://arxiv.org/abs/1512.02206).

I think that this is an important and interesting result, but in my opinion it is not as impressive as it may appear.


The popular perception of quantum computers as "doing things in parallel" is very misleading. A quantum computer lets you perform computation on a superposed state while maintaining that superposition. But that only helps if the structure of the problem lets you somehow "cancel out" the incorrect results leaving you with the single correct one. You can't actually run a bunch of calculations in parallel and measure all of the results; it's more like you can, in some limited cases, make it as though you'd "magically" made the right choice to start with.


The phrasing of "factor-10^8 speedup" threw me for a bit. Is that a negative speedup, meaning it's significantly slower? No, it's actually "10^8 faster".


(-10)^8 :P


They probably mean the "-" as a hyphen, as in "factor-100 speedup".


Any book that gives an overview of quantum computing to a programmer? A sort of "thing explainer" for quantum computing?


What's a thing explainer? Unfortunately, since quantum computing is still really only an academic pursuit, you're only going to find either technical literature or material dumbed down to the point of not being worth it.

Saying that, there is one book I like called Quantum Processes, Systems, and Information by Schumacher (who coined the term qubit) and Westmoreland, since it is an introductory text and has a good discussion of the basics of quantum computing. It does, however, assume you are a physics student with a good understanding of linear algebra.


This is a reference to Thing Explainer, the Randall Munroe book - https://xkcd.com/thing-explainer/

I think since so much quantum physics depends on maths, which in turn depends on highly specialised terminology or at least symbols, you are right that a Thing Explainer might have serious difficulty with quantum physics. A broad overview probably wouldn't have a serious advantage over the various lay explanations and thought experiments that are floating around at the moment. But as a non-physics student with an imperfect understanding of linear algebra, I would love to be proven wrong!


I really appreciate your recommendation.

I did not major in physics, but I have studied and applied linear algebra and have fairly good intuition for it.

It seems to me then that I should study some intermediate physics (books or courses) before studying the Schumacher book. Do you have any recommendations for that? Thanks in advance.


Learning a bit of Quantum Mechanics is going to be more helpful than anything else in physics to understand quantum computing.

Introduction to Quantum Mechanics by Griffiths is a standard undergrad text. The math is all "simple" (linear algebra and calculus). It's the learning a new way to think about the world that's the hard part.


Thank you!



I have this book and it's great, though it's not quantum computing "for a programmer" -- I don't think such a thing really exists. It was based on a series of quantum computing lectures he did for CS grad students.

If you have a solid undergraduate education in computer science theory (computability, computational complexity, etc.) and probability, you can get pretty far through the book and take away useful things. But simply being a programmer doesn't mean you'll get very far.

I got pretty far but got lost toward the end. He states a LOT of results and they become hard to remember without working through them more deeply. It's all theory -- there's absolutely nothing like "programming" in the book.


I, too, loved the book but think it's not what the asker was looking for. In particular, it assumes a lot of knowledge about quantum computing (it doesn't cover, say, Shor's algorithm or quantum Fourier transforms). Its aim is to present information about complexity theory, not "Learn to Program Quantum Computers in 24 Days".

There was a good course on Coursera some years back that I found quite enjoyable and was my first introduction to quantum computing. It serves as an OK intro to quantum mechanics, too, from the abstract point of view where we just assume we have qubits versus the physics point of view where we start by looking at atoms. Maybe the course is still available.


Thank you for that review. I should have phrased my question better, because I actually meant a book for CS grads that focuses on the computational side more than the physics theory. Programming was the wrong word.


Thank you for the recommendation. I didn't know the author of this post is himself a published author on the subject. Will check it out.


If you want to actually understand it (complete with exercises), Nielsen and Chuang (maybe go for a second-hand copy, the editions don't change that much and it's pretty expensive new). It's not ELI5 though.


Thank you for the recommendation. I'll look for this.


TL;DR

"Finally, on the special Chimera instances with the tall, thin energy barriers, the authors find that the D-Wave 2X reaches the global optimum about 108 times faster than Quantum Monte Carlo running on a single-core classical computer. But, extremely interestingly, they also find that this speedup does not grow with problem size; instead it simply saturates at ~108. In other words, this is a constant-factor speedup rather than an asymptotic one."


10^8, not 108!


Just for clarification (for anyone),

10^8 or 100,000,000 is still a constant factor =)


Well yes but Google spending some obscene amount of money for a 108x speedup is beyond farcical even if it's not asymptotic :)


Google isn't spending money for even a 10^8 speedup at this point; Google is spending money to figure out what will be possible in the future.



