"...it has also been shown that for the class of problems used in initial benchmarks,the performance of the devices also correlates well with a classical(meanfield) approximation of quantum annealing ,which raises questions about whether the devices can outperform classical optimizers."
In other words, if you ran a simulation of this device on a normal computer and gave it the same problem to solve, you would get basically the same efficiency.
This probably sounds like a somewhat useless result (quantum computer beats B-team classical algorithm), but it is in fact interesting because D-Wave's computers are designed to perform quantum annealing, and simulated annealing is the roughly analogous classical algorithm. However, they only found evidence of a constant speedup (i.e. one that 4000 qubits wouldn't help with), albeit a large one, compared to a somewhat better algorithm (Quantum Monte Carlo, which ironically is not a quantum algorithm), and they still can't beat an even better classical algorithm (Selby's) at all, even in a way that won't scale.
Scott's central thesis is that although it is possible there could be a turning point past 2000 qubits where the D-Wave will beat our best classical alternative, none of the data collected so far suggests that. So it's possible that a 4000-qubit D-Wave machine will exhibit this trend, but there is no evidence of it (yet) from examining a 2000-qubit machine. Scott's central gripe with D-Wave's approach is that they don't have even a pie-in-the-sky theoretical reason to expect this to happen, and scaling up quantum computers without breaking the entire process is much harder than for classical computers, so simply making them bigger doesn't seem like a solution. My personal frustration with D-Wave's marketing is that the fact that their computers are super specialized often gets lost; these computers lack the quantum equivalent of a complete set of digital logic gates.
Yes. See the 4-item list near the end of the post. It's not about the number of qubits. It might be possible to make the D-Wave approach effective. But that's going to involve either being lucky on an open research problem (whether it's even possible to get a quantum speedup using stoquastic Hamiltonians only), or fundamental improvements to their approach. It's not simply a case of scaling up what they already have.
Quantum computing is a lot like fusion research. Nothing we know how to build will break even but we still need to build something so we can learn how to build something that could break even.
We already have better energy technologies than the ones we use, but we don't deploy them, so why bother building an even better technology that we also likely won't deploy?
It's like the dev team that won't refactor but thinks its new design will be perfect and won't need refactoring either. The new project invariably needs refactoring, which they don't do, leading to another ground-up rewrite.
I'm pretty sure by the year 2200, we'll have a coffee cup sized zero point energy device but still get our power from coal.
Well, why don't you do it then? The potential profits are absurd; don't you want to be the world's first trillionaire?
> I'm pretty sure by the year 2200, we'll have a coffee cup sized zero point energy device but still get our power from coal.
You're suggesting that people will choose to consume expensive goods and services over otherwise identical cheaper goods and services? This does not seem to mesh well with observed human behaviour.
The government is not really a rational actor in economic markets and is often guided by other factors like popular vote, corruption, etc.
It is hard to explain why we don't use more nuclear power from an economic standpoint, but it is easy to understand how politicians and lobbyists have capitalized on fear mongering to create our current energy markets.
It's very easy to explain why we don't use more nuclear power from an economic standpoint.
See, for example:
> Government subsidies to the nuclear power industry over the past fifty years have been so large in proportion to the value of the energy produced that in some cases it would have cost taxpayers less to simply buy kilowatts on the open market and give them away, according to a February 2011 report by the Union of Concerned Scientists.
It's something of an anti-pattern on HN to think that nuclear power is somehow this secret source of cheap energy and it is only those damn anti-science greens who conspire to give it a bad name.
It's not. Nuclear power is expensive.
Some will blame unfair (?) government regulation for that. It is true that regulation adds to the costs, but regulation seems reasonable given the fairly large costs associated with accidents.
Some will claim that the dangers associated with Nuclear power are overstated, and that coal kills more people. I agree that coal should be replaced, but not with nuclear.
Some will claim that only nuclear power can provide base-load capability without greenhouse gases. I disagree, and point to the range of energy storage capabilities available now, the rapidly dropping costs of batteries, and the increasing decentralization of the power grid.
In our nearest neighbour (Australia), with an extremely similar culture and history, it's about 17%; they don't have any good geothermal sites, and few good hydro sites, combined with a significantly larger population.
In the rich world, we've basically tapped every major viable hydro source (and learned a lot about the environmental costs of doing so, and sharply revised our definition of "viable" too). You claim that there's energy tech we're just not using, but hydro is absolutely not an example of that. The difference between your country and Australia is geography, not technology.
If your answer to climate change is "let's all be more like Norway", you may need to rethink your understanding of the problem.
Unfortunately, in my area only about 23%-30% of the power generation is composed of renewable sources, primarily hydro.
Admittedly, I'm making numbers up, but I can really only see a gov't skewing the market by 10-200% before things become unsustainable. In a free society, the lower bound; at the higher end, something like Venezuela.
That doesn't seem like the kind of range that would be a practical concern. Whether we're solving AGW or facing a medium-term petroleum shortage, it's probably going to require a change in emissions and petroleum extraction on a different scale to matter.
Governments are inefficient, but those as inefficient as the orders-of-magnitude difference we're talking... those we call revolutions! [/dramatic effect]
Apologies for the comedy if that's not your style, but I'm curious to hear your thoughts.
Solving AGW doesn't solve the next ice age...
We are not going to be extracting oil at the current rate for the next 10,000 years, or even the next 200. We could even start manufacturing it, because from an energy standpoint refining, transport, and extraction cost so much that oil is basically just a battery at this point. (See: Canada's oil sands.)
Did you know it's virtually impossible to get whale oil these days? Do you care? Or do you just use a whale oil substitute?
The ironic thing is that an increasing price actually increases supply.
As to whale oil, the Inuit still have access to whale oil. And there was a clear shortage in 1861, but there were also existing substitutes before modern whaling became so efficient. Petroleum showed up after the fact; it's only in hindsight that people think 'Oil' arrived in time to save people from the whale oil crash.
Well, there's this company called Apple that produces a range of products...
Simulated Annealing is a great tool for optimization problems because it is almost trivial to implement: if you have a combinatorial problem on which you can evaluate the objective function (preferably one where you can evaluate incremental changes cheaply), then SA is a good starting point. It is a meta-heuristic that can be applied to almost any problem.
However, Simulated Annealing is almost never the best way. Better methods (more efficient, better results) are usually based either on complex meta-heuristics or custom heuristics for approximate results, or on branch and bound or dynamic programming for a guaranteed optimum.
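For the curious, here's a minimal sketch of the SA loop in Python (the cost and neighbor functions are hypothetical placeholders you'd supply for your own problem):

    import math
    import random

    def simulated_annealing(initial, cost, neighbor,
                            t0=1.0, cooling=0.999, steps=100_000):
        # Generic SA loop: `cost` scores a state, `neighbor` proposes a move.
        state, energy = initial, cost(initial)
        t = t0
        for _ in range(steps):
            candidate = neighbor(state)
            delta = cost(candidate) - energy
            # Always accept improvements; accept worsening moves with
            # probability exp(-delta/t), which shrinks as t cools.
            if delta < 0 or random.random() < math.exp(-delta / t):
                state, energy = candidate, energy + delta
            t *= cooling
        return state, energy

    # Toy usage: minimize a bumpy 1-D function over the integers.
    best, e = simulated_annealing(
        0,
        cost=lambda x: (x - 42) ** 2 + 10 * math.sin(x),
        neighbor=lambda x: x + random.choice((-1, 1)),
    )

For a traveling-salesman-style problem, neighbor might instead swap two cities in the tour, and cost would be the tour length.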
If quantum computing enables a massive speed-up of a simple SA-like meta-heuristic, then that would make "solving" NP-hard problems a much more common task.
You could just insert your problem into this "black box", and a few seconds later you'll get the result.
They could run the classical algorithms a factor of 1,000,000 slower and reach the same conclusion: that the D-Wave's quantum annealing process is asymptotically equivalent (scales equivalently) to classical algorithms.
Your first statement is correct, but Shor's algorithm (the one I suspect you are referring to when you talk about a universe-sized computer) is not an example of one. We do not know that there is no classical factoring algorithm that is as good as or better than Shor's, and we have as of yet no quantum algorithm that gives an exponential speedup over every possible (possibly as-yet unknown) classical algorithm, which is what the universe-sized-computer claim needs. We have some that give more modest speedups that are known to be unbeatable by classical algorithms; Grover's search algorithm is such an example, but it is only quadratic. This means that if you have a quantum computer of size N, you only need a classical computer of size N^2 to match it.
> Optimisation quickly becomes irrelevant when you scale these problems just a little bit more.
The problem with quantum computers isn't optimization of algorithms, but it is still optimization of strategies for dealing with errors (which sometimes look a lot like algorithms). With classical computers, building reasonably accurate logic gates was figured out with far less effort once we started seriously thinking of building such things, and their reliability didn't fall off quickly with increasing size. Our technologies for quantum computers, on the other hand, all have crippling flaws as of yet. The most common one (superconducting qubits and ion-trap designs both suffer from this) is that when we try to make the computer bigger, it needs an unscalable amount of error correction and eventually stops working altogether. Some other approaches (the linear optical quantum computer, for example) can scale up without getting worse per se, but the gates we can build are already so unreliable that we still need too much error correction to scale. So optimizing our error correction strategies is one possible avenue that is being explored.
edit: I was trying to be polite, but Scott Aaronson has spilled quite a lot of blog ink denouncing remarks like the parent post as utter nonsense.
The space between unsolvable by classical computers and solvable practically by classical computers is... significant.
Classical computers are limited in what they can compute because the universe has limits. A 10,000-qubit quantum computer can factor fairly large numbers. 10,000 qubits is pretty small; one day, hopefully, it will fit inside a small room.
To factor those same numbers using a classical computer you'd have to make a computer the size of the entire known universe and run it until the heat death of the universe. Obviously this is not possible, even in principle.
Theoretically any QC computation can be simulated by a classical computer but in our limited universe you quickly run into the wall where the classical computer just becomes too big (bigger than the entire universe) and too slow (slower than the life of the universe).
Aside from factoring, what kinds of things do we think will meaningfully change if we get general QC? Cryptographers are already preparing for the post-quantum world. Who else needs to be preparing?
Everything I think I understand about QC suggests that a practical breakthrough will not cause any changes in society, other than the abandonment of RSA. Am I missing something?
Before this thread, I knew that Shor's algorithm and Grover's algorithm were two very important QC results. I knew that Shor's algorithm means that a QC would be able to decrypt anything that was ever encrypted with RSA. (ECC schemes are likely just as vulnerable, so cryptographers are looking at purely hash-based schemes: http://pqcrypto.org/hash.html)
What I learned this morning based on hints in this thread:
1) Grover's algorithm is a far more modest speedup compared to Shor's algorithm
2) Shor's algorithm only factors integers, but Grover's algorithm can more generally invert a function (which still threatens many currently used cryptographic functions)
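To make (2) concrete, here's a toy Python sketch of the classical brute force that Grover quadratically speeds up; the truncated SHA-256 here is just an illustrative stand-in for a real cryptographic target:

    import hashlib

    def f(x):
        # Toy "one-way" function: a 16-bit truncation of SHA-256.
        return hashlib.sha256(x.to_bytes(4, "big")).digest()[:2]

    def invert(target, domain_size):
        # Classical brute force: up to N evaluations for a domain of size N.
        # Grover's algorithm would need only ~sqrt(N) quantum queries.
        for x in range(domain_size):
            if f(x) == target:
                return x

    print(invert(f(12345), 1 << 16))  # recovers a preimage of the digest

The quadratic speedup is why symmetric schemes can survive by doubling key sizes, while RSA (attacked by Shor's exponential speedup) cannot.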
So I'd guess that Grover's algorithm is the sort of thing people are talking about as useful in machine learning. There are probably other QC results that are worth getting excited about for people working with neural networks. The Google/Microsoft workshop this weekend has a number of sessions on quantum annealing, as well.
A big reason "unsolvable by classical computers" is such a silly way to phrase things is (paraphrasing Dr. Aaronson here) that simulated annealing techniques on classical computers are already producing such good results without QC. In the previous Shtetl-Optimized post to this one, he posts a Powerpoint deck for a talk he gave at the same conference, and it is quite instructive (but still enough over my head that I'm sitting on HN asking dumb questions).
I mean, since the best-known quantum algorithm for factoring integers is asymptotically faster than the best-known classical algorithm for factoring integers, isn't there some definition of "large" for which this is no longer true?
(I'm assuming you mean "classical computers can factor large integers effectively", since the class of all problems solvable by a classical computer is exactly the same as the class of all problems solvable by a quantum computer)
I started this sub-thread because I'm pretty sure this statement is true. But the details are tricky.
> the best-known quantum algorithm for factoring integers is asymptotically faster than the best-known classical algorithm for factoring integers
The only reason that current commercial cryptography works is that classical computers can't factor large integers effectively. But a quantum speedup to factoring isn't particularly profound, since factoring is just a small part of computing.
> I started this sub-thread because I'm pretty sure this statement is true. But the details are tricky.
Actually that is one of the first problems that was solved when we started looking into quantum computers. Our models of classical and quantum computation are both Turing complete and only Turing complete. Any classical computer can simulate a quantum computer, and any quantum computer can simulate a classical one. A quantum computer's only advantage is in speed, and we don't yet have proof that there is an exponential speed benefit that cannot be circumvented by finding a better classical algorithm.
Since asymmetric cryptography generally relies on an exponential barrier between the process of using the encryption and the process of breaking it, an exponential quantum speedup is what we really need to completely break modern asymmetric cryptography.
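A toy illustration of that barrier (small primes chosen for speed; real RSA moduli are ~2048 bits):

    def trial_factor(n):
        # Classical trial division: ~sqrt(n) steps, i.e. exponential
        # in the bit-length of n.
        d = 3
        while d * d <= n:
            if n % d == 0:
                return d
            d += 2
        return n

    p, q = 999983, 1000003   # two small primes
    n = p * q                # multiplying them: instantaneous
    print(trial_factor(n))   # recovering p: ~500,000 iterations already

Multiplying stays cheap no matter how big the primes get, while this naive attack's cost grows exponentially with the bit-length; Shor's algorithm is what would collapse that gap.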
And that changes everything, probably more than anything since (the iPhone|the internet|computers|penicillin|the industrial revolution|fire) depending on how optimistic you are. It'll change things a lot, anyway.
I could see this causing massive changes in society. An artificially intelligent simulation of a super-smart human, one that can be tuned toward a specific problem area and made much more focused and efficient than a purely biological brain, could work...
Is the argument that I've heard.
I don't know how that works when the qubits are being used to represent variables other than bits, though.
For some reason, his comment was killed.
He is the very rare case of a top notch scientist with a gift for explaining things well.
"As far as I’m concerned, this completely nails down the case for computationally-relevant collective quantum tunneling in the D-Wave machine"
I believe many of his previous blog posts had argued strongly that there was no proof of any quantum effects in D-Wave computers, but I could be misremembering.
On the other hand, it's still not clear that this results in computationally relevant speedups. In my mind, this is not much of a shift in the status quo.
So yes, your memory is accurate that his previous blog posts argue strongly that nothing D-Wave computers had done proved any quantum effects. His acknowledgement that they've achieved quantum computation is a compliment to their engineering prowess. But he is complimenting new engineering results.
The result shows that the D-Wave machine is asymptotically faster than the Simulated Annealing algorithm (yay!), which suggests that it is executing the Quantum Annealing algorithm. However, the paper also explicitly states that this does not mean that the D-Wave machine is exhibiting a 'quantum speedup'. To show that, they would need to show it outperforming the best known classical algorithm, which, as the paper acknowledges, it does not.
What the paper does seem to be showing is that the machine in question is actually fundamentally quantum in nature; it's just not clear yet that the type of quantum computer it is is an improvement over classical ones.
That's the point: is the machine faster? If the machine is actually doing quantum computing, it won't slow down as much for larger problems (for some problems). But they haven't been able to prove that it's doing quantum magic.
But that's the evolution of technology. The first cars were not a significant advantage over horses. Sometimes a new technology is much more efficient from the start (like the transistor), but that's rare.
D-Wave's reputation is one of exaggeration and putting marketing before science. That's how Scott ended up as de-facto "chief D-Wave skeptic": he kept complaining about their exaggerations and the exaggerations of journalists covering them.
(The paper being discussed now does actually seem conservative in its claims, but it's by a team at Google; not D-Wave.)
Work in this area is still primarily focused on making it work at all. For instance, it isn't called out in the linked blog since by now Scott probably considers it basic background information, but D-Wave only solves one very particular problem, and it is neither entirely clear that it obtains a better solution to that problem than a classical algorithm can, nor clear that encoding real problems into that form won't end up costing you all of the gains. Really pragmatic applications are still a ways into the future. It's hard to imagine what they might be when we're still so early in the process, and still have no good idea what either the practical or theoretical limits are.
Notice the emphasis on potentially, though. This paper only shows that 1) for a particular class of problems the quantum annealer has a constant speedup over one current classical algorithm, and 2) the quantum annealer scales better for number partitioning than a few current classical algorithms.
I think it would at least have been meaningful if they had compared these algorithms against known parallel solutions on both CPU and GPU (and perhaps on FPGA too, so we could see how it might compare against a specialized ASIC solution).
Their conclusion is that in big O notation, quantum annealing and the best classical algorithms are the same.
Parallelization would get you a factor of 1/k (k being the number of cores) in favor of the classical algorithm at best, while specialized hardware would give you a constant-factor boost without affecting the big-O characteristics.
The problem is not "which is faster given a fixed n". Quantum computers are interesting for certain problems when n becomes larger and larger.
The real issue is whether there is an exponential difference between quantum annealing and classical algorithms or not, in big O notation.
Remember, that is the reason why people are so interested in quantum computation. Not some constant speed up.
O(f(n)) vs O(log(f(n))) and O(f(n)/k) vs O(log(f(n))) are essentially the same in terms of their capabilities of solving NP problems.
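To make that concrete, a quick sketch with hypothetical numbers (a 2^n-step algorithm, a budget of 10^18 steps, a million cores):

    import math

    k = 1_000_000                      # hypothetical number of cores
    budget = 1e18                      # steps we can afford

    def max_feasible_n(steps):
        # Largest n for which a 2^n-step algorithm fits in the budget.
        return int(math.log2(steps))

    print(max_feasible_n(budget))      # 59 on one core
    print(max_feasible_n(budget * k))  # 79 on a million cores: only +20

A million-fold increase in hardware buys you only 20 more units of problem size against an exponential algorithm; an asymptotic (exponential-to-polynomial) speedup changes the game entirely.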
Exactly. The question is what the actual real-life speedup would be. If it is not easily parallelizable, then it becomes a much more interesting finding.
I am not familiar with these algorithms, but the name QMC would suggest that this is an embarrassingly parallelizable problem, so my interest might just stem from my ignorance.
> The real issue is whether there is an exponential difference between quantum annealing and classical algorithms
Yes, I know and that was not the focus of my comment. Sorry.
If you want to make a comparison between currently available hardware, then the metric should be some combination of speed, power usage, space, cost, etc.
On the other hand, benchmarking against other special purpose hardware (like an FPGA, ASIC, RQL, etc.) is definitely of interest.
If they want to make a claim about the scaling ratio between D-Wave and classical algorithms, then this linear term would cancel.
I am not familiar with the algorithms used, but the name QMC would suggest that this is an embarrassingly parallelizable problem, so my interest might just stem from my ignorance, i.e. I am looking for assurance that my assumption actually holds.
But if a quantum computer were demonstrated to have a huge constant speedup on a problem that could not be easily parallelized (i.e. not this case, I assume), then a cluster of classical computers could not make up the difference.
This is for problems where a brute-force solution on a classical machine needs O(2^n) computational steps. This is an exponential relationship; as the problem size (measured by n) becomes large, the number of steps required becomes vast. If each step takes a nanosecond, and n=30, then finding a solution will only take about a second. Double the problem size to n=60, however, and now it will take 36 years.
A quantum algorithm for the same problem might be able to run in subexponential time. It might still be something horrible like O(n^7), but it would still scale better than O(2^n): at n=30 it would take 22 seconds, but at n=60 it would only take about 45 minutes.
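A quick sanity check of those numbers (one step per nanosecond):

    SECONDS_PER_STEP = 1e-9

    for n in (30, 60):
        print(n, 2 ** n * SECONDS_PER_STEP, n ** 7 * SECONDS_PER_STEP)
    # n=30: 2^n -> ~1.07 s,               n^7 -> ~21.9 s
    # n=60: 2^n -> ~1.15e9 s (~36 years), n^7 -> ~2.8e3 s (~47 minutes)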
This is why computer scientists use Big-O notation; the clock speed of the computer is irrelevant if the algorithm scales badly. You never use bubble sort because it scales badly; almost anything else you can come up with will be better. Likewise, if you had a real quantum computer that could run Shor's algorithm, you'd never factor numbers using any other method; Shor's algorithm would always win.
I thank you for your thorough answer.
I am not discussing the importance of the quantum speedup (which was not demonstrated), but rather the constant speedup compared to a single-core CPU (and we know that in real life this difference does not even exist, but we can pretend that it does, ok?).
Google "google 100 million times faster than" and you can see already headlines poping up (for example this from Arstechica http://arstechnica.com/information-technology/2015/12/google... They even do not mention that actually the same problem can be solved on the single core CPU faster than on D-WAVE by using a different algorithm).
So I am dealing with a hypothetical situation where some sort of quantum annealing computer really does have a huge constant speedup compared to a single-core CPU in solving one very important problem.
Imagine that we live in a nation state that does not have access to quantum annealing technology (within a reasonable time frame) but has a state-of-the-art silicon fab.
Could we build a classical cluster of the same speed? How many CPU cores would we need? What if we use GPUs? What if we build a problem-specific chip (an ASIC)?
What if there is no easily parallelizable solution? Could we then even match the problem-solving speed?
I hope that it was clear that even without an asymptotic speedup there are specific cases where a huge constant speedup would matter.
Is a cluster of D-Waves faster than a cluster of CPUs? Maybe on the two problems Google looked at.
All throwing more hardware at a problem gets you is (at best) a linear increase.
Here is a comparison between GPU and ASIC bitcoin mining: http://bitcoin.stackexchange.com/questions/36412/what-is-the... and here is a GPU vs CPU comparison: https://en.bitcoin.it/wiki/Non-specialized_hardware_comparis...
From this, the difference between the GPU and ASIC solutions is about 10^4.
Google tested against a single core of an Intel(R) Xeon(R) E5-1650 CPU, which would be roughly 10^2 slower than a GPU.
So one could say that ASIC solution for bitcoin mining is about 10^6 times faster than single core CPU solution.
A million-times difference is of course not a 100-million-times difference, but it is still a lot. Then again, alternative classical algorithms would beat the current result: "Based on the results presented here, one cannot claim a quantum speedup for D-Wave 2X, as this would require that the quantum processor in question outperforms the best known classical algorithm. ... This is because a variety of heuristic classical algorithms can solve most instances of Chimera structured problems much faster than SA, QMC, and the D-Wave 2X" (from http://arxiv.org/abs/1512.02206).
I think that this is an important and interesting result, but in my opinion it is not as impressive as it may appear.
Saying that, there is one book I like called Quantum Processes, Systems, and Information by Schumacher (who coined the term qubit) and Westmoreland, since it is an introductory text with a good discussion of the basics of quantum computing. It does, however, assume you are a physics student with a good understanding of linear algebra.
I think since so much quantum physics depends on maths, which in turn depends on highly specialised terminology or at least symbols, you are right that a Thing Explainer might have serious difficulty with quantum physics. A broad overview probably wouldn't have a serious advantage over the various lay explanations and thought experiments that are floating around at the moment. But as a non-physics student with an imperfect understanding of linear algebra, I would love to be proven wrong!
I did not major in physics, but I have studied and applied linear algebra and have fairly good intuition for it.
It seems to me then that I should study some intermediate physics (books or courses) before studying the Schumacher book. Do you have any recommendations for that? Thanks in advance.
Introduction to Quantum Mechanics by Griffiths is a standard undergrad text. The math is all "simple" (linear algebra and calculus). It's the learning a new way to think about the world that's the hard part.
also free online
If you have a solid undergraduate education in computer science theory (computability, computational complexity, etc.) and probability, you can get pretty far through the book and take away useful things. But simply being a programmer doesn't mean you'll get very far.
I got pretty far but got lost toward the end. He states a LOT of results and they become hard to remember without working through them more deeply. It's all theory -- there's absolutely nothing like "programming" in the book.
There was a good course on Coursera some years back that I found quite enjoyable and was my first introduction to quantum computing. It serves as an OK intro to quantum mechanics, too, from the abstract point of view where we just assume we have qubits versus the physics point of view where we start by looking at atoms. Maybe the course is still available.
"Finally, on the special Chimera instances with the tall, thin energy barriers, the authors find that the D-Wave 2X reaches the global optimum about 108 times faster than Quantum Monte Carlo running on a single-core classical computer. But, extremely interestingly, they also find that this speedup does not grow with problem size; instead it simply saturates at ~108. In other words, this is a constant-factor speedup rather than an asymptotic one."
10^8 or 100,000,000 is still a constant factor =)