
D-Wave: Truth finally starts to emerge - YAYERKA
http://www.scottaaronson.com/blog/?p=1400
======
kruhft
I like that he addresses his main fear: that if D-Wave does _not_ succeed
after riding the marketing hype-machine, a "QC Winter" would likely follow,
not unlike the "AI Winter" (<http://en.wikipedia.org/wiki/AI_winter>) of the
late 80's. The AI Winter was caused by a flood of overhyped claims and
promises from the companies involved which were never (and could never be)
met, leading to the eventual collapse of the whole industry, which is only now
starting to see a practical recovery with the resurgence of Machine Learning.

D-Wave might be the belle of the ball today, but once the crowd turns, it
could ruin an entire industry for a very long time. That's why skepticism is
important when it comes to their claims.

~~~
Retric
AI had huge successes through the 80's and early 90's despite the two "AI
Winters". However, people generally stop calling something AI once it starts
to work. Consider path-finding: sometimes it's AI and sometimes it's not. It's
true there were shifts in research funding (1974–80 and 1987–93), but plenty
of things that were under the AI umbrella had moved on to their own separate
fields, and a lot of crap that was never going to work was cut.

~~~
ableal
There's also the fact that some "AI" consisted of redoing - poorly, out of
ignorance - what had been done previously in fields such as operations
research, optimal control, computer aided design, etc.

------
bnegreve
For those who don't want to read the whole post, the _Truth_ is that the
3600x speedup claimed in this article: _Commercial quantum computer leaves PC
in the dust_ [1] (HN discussion: [2]) isn't worth much.

Quoting Scott Aaronson's post: _As I said above, at the time McGeoch and
Wang’s paper was released to the media [...] the “highly tuned implementation”
of simulated annealing that they ask for had already been written and tested,
and the result was that it outperformed the D-Wave machine on all instance
sizes tested. In other words, their comparison to CPLEX had already been
superseded by a much more informative comparison—one that gave the “opposite”
result—before it ever became public. For obvious reasons, most press reports
have simply ignored this fact._

[1] [http://www.newscientist.com/article/dn23519-commercial-
quant...](http://www.newscientist.com/article/dn23519-commercial-quantum-
computer-leaves-pc-in-the-dust.html)

[2] <https://news.ycombinator.com/item?id=5697619>

~~~
ghshephard
Geordie's (D-Wave's CTO) rebuttal: [1]

"The majority of that post is simply factually incorrect.

I used to find this stuff vaguely amusing in an irritating kind of way. Now I
just find it boring and I wonder why anyone listens to a guy who’s been wrong
exactly 100% of the time about everything. Update your priors, people!!

If you want to know what’s really going on, listen to the real experts. Like
Hartmut. [2]"

[1] [http://dwave.wordpress.com/2013/05/08/first-ever-head-to-
hea...](http://dwave.wordpress.com/2013/05/08/first-ever-head-to-head-win-in-
speed-for-a-quantum-computer/#comments)

[2] [http://googleresearch.blogspot.ca/2013/05/launching-
quantum-...](http://googleresearch.blogspot.ca/2013/05/launching-quantum-
artificial.html)

~~~
archgoon
[1] contains no explanation of how Aaronson is 100% wrong. It is pure FUD.
He won't actually argue points, because that would allow Aaronson to rebut his
rebuttal.

[2] has little information, and no numbers or comparisons to other
techniques. All it says is "Quantum Computing is Cool! And we're working on
NP-hard problems." It does say that they have a quantum computer from D-Wave,
but they make no claims as to its actual power.

Adding emphasis to the main objective from [2]: "to study how quantum
computing _might_ advance machine learning." They're still researching things.
[1] is a clear attempt to leverage the respectability of Google without
actually addressing Aaronson's points.

~~~
nextbigfuture9
How Aaronson was 100% wrong before:

[http://nextbigfuture.com/2013/05/aaronson-intuition-about-
wh...](http://nextbigfuture.com/2013/05/aaronson-intuition-about-what-
dwave.html)

<http://www.scottaaronson.com/blog/?p=306>

"Even if D-Wave managed to build (say) a coherent 1,024-qubit machine
satisfying all of its design specs, it’s not obvious it would outperform a
classical computer on any problem of practical interest. This is true both
because of the inherent limitations of the adiabatic algorithm, and because of
specific concerns about the Ising spin graph problem. On the other hand, it’s
also not obvious that such a machine wouldn’t outperform a classical computer
on some practical problems. The experiment would be an interesting one! Of
course, this uncertainty — combined with the more immediate uncertainties
about whether D-Wave can build such a machine at all, and indeed, about
whether they can even produce two-qubit entanglement — also means that any
talk of “lining up customers” is comically premature."

* D-Wave built the machines

* Aaronson concedes that D-Wave has achieved entanglement with a quantum annealing system for its full 512 qubits

* there were two big sales (Lockheed, Google)

My own prediction from 2006 <http://longbets.org/266/>

There will be a quantum computer with over 100 qubits of processing capability
sold either as a hardware system or whose use is made available as a
commercial service by Dec 31, 2010.

A 128-qubit system was sold in 2010. Quantum entangled annealing was
demonstrated in the USC paper on the 128-qubit system that was sold.

------
shawabawa3
Probably the biggest point to take away from the article is this:

> For years, I tirelessly repeated that D-Wave hadn’t even provided evidence
> that its qubits were entangled—and that, while you can have entanglement
> with no quantum speedup, you can’t possibly have a quantum speedup without
> at least the capacity to generate entanglement. Now, I’d say, D-Wave finally
> has cleared the evidence-for-entanglement bar—and, while they’re not the
> first to do so with superconducting qubits, they’re certainly the first to
> do so with so many superconducting qubits.

In other words, this is the first actual evidence that D-Wave even has the
_potential_ for quantum speedup.

~~~
nraynaud
I love this idea of having a computing system that might not work as
advertised but still produce the right results in a repeatable manner. When
the ESA had a computing system that did not work as advertised, they blew up
the GNP of an African country and went straight into the textbooks.

~~~
mikeash
That really confuses me. I haven't been following the story, but _how_ can
there possibly be questions over whether the thing is doing what they say it
is? Are the results that ill-defined?

~~~
nraynaud
They are playing with statistics and Monte Carlo stuff. Somehow it would work
in rocket science too: you just have to try a lot of designs, and after a
while a significant portion of them will reach the moon, but the convergence
speed was deemed too low, or the cost too high, I don't remember which.

edit: a more serious answer is in the article: entanglement or not, the
algorithm naturally converges towards the solution. And they did not even
check the convergence speed (I think the quantum version is meant to be
faster); they checked a statistical criterion that differs between the quantum
version and the plain version.

~~~
smoyer
"you just have to try a lot of designs"

That's exactly what Wernher von Braun did with his early rocket designs. He
(intelligently and quickly) iterated, keeping the parts that worked and
getting rid of the parts that didn't.

~~~
nraynaud
Yeah, actually, trying to be a smart rocket scientist can be seen as a
heuristic to make the stochastic process of finding a working rocket design
converge faster.

------
headcanon
It seems to me that all the hyperbole is about raised expectations. Hell, the
fact that they can even get something like this to work, and are able to
determine (somewhat) that it is working the way they intended, is enough to
get me excited. I don't care if the QC they built performs faster or slower
than a classical algorithm, because that stuff will simply come with time as
we understand more about how QC works.

With classical computers, we had several decades of knowledge about how
electromagnetism worked: it was easily observable, and applying those
principles to a complex switching system (already implemented by some
mechanical computers at the time) was relatively straightforward, at least
from a physical standpoint. With QC, it seems to me (I don't really know more
about it than anyone else here) that we are making all sorts of theoretical
discoveries while simultaneously attempting to build a computer with those
discoveries. So D-Wave is basically the modern equivalent of Tesla and Turing,
etc., all wrapped up into one big package. The fact that this stuff is turning
out not to be a complete fairytale is more than enough to get me excited.

~~~
marcosdumay
Yes, the problem is about expectations. D-Wave told everybody that they made a
quantum computer, when in fact they've redefined the term to mean something
else.

Well, that something else is working, and although it's a nice physics
experiment, it still lacks any real-world utility.

------
varelse
A really long time ago when I was in grad school playing with genetic
algorithms and simulated annealing, I implemented something that seems awfully
similar to quantum adiabatic annealing. It worked as follows:

1\. Assign all states of each variable equal probability.

2\. Sample the living crap out of the search space with some sort of energy
function for every possible configuration, scoring the discrete values of the
variables of each individual sample by Boltzmann weighting of the energy
function.

3\. Every so often, update the weights for selecting each variable's potential
values using the accumulated scores for each variable generated during step 2.

4\. Repeat steps 2 through 3 until each variable converges to a single state.
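The four steps above can be sketched in Python roughly like this (a toy
reconstruction, not anything varelse published; the function name and the
bit-counting energy function are illustrative):

```python
import math
import random

def boltzmann_edm(energy, domains, temp=1.0, samples=2000, rounds=30):
    """Steps 1-4 above: keep a probability table per variable, sample
    configurations, and reweight each value by the Boltzmann-weighted
    scores it accumulated."""
    # Step 1: all values of each variable start equally likely.
    probs = [{v: 1.0 / len(dom) for v in dom} for dom in domains]
    for _ in range(rounds):
        scores = [{v: 0.0 for v in dom} for dom in domains]
        # Step 2: sample heavily and score each sampled value.
        for _ in range(samples):
            config = [random.choices(list(p), weights=p.values())[0]
                      for p in probs]
            w = math.exp(-energy(config) / temp)
            for i, v in enumerate(config):
                scores[i][v] += w
        # Step 3: update the selection weights from the scores.
        for i, p in enumerate(probs):
            total = sum(scores[i].values()) or 1.0
            for v in p:
                p[v] = scores[i][v] / total
    # Step 4 (convergence): report the most likely value per variable.
    return [max(p, key=p.get) for p in probs]

# Toy usage: minimize the number of 1-bits in a 6-bit string.
best = boltzmann_edm(energy=lambda c: sum(c), domains=[(0, 1)] * 6)
```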

I never published anything but I learned three things from this process.

1\. It worked like gangbusters to find the space surrounding global optima of
reasonably complex functions

2\. It worked like crap to refine really good solutions into really great
solutions.

3\. It was horribly dependent on the underlying representation of the
variables (i.e. if you mapped the input variables to a spin glass, it was just
_awful_ )

That and I suspect somebody already has a fancy name for exactly what I did
back then...

~~~
sbierwagen
<http://en.wikipedia.org/wiki/Quantum_Monte_Carlo> ?

~~~
prawks
Sounds an awful lot like Monte Carlo to me as well.

~~~
varelse
It's not _quite_ like it, from what I just read; it almost seems like it's
halfway between Quantum Monte Carlo and a multi-armed bandit.

------
becauseracecar
While most of the recent popular coverage has been full of hype, Aaronson
provides a concise summary of what's been going on before taking on the hype,
which appears to have left the poor man at his wit's end. Honestly, I was
rather confused too, as the skepticism D-Wave was met with at the beginning
appears to have been replaced with a lot of hype without any mention of the
actual physics of what's happening.

The position of most of the scientific community at the outset regarding
D-Wave quantum computers was that it was uncertain what was going on at all.
Nobody knew for sure if the D-Wave computers were really using quantum
entanglement when they ran or not. Obviously a computer that does computations
without doing at least some of the weird things allowed by quantum mechanics
wouldn't be much of a quantum computer.

It appears that the D-Wave computers could indeed be taking advantage of
entanglement. However since the D-Wave computers are not very isolated from
their environment, the delicate effects they attempt to harness are sometimes
disrupted when the computer interacts with its environment (aka decoherence to
use the Quantum Mechanics term).

Overall it looks like D-Wave is making some progress on demonstrating their
computer does really harness what's allowed by quantum mechanics. This is
exciting, though ironically they have not caught up to their own overstated
claims of what their machine does. Perhaps with more work they can better
isolate their computer from its environment and graduate from quantum
annealing to reversible adiabatic quantum computing. Or maybe someone else
working with some other physical system which has an intrinsically lower
coupling to its environment might beat them to it. An exciting time for the
field nonetheless.

Getting a speedup on a particular class of problem could have a great deal of
practical importance, but building a scalable computer that fully takes
advantage of everything allowed by the laws of physics is the holy grail of
quantum computing, and it doesn't look like D-Wave is there quite yet.

------
crm416
Aaronson is fabulous. In my opinion, possibly the best technical author around
these days. If you haven't already, his new book[1] is absolutely worth
checking out, especially if you're unfamiliar with quantum.

\---

[1] : [http://www.amazon.com/Quantum-Computing-since-Democritus-
Aar...](http://www.amazon.com/Quantum-Computing-since-Democritus-
Aaronson/dp/0521199565/ref=sr_1_1?ie=UTF8&qid=1368734378&sr=8-1&keywords=scott+aaronson)

~~~
KVFinn
His book is seriously awesome. It's in a weird space technically -- not sure
I'd recommend it to someone who didn't take at least a few math or CS courses
as an undergrad. But if you have, wow, it really shines some new light on
things you thought you probably understood well.

------
smutticus
I'm glad there are people who understand these things. I hope some day when
the field of quantum computing congeals a bit more us mere mortals can begin
to understand it as well.

~~~
gizmo686
Quantum computing is actually surprisingly approachable. Unfortunately, the
written material on it is still mostly in the research-paper format, which is
very rarely useful for mortals. However, many lectures on the subject are
pretty approachable. I used to have a collection of links to Perimeter
Institute videos, but they seem to have 404'ed, and I don't have the time to
dig up the new URLs.

If you are just looking for the theoretical basics, this YouTube series [1]
is a good place to start.

While I do find these videos approachable, they are fairly math heavy. Most of
the lectures I've seen explain things at a level that almost anyone can get
something out of, and you don't miss out if you skip over the stuff that is
beyond you.

For most of the talks, the math involved is only algebra and vectors,
although the vectors are represented using bra-ket notation, which may throw
you off.

[1]<http://www.youtube.com/user/mnielsencourses?feature=watch>
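If bra-ket notation is the sticking point, it helps to remember that a ket is
just a column vector of complex amplitudes. A minimal illustrative sketch (the
variable names are mine):

```python
import math

# |0> and |1> as 2-component amplitude lists (the standard basis).
ket0 = [1 + 0j, 0 + 0j]   # |0>
ket1 = [0 + 0j, 1 + 0j]   # |1>

# A superposition (|0> + |1>) / sqrt(2); the entries are amplitudes.
plus = [(a + b) / math.sqrt(2) for a, b in zip(ket0, ket1)]

def braket(bra, ket):
    """<bra|ket>: the inner product, with the bra entries conjugated."""
    return sum(b.conjugate() * k for b, k in zip(bra, ket))

amplitude = braket(plus, ket0)       # <+|0> = 1/sqrt(2)
probability = abs(amplitude) ** 2    # Born rule: 0.5
```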

~~~
michael_nielsen
Glad you enjoyed the videos (I created them)! Unfortunately, that link has the
videos in reverse order. Here they are in the intended order:

[http://michaelnielsen.org/blog/quantum-computing-for-the-
det...](http://michaelnielsen.org/blog/quantum-computing-for-the-determined/)

The main thing that's needed to follow along is familiarity and comfort with
basic linear algebra.

------
gadetron
Holy Quantum Annealing Qubits Batman!

That article was not for the faint of geek.

~~~
JanezStupar
I understand some of the words, but the article itself reads to me like
ancient Greek.

Can someone explain to me what the hell the whole story is about?

~~~
gjm11
BACKGROUND: Quantum computing

If you are able to get substantial numbers of "quantum bits" to stay entangled
with one another, and hence behave in all those counterintuitive ways quantum
things do, then you can (in principle) use the resulting machinery to perform
some kinds of computations faster than any "conventional" computer can do
them.

Making that actually happen is an enormous engineering challenge. No one's
been able to do it with more than a very few bits, yet.

If they did, it would be a big deal: in particular, something called Shor's
algorithm allows you (in principle) to factorize numbers efficiently on a
quantum computer, and a big enough quantum computer would effectively break
RSA encryption. As an indication of the progress that's been made in practical
quantum computation: the first ever demonstration of Shor's algorithm in
practice was in 2001 when some researchers at IBM managed to use it to find
that 15 = 3x5; there was a major breakthrough in 2011, when the algorithm was
used to find that 21 = 3x7.
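To make the factoring claim concrete: the quantum part of Shor's algorithm
only finds the period r of a^x mod N; the rest is classical number theory. A
sketch with the quantum subroutine replaced by brute-force search (illustrative
only; this gains nothing over trial division):

```python
from math import gcd

def shor_classical_skeleton(N, a):
    """Shor's algorithm with the quantum period-finding step done by
    brute force: find r with a^r = 1 (mod N), then read the factors
    off gcd(a^(r/2) +/- 1, N)."""
    assert gcd(a, N) == 1
    # A quantum computer finds r efficiently; here we just search.
    r, x = 1, a % N
    while x != 1:
        r, x = r + 1, (x * a) % N
    if r % 2 == 1:
        return None  # odd period: retry with a different a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None  # trivial square root: retry with a different a
    return gcd(y - 1, N), gcd(y + 1, N)

# The 2001 IBM demo's instance: 15 = 3 x 5 (base a = 7 works).
factors = shor_classical_skeleton(15, 7)  # -> (3, 5)
```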

Some other varieties of public-key encryption are not, so far as anyone
currently knows, broken once we have quantum computers. But exactly what
quantum computers are capable of is a very open question.

BACKGROUND: D-Wave

There's a company called D-Wave that, for years now, has been touting what
they claim is a quantum computer of an unconventional design, very different
from what most quantum computing researchers have been trying to do (or trying
to analyse the capabilities of). They have claimed that their machine works
with hundreds of (qu)bits, whereas no one else is using more than, say, ten.
They have attracted a lot of media attention and a lot of money.

(Their machine allegedly does something called "adiabatic quantum computing",
which somewhat resembles the non-quantum optimization process called simulated
annealing. It may or may not actually be more powerful than simulated
annealing.)
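For reference, classical simulated annealing, the process D-Wave's machine is
said to resemble, fits in a few lines. A minimal illustrative sketch (the
parameter values are arbitrary):

```python
import math
import random

def simulated_annealing(energy, neighbor, state,
                        t0=5.0, cooling=0.995, steps=5000):
    """Classical simulated annealing: always accept downhill moves,
    accept uphill moves with probability exp(-dE/T), and cool T."""
    temp = t0
    cur, cur_e = state, energy(state)
    best, best_e = cur, cur_e
    for _ in range(steps):
        cand = neighbor(cur)
        cand_e = energy(cand)
        if cand_e <= cur_e or random.random() < math.exp((cur_e - cand_e) / temp):
            cur, cur_e = cand, cand_e
            if cur_e < best_e:
                best, best_e = cur, cur_e
        temp *= cooling
    return best, best_e

# Toy usage: minimize x^2 over the integers using +/-1 moves.
x, e = simulated_annealing(lambda s: s * s,
                           lambda s: s + random.choice((-1, 1)),
                           state=40)
```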

Until very recently, D-Wave (despite their great _media_ success) had provided
no evidence at all that their machine actually does anything "genuinely
quantum", or that it is able to do anything that can't be done just as well
with conventional classical computers costing much, much less than their
machine.

Scott Aaronson (a young but already eminent researcher in the theory of
quantum computation) has long been a leading critic of D-Wave, countering
their hype (and that of those in the media who like their story) with patient
skepticism and careful analysis.

A couple of years ago, D-Wave (for the first time) offered some actual
evidence for actual quantum effects having some actual contribution to the
behaviour of their machine. Aaronson's post about this --
<http://www.scottaaronson.com/blog/?p=639> \-- said (among other things) "I
hereby announce my retirement as Chief D-Wave Skeptic".

WHAT'S GOING ON NOW

In the last few days there have been breathless reports in the media about how
D-Wave's machine has been found to be _thousands of times faster_ (at solving
a single particular problem, the one it was designed to solve) than
conventional computers. These reports have been discussed here on HN, too.

So Aaronson is back to debunking D-Wave hype. First, though, the good news: it
does appear that this latest work gives some evidence that D-Wave's machine is
genuinely doing something quantum. Specifically, some researchers have taken
the same kind of problems that D-Wave's machine solves, and compared the
performance of D-Wave's machine with (1) an algorithm called "quantum Monte
Carlo", which is approximately a simulation (on conventional classical
computers) of the particular quantum thing it's alleged to be doing and (2) a
conventional computer doing ordinary simulated annealing. They found that the
performance characteristics -- which problems are easier to solve and which
harder, and by how much -- match up well between D-Wave's machine and the
simulation of the quantum process it's meant to be an implementation of,
whereas classical simulated annealing doesn't match at all well. So it does
seem pretty likely that D-Wave's machine is doing roughly what D-Wave say it
is, and that this truly is a quantum effect. Yay!

The bad news, part 1: This particular quantum phenomenon turns out to be one
that can be efficiently and accurately simulated using ordinary classical
computers. In other words, in so far as D-Wave's machine is really doing that,
it offers no prospect of a more-than-constant-factor speedup relative to
conventional, "non-quantum" digital computers. (The quotation marks are
because actually semiconductors, as used in all integrated circuits, are
fundamentally quantum devices. But they don't exploit quantum coherence in the
sort of way quantum computers do.)

The bad news, part 2: At the same time as one researcher was comparing
D-Wave's machine against a bunch of classical optimization algorithms and
finding that D-Wave's machine performs much better, another researcher was
comparing it against a _different_ classical optimization algorithm, namely
(you guessed it) simulated annealing -- and finding that simulated annealing
actually solves the problems just as well as D-Wave's machine, but much faster
and on cheaper hardware.

CONCLUSION

(For the avoidance of doubt, this is my summary of what Aaronson says; I think
he is almost certainly right because he demonstrably knows his stuff, but I'm
in no position to give any endorsement beyond that.)

D-Wave do seem to have a genuine quantum device. However, it doesn't seem to
be a _quantum computer_ in the sense of something that exploits quantum
effects to do computation faster than a classical device can do by more than a
constant factor, and the recent hype about their machine is very misleading.

[EDITED to fix a goof where I missed out half a sentence.]

~~~
blazespin
[http://dwave.wordpress.com/2013/05/08/first-ever-head-to-
hea...](http://dwave.wordpress.com/2013/05/08/first-ever-head-to-head-win-in-
speed-for-a-quantum-computer/#comments)

Geordie's reply to Scott's blog: "The majority of that post is simply
factually incorrect.

As one example, Troyer hasn’t even had access yet to the system Cathy
benchmarked (the Vesuvius – based system). (!) Yes Rainier could be beat by
dedicated solvers — it was really slow! Vesuvius can’t (at least for certain
types of problems). Another is he thinks we only benchmarked against cplex
(not true) and he thinks cplex is just an exact solver (not true). These types
of gross misunderstanding permeate the whole thing.

I used to find this stuff vaguely amusing in an irritating kind of way. Now I
just find it boring and I wonder why anyone listens to a guy who’s been wrong
exactly 100% of the time about everything. Update your priors, people!!

If you want to know what’s really going on, listen to the real experts. Like
Hartmut.

[http://googleresearch.blogspot.ca/2013/05/launching-
quantum-...](http://googleresearch.blogspot.ca/2013/05/launching-quantum-
artificial.html)

~~~
cantos
As far as I can tell, nothing in your second link is inconsistent with
anything Scott has written.

~~~
blazespin
Not mine, Geordie's!

~~~
cantos
I see now. I guess since there wasn't a closing quotation mark my brain just
decided to invent one at a random point.

------
nraynaud
Is there a SDK? I'd like to port a NES simulator.

(there is a conjecture that anything remotely able to do an addition will be
used for emulating a Nintendo console)

~~~
iguana
Someone will ask how many Bitcoins this computer can mine any minute now!

~~~
DanBC
We've had that already!

Some of the comments in this thread talked about it.
(<https://news.ycombinator.com/item?id=5697619>)

I learned a bit, so it's not too bad. :-)

------
DanBC
PR departments have to release good publicity. Going too far is sleazy.

So is the recent D-Wave stuff just a combination of good PR and lazy journos,
or are D-Wave being sleazy?

I guess it doesn't help that most people have no idea about quantum anything
(let alone computing); or about massively parallel or P=NP etc.

~~~
marshray
There's an old joke a computer salesmen once told me:

Q: What's the difference between a computer salesman and a used car salesman?

A: The used car salesman knows when he's lying.

~~~
marshray
Q: What's the difference between a classical computer salesman and a quantum
computer salesman?

~~~
SideburnsOfDoom
Is the quantum computer salesman in a superposition of truth-states?

------
mtdewcmu
Quantum computing reminds me of power from nuclear fusion: it's always 15
years away. I was awfully surprised when I read that D-Wave was already
selling quantum computers. I'm glad that someone is willing to counter the
hype. I find truth more interesting than fiction.

~~~
kyzyl
Coincidentally, D-Wave's offices are located in Burnaby, BC, about a 10-minute
drive from the offices of General Fusion, another company trying to crack the
power-from-nuclear-fusion nut. They too have gotten a lot of flak for their
claims about their technology, although I haven't heard much from that end of
town lately.

------
blazespin
Important comment from Geordie on the dwave blog:

"It’s absolutely possible that adding active error correction might help at
some point, maybe even in the next generation. If that is the case, we’ll
certainly try anything anyone can think of to make the processors work better!
In the specific case of exhibiting scaling differences over conventional
algorithms, I’d bet we don’t need error correction (at least at the current
snapshot) but at the next level (say 2000+ qubits) maybe we might. If we do,
no problem — we’ll find a way!"

[http://dwave.wordpress.com/2013/05/08/first-ever-head-to-
hea...](http://dwave.wordpress.com/2013/05/08/first-ever-head-to-head-win-in-
speed-for-a-quantum-computer/#comment-23435)

------
rwmj
This thing wasn't delivered by aliens. For $10m isn't there an instruction
book that describes what it's supposed to do? Can't they take it apart and
look inside it?

~~~
yew
We know what it's _supposed_ to do. We don't know what it's actually _doing_
\- and neither do the people who built it, not with any certainty.

This is a relatively ill-understood discipline and a very fragile machine.
'Take it apart and look inside' won't tell you much about what happens when
you turn it on. In that regard it might as well have been built by aliens.
(Okay, it's probably not quite that bad.)

------
mikecane
OK, so the problem D-Wave has itself defined can be run faster on classical
machinery. Does this also hold true for other problems run on the D-Wave?

~~~
amalcon
The way one solves other problems on the D-Wave is by reducing them to the
D-Wave problem, so save in extraordinary circumstances the answer will be
"Yes."
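For instance, max-cut reduces directly to the Ising objective the machine
minimizes. A toy sketch (brute force stands in for the annealer here; the
function names are mine):

```python
from itertools import product

def ising_energy(J, spins):
    """E = sum over couplings of J[i,j] * s_i * s_j (no field term)."""
    return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

def max_cut_via_ising(edges, n):
    """Reduce max-cut to an Ising ground state: antiferromagnetic
    couplings (J = +1) make cut edges (opposite spins) lower the
    energy. Solved by brute force here; D-Wave anneals instead."""
    J = {(i, j): 1.0 for i, j in edges}
    best = min(product((-1, +1), repeat=n),
               key=lambda s: ising_energy(J, s))
    cut_size = sum(1 for i, j in edges if best[i] != best[j])
    return best, cut_size

# Toy usage: on a 4-cycle the max cut contains all 4 edges.
spins, cut = max_cut_via_ising([(0, 1), (1, 2), (2, 3), (3, 0)], n=4)
```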

~~~
mturmon
The "D-Wave problem" (as Aaronson calls it) is kind of peculiar.

It's a certain kind of binary optimization problem that comes up in models of
magnetic media ("Ising model"). In the original magnetic context, the two
states are N and S. Each magnetic element within a 2D array of states jiggles
around locally trying to align with its neighbors, and by doing so, each
state-flip influences a global energy function.
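That local jiggling is exactly what a Metropolis-style simulation of the model
does. A rough illustrative sketch of one sweep over a small 2D lattice
(assuming free boundaries and ferromagnetic couplings):

```python
import math
import random

def metropolis_sweep(grid, temp):
    """One sweep of local 'jiggling': each trial picks a site and
    flips it according to the Boltzmann rule, so spins tend to align
    with their neighbors, exactly as described above."""
    n = len(grid)
    for _ in range(n * n):
        i, j = random.randrange(n), random.randrange(n)
        # Energy change of flipping s_ij (nearest neighbors on the
        # 2D grid, free boundaries, ferromagnetic J = 1).
        nb = sum(grid[x][y]
                 for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                 if 0 <= x < n and 0 <= y < n)
        dE = 2 * grid[i][j] * nb
        if dE <= 0 or random.random() < math.exp(-dE / temp):
            grid[i][j] *= -1
    return grid

# Toy usage: at low temperature an 8x8 lattice tends to order.
grid = [[random.choice((-1, 1)) for _ in range(8)] for _ in range(8)]
for _ in range(200):
    metropolis_sweep(grid, temp=1.0)
magnetization = abs(sum(map(sum, grid))) / 64
```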

The same problem comes up in image processing. The probabilistic equations
describing segmentation of on-object versus off-object regions are the same
mathematically as the magnetic energy function of the Ising model.

This particular problem is important in its niche, but it's really not that
general. It's possible that even something as trivial (analytically) as going
from 2 to 3 states will not generalize well to the D-wave hardware.

And it's possible that, by the time you transform a given problem (say, graph
matching) to encode it in this model, you end up with either (a) something
with more variables than the original problem, (b) something with exotic
parameter settings that the D-wave hardware cannot handle, or (c) a model
having an energy surface that is not well-suited to the particular D-Wave
computational mechanism (annealing).

In some applications, linear programming relaxations (I have not read the
detailed paper, but I assume this is the CPLEX result discussed in the post)
are much slower than annealing; in others, LP relaxations are more
competitive. Sometimes the LP relaxations give much better results, but for
the Ising problem they tend to be much, much slower.

~~~
8080
Also note that the topology of D-Wave's architecture strongly limits the
instances of the Ising model you can implement, so the blowup in instance
size happens twice: once going from your problem to the Ising model, and once
mapping that Ising problem onto D-Wave's architecture.

------
Zarathust
Can anyone attempt to explain this in layman's terms? I can't even figure out
if this is a real paper or not!

~~~
amalcon
I'm not an expert on quantum computing, but it looks like two points to me:

1) D-Wave devices have finally been proven to meet a fundamental requirement
to do quantum computing. This is a significant scientific advancement.

2) Despite this, they don't seem to answer any computational questions more
quickly than commodity electronic computers.

Again, I'm not an expert. The author's reasoning does seem sound. Still, I'm
not qualified to assess either of those claims directly.

~~~
marcosdumay
> a fundamental requirement to do quantum computing

Yes, and it's a necessary but not sufficient requirement. They don't have a
quantum computer by the usual meaning of that term, but they have a different
kind of computer that may or may not do something better than a normal
computer.

------
sturob
TLDR: <http://mlkshk.com/p/MQLE>

