A Polynomial Time Bounded-Error Quantum Algorithm for Boolean Satisfiability (arxiv.org)
148 points by user_235711 849 days ago | 125 comments



[Note: I am a master's student doing my thesis in this area]

Ok, I'm going to go out on a limb here and call bullshit, based on the meta-reason that they cite a paper of theirs, published in May 2015, which already claims to show that NP is contained in BQP using a different NP-complete problem (which is basically the same groundbreaking result as this paper), but which somehow completely failed to make waves in the community.

Additionally, on a superficial reading of the paper nothing seems wrong with it, except that it has this air of everything being too easy, too novel, not building on any widely recognized partial results. Considering this problem has been attacked by hundreds (?) of researchers over the years, it seems highly implausible that they just happened to find this one weird trick nobody had thought of before.

Anyway, those are just heuristics, and I'd love to be wrong on this occasion, but life is too short to give my full attention to a dubious paper for more than 15 minutes. I could attempt to whip up this algorithm in a quantum simulator, but that would take a considerable amount of time to do right, so I'll let somebody else do the work of debunking/confirming the result.


After quickly reading through the paper it seems that the algorithm they employ is very similar to Grover search [1], which provides a quadratic/square-root speed-up for generic search problems, so O(2^n) -> O(2^(n/2)). Basically, they perform the following three steps (a toy sketch of steps 1 and 2 follows after step 3):

1) Generate an even superposition of all possible solution states using a Hadamard transform (so go from |000...0> to 1/sqrt(N)*(|000...0> + |000...1> + ... + |111...1>)). This is pretty standard and used in almost all quantum algorithms.

2) Evaluate each clause of their Boolean E3-CNF function on the superposed input state and transfer the results to a register of control qubits, which yields an entangled state of the input state qubits with these control qubits (akin to the Oracle operator in the Grover search algorithm that performs the operation |x>|0> -> |x>|f(x)>, only here they use multiple Oracle functions and control qubits [which is fine of course]). This is a classical operation which we can implement in reversible computing btw, so no quantum magic involved here yet (apart from applying it to a superposed input state).

3) Increase the amplitude of the states that contain the solution to the search problem, very much akin to the rotation operator employed in Grover search that transfers amplitude from non-solution states to the solution state. <- The problem might be here.
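
For concreteness, here is a minimal classical statevector sketch of steps 1 and 2 (my own toy illustration with numpy, using the small example formula (x0 v x1 v x2) & (~x0 v ~x1 v ~x2), not the paper's circuit):

    # A toy classical statevector picture of steps 1 and 2 (numpy only).
    import numpy as np

    n = 3                                   # number of Boolean variables
    N = 2 ** n

    # Step 1: Hadamard transform of |000> = uniform superposition over all x.
    amplitudes = np.full(N, 1 / np.sqrt(N))

    # Step 2: evaluate every clause classically for every basis state x.
    # This is what the (reversible) clause oracles compute into the control qubits.
    clauses = [
        lambda b: b[0] or b[1] or b[2],                    # (x0 v x1 v x2)
        lambda b: (not b[0]) or (not b[1]) or (not b[2]),  # (~x0 v ~x1 v ~x2)
    ]
    for x in range(N):
        bits = [(x >> i) & 1 for i in range(n)]            # bits[i] = value of x_i
        c = [int(cl(bits)) for cl in clauses]              # the clause register C(x)
        print(bits, "C(x) =", c, "prob =", round(amplitudes[x] ** 2, 3))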

So where I have trouble understanding the algorithm is when they apply the operator M_x to increase "the amplitude of the truth vector of clauses with maximum number of satisfied clauses using a partial negation and iterative measurement technique" (page 6 ff.). They seem to want to repeatedly apply that operator to the same state in order to maximize the amplitude transfer to the correct solution states. However, there is a rather simple proof that shows that the maximum amount of amplitude which you can transfer from one component of a quantum state to another using a control qubit is fundamentally limited by the unitary nature of quantum operations. This also limits the efficiency of quantum search in the general case and is the reason that we cannot achieve arbitrary speed-up in the Grover algorithm (see e.g. my PhD thesis for a simple explanation [2] or Grover's original paper to get the full picture [3]). Basically this proof states that although we can generate the quantum information about the answer that we're looking for, we cannot get that information out of the quantum system in an (exponentially) efficient way.

It would be very exciting if this algorithm were correct, of course; I just can't imagine that someone would publish such an important finding on arXiv.org without having it cross-checked and backed by some other people in the field first (which does not seem to be the case judging from the text).

That being said, I think it should not be too difficult to simulate this algorithm and see whether all of their transformations are actually unitary and possible to implement in reversible computing, and whether they correctly take into account the entanglement of individual control qubits and the corresponding wave function collapse due to the qubit measurements they perform.

[1] Grover Search Algorithm - https://en.wikipedia.org/wiki/Grover%27s_algorithm

[2] My PhD thesis on an Experimental Realization of Grover's Algorithm - see page 136 ff. for a simple explanation of the efficiency of quantum searching http://iramis.cea.fr/spec/Pres/Quantro/static/wp-content/upl...

[3] Grover's Algorithm - Original Paper http://arxiv.org/abs/quant-ph/9605043

[4] The No-Cloning Theorem: https://en.wikipedia.org/wiki/No-cloning_theorem


> However, there is a rather simple proof that shows that the maximum amount of amplitude which you can transfer from one component of a quantum state to another using a control qubit is fundamentally limited by the Unitarian nature of quantum operations.

The "weird trick" is that they do non unitary things via partial measurement to beat that bound.

There might well be hidden costs here. Or judging from what others have said, it might just be a more straightforward mistake.


> The "weird trick" is that they do non unitary things via partial measurement to beat that bound.

Heh? Unitarity is basically the defining characteristic of quantum mechanics. You can't just casually violate that.


I'm not an expert in this subject area, but I know that I've seen the phenomena sometimes called "wavefunction collapse" described as non-unitary (and for good reason). Even if you buy into an interpretation of QM that doesn't include collapse as a separate process (most non-Copenhagen interpretations, in other words), measurements of the quantum state will still have to somehow look non-unitary to any given observer. (Something something projection operators, in the formulation I learned in grad school.)

There are definitely some interesting tricks you can do to exploit these behaviors. (I'm thinking for example of the "quantum Zeno effect", in which one can "find an answer without ever asking the question".[0]) But I don't know remotely enough about quantum computing to know what's actually possible in this context.

[0] A fun writeup on this idea is here (with an example involving an attempt not to wake any adorable sleeping puppies): http://www.preposterousuniverse.com/blog/2006/02/27/quantum-...


That is not correct. Quantum mechanics has both a unitary and a non-unitary evolution. The non-unitary evolution is quantum measurement/collapse:

http://www.wikiwand.com/en/Wave_function_collapse

How to reconcile them in a satisfactory theory is an open issue, but non-unitary measurements play a crucial role in many quantum protocols, including, for example, quantum teleportation.


Basically, if we could look at the wave function of the whole universe, it would always behave in a unitary way. Non-unitary behavior (such as qubit readouts) is the result of coupling our initially isolated quantum system to an external system that has many degrees of freedom, which causes decoherence of the individual components of the original system's wave function. A measurement operation in that sense is an entanglement of the original system with an external quantum system, followed by decoherence of the wave function due to the coupling of the external system to another one with a large number of degrees of freedom (which destroys the interference between the components of the wave function and thus turns a quantum state into a "classical" state). The role of decoherence and entanglement was not well understood for a long time and led to many of the seeming paradoxes of quantum mechanics, but today the theory of "open quantum systems" explains that behavior quite well, and one can even simulate such systems using e.g. a so-called "master equation" approach. What's important to remember here is that the whole system (the initial quantum system + environment) always behaves in a unitary way, but this need not be true when looking at individual parts of the wave function in isolation.


> Basically, if we could look at the wave function of the whole universe, it would always behave in a unitary way.

There is absolutely no consensus on this. This is merely saying that you believe in Everett style interpretations, which have so far failed to explain the appearance of non-linear processes in quantum mechanics satisfactorily.

Decoherence does _not_ solve the quantum measurement problem. You cannot derive the Born rule from it.

Basically what decoherence does for you is translate a quantum amplitude on the space of operators into a classical distribution on the spectrum of your coupling. It does not tell you why you are allowed to interpret this distribution as a probability distribution. It does not tell you why you should be able to say that the state with higher amplitude is more likely to occur. In decoherence all states occur. So it is even unclear a priori what should be meant by the probability of a state occurring.

This is not my private opinion, this is the opinion of people like Zeh, who was instrumental in developing our current understanding of decoherence:

"[Decoherence] would explain why we never observe an apparatus pointing, say, to two different results, i.e. decoherence would provide a solution to the measurement problem of quantum mechanics. As pointed out by many authors, however (e.g. Adler 2003; Zeh 1995, pp. 14–15), this claim is not tenable."

http://plato.stanford.edu/entries/qm-decoherence/#SolMeaPro

Edit: Note that this is not in defense of the correctness of the above paper.


> which have so far failed to explain the appearance of non-linear processes in quantum mechanics satisfactorily.

There's a famous theorem that any non-linearity in quantum mechanics (however small) would permit FTL information transfer. Could you link me one paper (either from a reputable journal or a highly cited article on the arXiv) that demonstrates non-linearity other than the exception of instantaneous collapse via measurement?


I wasn't talking about non-linearity other than the one you mention. That should be abundantly obvious from what I posted.

You can't have quantum computation without measurement, and thus without non-linearity. Quantum computation is not simply "doing the computation in each possible world", it also is some trick to extract (collapse) the information into one particular world, and I don't think we currently understand fully what that means, but it's certainly non-linear. After all the information is often in the amplitude.

Conversely there are measurement based models for quantum computation that rely purely on the non-linear process.


As far as I know, in any recent QM lecture or textbook a measurement operation is described as an entanglement of the original system with an external quantum system with a very large number of degrees of freedom. In older texts you would just find Born's rule.


^Yep, basically what I was going to say, but I forgot to reply more promptly. Thanks.


So, I don't have the theory background a PhD candidate might, so forgive my ignorance, but why would a theory be expected to build on the already proven partial results? I was under the impression their use was as a quality heuristic, not something a proof of a lower bound of the general case could make much use of.


Generally because that’s how mathematics tends to work - people publish a partial result & then they or others build on it to improve that result in various ways[1]. The authors are also usually embedded in a wider community of mathematicians who act as a pre-publication bullshit filter by asking pointed questions at seminars & that kind of thing.

It’s quite rare for a genuinely novel solution to a major problem in mathematics to spring up out of nowhere in the literature.

[1] Think of, e.g., the Twin Prime conjecture (that there are infinitely many pairs of primes only 2 integers apart). For decades there was no progress on this problem, before Yitang Zhang published a paper last year proving that there are infinitely many pairs of primes less than a finite bound apart (70 million in his case). Then a horde of mathematicians attacked the problem using his & other new approaches to progressively reduce that large bound, down to the point where it's now been proven that there are infinitely many pairs of primes < 246 apart (according to the polymath group - I'm not sure whether this is a published result yet). The twin prime conjecture is in sight!


Yea, but to my understanding the partial work done thus far does not apply to an understanding of the general case solution. Let's say there is a linear solution for some form--how does that help establish a lower bound on the general form?

Recognition of past work I can understand. But it seems like Occam would argue with me against the use of some of the partial results without understanding why they would be useful.


> I could attempt to whip up this algorithm in a quantum simulator

This is the first time I've heard of a quantum simulator. How expensive would it be to run the algorithm in one?


A quantum computer can be simulated by a classical computer with exponential slowdown.

Here are some quantum computer simulators: http://www.quantiki.org/wiki/List_of_QC_simulators


No mention of LIQUi|> (pronounced liquid) in this list. http://research.microsoft.com/en-us/projects/liquid/ MSR claims to have the most advanced / fastest quantum computer simulator.


As I understand, LIQUi|> is not publicly available. Is it?


I believe you are correct, so you cannot just hobby-hack on it, but if you have a serious purpose I think the team will listen to you.

In any event "List of QC simulators" appears to be just that, a list, with no stipulation that it contains only open source software, etc.


Another one that seems missing is http://www.quantumplayground.net, which runs in a browser and has tutorials on how to write programs.

EDIT: spelling


It's just an exponential overhead. The state of a quantum computer is a superposition over all bitstrings, i.e. a vector of size 2^n for n qubits, and you have to implement quantum operators by multiplying with 2^n x 2^n unitary matrices (you better do that implicitly whenever possible or it will be very slow).

After taking a lecture on quantum computing, it was actually a weekend project to implement a basic quantum simulator myself, very useful to confirm that I actually understood the model as well as I thought.
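
If anyone wants a feel for what such a weekend simulator looks like, here is a bare-bones sketch (my own, with numpy; it builds explicit 2^n x 2^n matrices, so it is simple but slow):

    # A bare-bones n-qubit statevector simulator (explicit matrices: simple, slow).
    import numpy as np

    def apply_gate(state, gate, k, n):
        """Apply a 2x2 gate to qubit k of an n-qubit statevector."""
        op = np.array([[1.0 + 0j]])
        for i in range(n):
            op = np.kron(op, gate if i == k else np.eye(2))
        return op @ state

    n = 3
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                              # start in |000>

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for k in range(n):                          # Hadamard on every qubit
        state = apply_gate(state, H, k, n)

    print(np.round(state, 3))                   # all amplitudes 1/sqrt(8) ~ 0.354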


In fact, you don't even have to understand quantum mechanics to create a quantum simulator. Mostly just linear algebra, and how states evolve through time, given some Hamiltonian.
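
For example, a toy illustration of that point (my own sketch, assuming numpy/scipy): evolve a single spin under the Hamiltonian H = X for a time t via U = exp(-iHt).

    # Time evolution of one spin under H = X (Pauli-X), via U = exp(-i H t).
    import numpy as np
    from scipy.linalg import expm

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    psi = np.array([1, 0], dtype=complex)                    # start in |0>

    for t in (0.0, np.pi / 4, np.pi / 2):
        U = expm(-1j * X * t)                                # unitary evolution operator
        print(round(t, 3), np.round(np.abs(U @ psi) ** 2, 3))  # P(|0>), P(|1>)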


This sounds really interesting! Do you by any chance have a link to some useful resources?




algorias: I finally managed to simulate the algorithm on a handy quantum simulator, jaQuzzi 0.1 (http://www.eng.buffalo.edu/~phygons/jaQuzzi/).

As far as I can see, the algorithm really works. Below is the code for the example shown in Fig 5, (x0 v x1 v x2)&(~x0 v ~x1 v ~x2). You can try it yourself; here are the steps:

1- copy the below code to a text editor and save it with extension .jaq.

2- open the file with jaQuzzi.

3- set |c0>=|1>,|c1>=|1>,|d0>=|1>

4- run the circuit.

5- select |c0> and |c1> and press "plot probability chart" to open the "chart center" and see the correct answer for the c's.

6- select |x0>, |x1> and |x2> and press "plot probability chart" to open the "chart center" and see the probability for the superposition of the correct answer of the x's.

Hints:

-You can trace the circuit by selecting |c0> and |c1> and pressing "plot probability chart" to open the "chart center", resetting the circuit and then pressing "step forward" to watch the probability of the solution increase during the loop.

-Don't forget to select auto in the "chart center".

If you want to play around,

1-set the number of x’s and c’s as required.

2-add the dummy qubits.

3-initialize c’s and d’s to |1> (required number of d’s calculated using equations in the paper)

4- set V = [0.5(1+exp(i*pi/k)), 0.5(1-exp(i*pi/k)); 0.5(1-exp(i*pi/k)), 0.5(1+exp(i*pi/k))], where k is the number of c's and d's (see the quick check below).

(you have to ungroup the set of gates before you modify V, and group again)

5- set the number of iterations in the properties of the group.

The code in another comment...have fun!!!
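
(As a quick sanity check, assuming numpy, the V matrix in the posted circuit does match this formula with k = 3, i.e. two c qubits plus one d qubit:)

    import numpy as np

    k = 3                                   # number of c's and d's in the example
    a = 0.5 * (1 + np.exp(1j * np.pi / k))  # diagonal entry of V
    b = 0.5 * (1 - np.exp(1j * np.pi / k))  # off-diagonal entry of V
    print(a, b)                             # ~0.75+0.433i and ~0.25-0.433i, as in the circuit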


V=[0.75+0.4330127018922193i 0.24999999999999994-0.4330127018922193i , 0.24999999999999994-0.4330127018922193i 0.75+0.4330127018922193i ]

qubit_0="x0"

qubit_1="x1"

qubit_2="x2"

qubit_3="c0"

qubit_4="c1"

qubit_5="d0"

qubit_6="ax"

gateproperty("group_gate10gate11gate12gate13gate14_6068",reps=10.0)

gate0={H:-:-:-:-:-:-}

gate1={-:H:-:-:-:-:-}

gate2={-:-:H:-:-:-:-}

gate3={1:1:1:NOT:-:-:-}

gate4={NOT:-:-:-:-:-:-}

gate5={-:NOT:-:-:-:-:-}

gate6={-:-:NOT:-:-:-:-}

gate7={1:1:1:-:NOT:-:-}

gate8={-:-:NOT:-:-:-:-}

gate9={-:NOT:-:-:-:-:-}

group_gate10gate11gate12gate13gate14_6068.gate10={-:-:-:1:-:-:V}

group_gate10gate11gate12gate13gate14_6068.gate11={-:-:-:-:1:-:V}

group_gate10gate11gate12gate13gate14_6068.gate12={-:-:-:-:-:1:V}

group_gate10gate11gate12gate13gate14_6068.gate13={-:-:-:-:-:-:!}

group_gate10gate11gate12gate13gate14_6068.gate14_6068={-:-:-:-:-:-:NOT}

gate17={-:-:-:!:-:-:-}

gate14={-:-:-:-:!:-:-}


Is the "!" showing in some of the gate definitions a post-selection operation on the corresponding qubit?


The "!" is the "partial measurement" operator. I am not expert in the field but as far as I understand the partial measurement is to collapse the superposition to one of the states relative to its amplitude (probability), while the post-selection operation is to choose the output state regardless its amplitude (probability). For example, measurement on a qubit in state 1/sqrt(2)(|0>+|1>) might give |0> or |1> with 50% chance, while to postselect |0> is to get |0> with 100% regardless its amplitude. Postselection gives you the power to choose the outcomes of certain measurements while normal measurement doesn't give you that power.


Could you provide a link to this paper?


Sure. The May 2015 paper is http://arxiv.org/abs/1505.06284, as easily found by reading the paper. (It is reference 13.)

As for "but this somehow completely failed to make waves in the community", I think "but life is too short to give my full attention to a dubious paper for more than 15 minutes" is an enough explanation.


I think he means this one: http://arxiv.org/abs/1505.06284


Heuristics like that are a dangerous thing. Have you seen the line of reasoning used in the paper before, and what is the error?


Why comment when you don't have the time for more than superficial analysis?


Because experts with domain-specific knowledge tend to have better heuristics for this sort of thing than the average interested outsider. (They have to, or else they'd get nothing done except debunking flawed claim after flawed claim. Which might be worthwhile, but doesn't move the field forward.)

I've seen plenty of "theories of particle physics" that evidently look fairly plausible to non-physicists (and maybe even to non-specialists) that I can recognize immediately as crackpottery by my own heuristics (but that would take at least a weekend's work to convincingly demonstrate as such, if I wanted to invest that kind of time: I've done that, too). I'm not saying that anyone should take anyone's heuristic guesses in such cases as gospel (once or twice a century we might actually get a delightful surprise), but they can serve as a useful restraining influence if you're tempted to get excited about a headline.


An expert would be able to skim it, recognize the path of reasoning as something he's seen before, and narrow down on the error quickly. Reasoning about something by how many "waves in the community" it has made is a deference of personal judgement to the social network an individual is embedded in, which is a useful shortcut when you are not an expert relative to your peers, but a dangerous frame of mind to operate in for any length of time, as it can be the basis of cults and the like.


> An expert would be able to skim it, recognize the path of reasoning as something he's seen before, and narrow down on the error quickly.

This is almost exactly equivalent to saying that an expert programmer should be able to determine whether an unfamiliar, complicated code base contains a bug by skimming the source. The hard bugs are going to be subtle.


Not at all equivalent, because code isn't meant to be human readable, but computer interpretable. Because of these different goals, computer code is actually quite a bit harder to read than a well structured academic paper. I think programmers have a lot to learn about logical structure from writing meant to be read by humans [1].

Note how many people on this thread seem to be understanding and discussing its contents, whereas if I were to post 13 pages of "unfamiliar, complicated code" that claimed to do something, I can't imagine anyone would help me debug it.

1. Knuth expounds on this idea here: https://en.wikipedia.org/wiki/Literate_programming


> complicated code base

So then the answer is certainly "yes" with the same level of certainty that algorias is claiming. Large, complicated solutions are inherently likely to have subtle flaws.


I agree, but not all bugs are subtle. This seems to be the equivalent of someone having trouble with "hello world" because they forgot to include iostream.


I think debunking crackpottery is helpful for society, at least. I'd even argue that it helps the specific scientific field in question to move forward if the claimer is an active researcher who is wasting their time by not realizing it's wrong. Sceptically scrutinizing claims is what moves us forward. Of course, one has to choose one's battles, but if those who can debunk crackpottery do so from time to time, then I think the general picture of science gets better. I think public interaction is important for science: the constructive lectures on what we know as well as the destruction of false claims.


According to that criterion, no one should comment on HN ever, unless they are writing at the level of a peer reviewed journal.

I suppose it would be interesting for people here for me to at least point out some obvious red flags.


Because even the superficial analysis provides more than zero useful information. (Nothing super-obviously wrong. Nothing that looks clever enough to overcome the guess that, none the less, something's probably wrong.)


'This relies on their earlier paper from May 2015, and if that paper was correct it would have made big waves' is valuable context.


Eq. 16.

As a grad student in quantum computing we used to have a game where we would race to see who could be the first one to spot the flaw in a paper that claimed an efficient quantum polynomial time algorithm for NP-complete problems.

My bet after a few minutes looking at the paper is Eq. 16. Eq. 15 is computing the clauses with a traditional clause oracle \sum_x |x>|C(x)> where C(x) is a vector of the clauses evaluated on the input x. Eq. 16 claims that one can ignore the first register and this becomes \sum_x |C(x)>. That's not correct: the system is entangled across those two registers, and the reduced state of either one of them is a mixed state. In particular there is no coherence between the |C(x)> states, and the next step relies crucially on that coherence.


My guess is that the problem lies in the M_x operator, which is similar to the Grover rotation operator but supposedly only acts on the control qubits and another control-control qubit (see my other comment here). Would love to get your opinion on that!


Yes, Eq. 16 certainly does not follow from 15. They should have a mixed state of the system. You should recognize this, since it's the graph isomorphism approach everyone tries.

infparadox: It's a subsystem, not a subspace. When you discard a subsystem which is entangled (as they specifically say it is) the result cannot be a pure state, and you need to express it in the density matrix formalism by taking the partial trace over the discarded system. What they wrote is not correct.


I am not an expert in the field, I just took a course, but I know that a mixed state is not the same as an entangled state. In their case it is entangled, not mixed, and so there is no need to take the partial trace. For example, consider two subsystems entangled with each other, |q1>|q2>. If we want to apply an operator U on the second subsystem we apply I(x)U, or we can simply consider the second subsystem and apply U|q2>. This is valid as long as the entanglement holds and identity gates are assumed to be applied on the first subsystem.


infparadox: This isn't about credentials, but da-bacon is the "Bacon" in Bacon-Shor codes.

I think you are misunderstanding the situation or the math. You say "For example, consider two subsystems entangled with each other, |q1>|q2>." But this system is -not- entangled, it is a separable state by definition, since you wrote it as a tensor product. For the state to be entangled, it needs to be a sum over separable states: sum_i a_i |x_i>|y_i>, with 0<=a_i<1. In this case, you cannot simply factor the state because of the summation. The second system is not in a pure state individually, nor is the first, it is only the joint state of the system which is pure. The reduced state for a single subsystem cannot be pure if it is entangled to other subsystems (due to the need for the summation in order for the state to be entangled: if you can remove the summation, as in the case where [x_0 = 0, x_1 = 1, y_0 = 0, y_1 = 0] then there is no entanglement).

Now, in the case of a separable state |q1>|q2>, you can indeed just consider the state of each subsystem individually as |q1> and |q2>, so you are correct in this regard. However, entangled states are not of this form, and cannot be treated in this way. It's an elementary mistake that many beginners make.


I think I caused a misunderstanding with the |q1>|q2> notation. Consider the entangled 2-qubit system a|00>+b|11>. The probability that the 2nd qubit is |0> is |a|^2 and the probability that it is |1> is |b|^2. Now apply I(x)NOT; the system is then a|01>+b|10>. The probability that the 2nd qubit is |0> is now |b|^2 and the probability that it is |1> is |a|^2. The same can be obtained by considering the 2nd qubit only as a|0>+b|1> and applying NOT only to the second qubit, without making any operation on the first qubit and without breaking the entanglement.


Yes, this is true for any unitary operation on a different subsystem, and is known as the no-signalling principle. But this doesn't let you remove the second system: you are still stuck with a mixed state on the first system, not a pure state (in your case the reduced state is |a|^2 |0><0| + |b|^2 |1><1|).
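
A quick numpy check of this point (my own sketch): for a|00> + b|11>, the reduced density matrix of the first qubit is |a|^2 |0><0| + |b|^2 |1><1|, and a NOT applied to the second qubit alone does not change it.

    import numpy as np

    a, b = 0.6, 0.8                                  # |a|^2 + |b|^2 = 1
    psi = np.zeros(4, dtype=complex)
    psi[0b00], psi[0b11] = a, b                      # a|00> + b|11>

    def reduced_first(psi):
        """Partial trace over the second qubit of a 2-qubit pure state."""
        m = psi.reshape(2, 2)                        # m[i, j] = <ij|psi>
        return m @ m.conj().T

    print(np.round(reduced_first(psi), 2))           # diag(0.36, 0.64): a mixed state

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    psi2 = np.kron(np.eye(2), X) @ psi               # NOT on the second qubit only
    print(np.round(reduced_first(psi2), 2))          # same reduced state (no-signalling)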


Well, I still don't see anything wrong with the math. Even if the authors rewrite the equations as sum(|A_k>|C_k>) instead of sum(|C_k>) and apply I(x)M_x instead of applying only M_x on |C_k> alone, this will not change the equations that follow.


I don't think there is a problem with that; they are just considering a subspace of the system.


1. The presented construction is pretty simple and falls in the category of computing the output for all possible inputs in parallel and then extract the correct or best input.

2. This is exactly the type of construction that is often(?) believed not to work, i.e. even on a quantum computer you have to exploit the structure of the problem and cannot just throw quantum parallelism at it.

3. They already published a paper in May [1] in which they applied the same techniques to graph partitioning problems.

4. Which option is true? The algorithm is wrong, the belief that quantum computers are not of much help for NP-hard problems was wrong, or there is a way to translate this algorithm to classical computers? The last one would be awesome, but the first two seem much more likely.

5. There is one thing in the paper that struck me as a bit odd, namely that they just add a couple of dummy qubits to boost the probability of one measurement. It is certainly possible that something like that works but at least it runs counter to my intuition. It reminds me of zero-padding FFT inputs to get a result with higher frequency resolution which of course is not real.

[1] http://arxiv.org/abs/1505.06284


Can you (or someone else) explain why that simple construction where all outputs are computed in parallel is not believed to work? Is it that quantum computers have bounded parallelism in some sense?


The "tricky bit" is not in branching out, it's in collapsing back down to determinism again.

When you do your measurement at the end of the process, if you have a mixed state you'll just pick one of the "parallel computations" -- you can't just "pick the best one". Put simply, it's like a nondeterministic Turing machine that branches randomly instead of taking "all paths at once."

Where quantum computers are meant to get their extra power is in what's called "destructive interference". The idea here is that there aren't really classical probabilities associated with the computation branches, there are complex "amplitude" numbers that square to give probabilities. When your computation is laid out in the right way, you want the amplitudes of the "bad" paths in the computation to cancel out, so when you take a measurement at the end all you're left choosing between are "good" paths.
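
A tiny numpy example of destructive interference (my own, not tied to the paper): applying a Hadamard twice returns |0>, because the two paths that lead to |1> carry amplitudes +1/2 and -1/2 and cancel exactly.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    zero = np.array([1.0, 0.0])

    after_one = H @ zero        # [0.707, 0.707]: both branches present
    after_two = H @ after_one   # [1, 0]: the two ways to reach |1> cancel out
    print(np.round(after_one, 3), np.round(after_two, 3))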


It is possible to compute all outputs in parallel with a quantum computer, but what you'll end up with is an unintelligibly entangled state. What you really want is a single output, and there's no generic way of disentangling the entangled state in a useful way.

The reason why numbers can be factored faster using quantum computers is precisely that factoring has an algebraic structure which can be used to achieve this disentangling.


One of the points of Scott Aaronson's excellent "Quantum Computing since Democritus" is that if you can quickly solve NP problems, you basically have super powers. Forget about SAT or Travelling Salesman, you can now do fast automated theorem provers and the like. Depending on how it were solved, you wouldn't read about P=NP on arXiv, you'd read about someone making billions on the stock market. Aaronson describes it as comparably implausible to faster-than-light communication.

If this paper is true, then it looks to me like quantum computers would be a means of doing that. I'm skeptical. Know those sci-fi stories where they hand-wave a quantum computer to have infinite computing power? Then they simulate the universe or hack all the computers simultaneously or whatever. That's all a laughable misinterpretation of what current state-of-the-art quantum algorithms can do (even if we remove the technical details of constructing the machine). Well, if this paper holds up, then that's a whole lot closer to the truth, maybe!

(Disclaimer: not an expert. I really hope to see Aaronson's analysis of this soon.)


You could very well have P = NP but not have superpowers yet. There are problems which are significantly harder than SAT et al. The canonical one for PSPACE (which is a superset of NP) is QBF (which is only SAT on steroids, i.e SAT + universal quantifier).

For instance, while SAT can be used to model one-player games (sudoku, 8 queens, etc.), you need QBF to model two-player games (even tic-tac-toe). And the stock market looks a lot like an n-player game. So, unless NP = PSPACE, even with P = NP you're stuck with limited superpowers.


Nitpick: you need a potentially unbounded number of alternations between existential and universal quantifiers to get PSPACE. Otherwise it is just somewhere in the polynomial hierarchy.

Otherwise you are right, "outsmarting" the rest of the world is probably in PSPACE.


It's worth pointing out that they already claim to have proven NP ⊆ BQP

> The common belief concerning the relationship between NP and BQP was that NP is not contained in BQP (e.g see chapter 15 of [8]. However, a recent paper by one of the current authors has shown that an NP-hard problem (Graph Bisection) can be efficiently solved, with low failure probability, by a quantum algorithm [13]. This implies that NP is in fact contained within BQP, as any NP problem can be polynomially reduced to an NP-hard problem


>any NP problem can be polynomially reduced to an NP-hard problem

Except that there is some probability that the algorithm will be unable to solve the problem?


Success/failure would be independent between runs. I.e., if you fail, you just run again.


At this point I feel bad for Scott Aaronson. Every time a crackpot paper comes out (and I'm 97% convinced at this point this is one of them) he takes the time to refute it on his blog.


If I'm not mistaken, showing P=NP doesn't necessarily guarantee that the solution is quick and likewise being able to quickly solve NP problems doesn't require showing that P=NP.


Depends on how you define "quick". But, sure, finding an algorithm that solves SAT in O(1.0000001^n) (still exponential) would be much better in practice than finding one that would solve it in O(n^100) (although polynomial).
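
As a quick numeric aside (my own arithmetic): solving 1.0000001^n = n^100 by fixed-point iteration puts the crossover near n ~ 2.4*10^10, i.e. the "exponential" algorithm actually wins for every input smaller than roughly 24 billion.

    import math

    # Fixed-point iteration for 1.0000001^n = n^100, i.e. n = 100*ln(n)/ln(1.0000001).
    n = 10.0
    for _ in range(100):
        n = 100 * math.log(n) / math.log(1.0000001)
    print(f"crossover near n ~ {n:.2e}")    # roughly 2.4e10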


> if you can quickly solve NP problems, you basically have super powers.

This classic argument is flawed, because P=NP doesn't mean that you can "easily" solve all NP problems. Nature is still free to assign huge exponents to difficult problems, making them essentially unsolvable in practice. I even heard Knuth acknowledge this recently.


Yes, of course, in the same sense that we might have faster than light communication (but just barely) and not be able to exploit it for meaningful communication backwards in time to make a killing on the stock market. But we can still think "wow, that sounds kind of absurd" as a metric by which to judge the likelihood of a result. Either way, this is Scott Aaronson's argument, not mine. For example, he makes it here: http://www.scottaaronson.com/writings/limitsqc-draft.pdf


People like Scott Aaronson who espouse this view use it as a fundamental reason why NP=P should be false. The problem is exactly that it is a flawed argument. It could be an argument for the impossibility of finding "efficient" algorithms, but this is not what P=NP is about. For all we know, some problems in P could be so difficult as to preclude any practical algorithm.


> For all we know, some problems in P could be so difficult as to preclude any practical algorithm.

You can actually prove that, given any (concrete) time bound within P (say DTIME(n^k) for a fixed k), there is a language in P that is strictly harder, by a simple application of the time hierarchy theorem.


That's true: it's essentially conflating "efficient" meaning in polynomial time with "efficient" in the colloquial sense. It's still enough to make me skeptical!


I am in no way saying that this indicates that P=NP, so I am also skeptical. I am just pointing out that you should be skeptical for the right reasons, not because of a misused argument.


This is all true, except nobody has a large quantum computer yet. You could discover an efficient SAT algorithm and not be able to apply it for years or decades. (I don't expect this either.)


Can anyone give me a simplified explanation of this paper and the consequences of it?


So, to start with, it's worth reminding that this is a preprint. It hasn't necessarily been reviewed or anything, so shouldn't be considered a final result.

The basic idea is that they've found an efficient quantum algorithm for solving an NP-hard problem (MAX-E3-SAT). This is a version of the boolean satisfiability problem: given a boolean expression (in this case, in a certain normal form) with N variables, is there some way to set the variables such that the expression is true?

As those familiar with complexity theory will know, if it's possible to solve one NP-hard problem efficiently, it's also possible to solve every NP problem efficiently via an efficient (polynomial-time) reduction. Therefore, we can now use a quantum computer to solve arbitrary NP problems (well, we could if we had a large enough one).

"NP" is computer science shorthand for problems whose answers can be checked efficiently, but cannot be discovered efficiently.

Prior to this, only a few NP problems were efficiently solvable by quantum computers (factoring and things that reduce to factoring, due to Shor's algorithm). With this discovery, any NP problem shares this characteristic. Technically NP only applies to decision problems, but it's generally not that difficult to extend it to computations that produce additional information.

One consequence of this is that it will completely ruin public-key cryptography in the face of a quantum computer: public-key cryptography relies on hard instances of NP problems, and no such instances are hard on a quantum computer anymore. This is relatively minor, as we currently use factorization (RSA) or elliptic curves for public-key encryption, and those were already breakable with Shor's algorithm.

Quantum computers should also be able to reverse hashes and find collisions easily: the NP algorithm for this is trivial. You may not be able to reverse them to the "correct" input, though. If someone were silly enough to use a perfectly good quantum computer to mine Bitcoin, it would be insanely efficient. This is relatively minor as well -- more important than the public key cryptography, but it's not going to cause all computer security to collapse.

Many useful real-world problems are in the NP space: traveling salesman, protein folding, things of that nature. A quantum computer could be used to solve these problems. This would be a really big deal, and overall a great boon to humanity.


Factoring is not NP-hard (fixed!), it's simpler than that (but not, to date, known to be in P).

It is in NP though (edited as per child comment).


It is NP (well, it would be if it were a decision problem). It is not (known to be) NP-complete (or, equivalently since it is NP, NP-hard).


Ah true, my mistake, it is not NP-hard but it is in NP

Wikipedia says it's UP https://en.wikipedia.org/wiki/UP_%28complexity%29 (which is contained in NP)


See, the weird way I always understood NP hardness was by virtue of the impossibility of it. Suppose all NP hard problems are solvable. Assume this knowledge is distributed, a culture surrounds it, and eventually that knowledge is structured into the culture intrinsically (like intuitive knowledge, like knowing how to move your arm; the knowledge becomes, in effect, deterministic).

Now, if that occurs, then new layers of conceptual reasoning open up (however they may manifest: the problems of human creativity, of opening domains of knowledge, of forming new ideas or building things, anything humans will be able to try to do once all NP hard problems can be both efficiently discovered and checked).

Assuming we can do new stuff now that we couldn't before, this opens the table to new variation, new problems, and new plausible solutions (but on a meta layer, where the problems of today are intrinsically quickly solvable).

To me, that is what NP hard has always been. Even when you think you have solved it, you haven't, precisely because of the ways we construct and define problems. We either select that which can be known or that which cannot, and we translate this into a language. That which cannot be known, that which has significant variance across the human domain, such as mind reading, cannot be predicted, even in a fairly formal logical or mathematical language, and this becomes more difficult the more compressed the language is with cultural information.

So even though we might say we've solved NP hard problems, that allows us to define new problems which are going to be more difficult to solve (unless the universe just stops forever or approaches an information uniformity). All that new variation is the new NP hard.

I know it's not a super mathy solution, but there is a problem with being able to look at a letter and knowing it means 'A' which may have very little meaning in one context, but in another it might represent a culturally standardized 'memory'. It retains that which is known. The more information you have to compress into symbols, the complexity of your systems explodes as those symbols can construct more and more permutations and meanings (like what a graph represents). You can't know the answer unless you know the answer, literally.

I still study all the math and formal logic and complexity theory in my free time though (because I do get the dimensional perspective: finding the shortest path is a mathematical problem at heart), but the above is my intuitive understanding of NP hard: that which is infinite and capable of redefining itself in ways the human mind, independent of the collective, is incapable of imagining.


> Assuming we can do new stuff now that we couldn't before, this opens the table to new variation, new problems, and new plausible solutions.

Right, but that's circular reasoning. Closed systems exist, there are things that can be understood so fully that you can't think of any new questions to ask about them that you don't already understand. What if we really did solve everything?

> So even though we might say we've solved NP hard problems, that allows us to define new problems of which are going to be more difficult to solve (unless the universe just stops forever or approaches an information uniformity). All that new variation is the new NP hard.

NP is a specific, well-defined category of things. It isn't remotely the class of the hardest problems we can think of. If NP=P that doesn't mean we can solve every problem easily. But it does mean we can solve a lot of important problems easily.


> Closed systems exist, there are things that can be understood so fully that you can't think of any new questions to ask about them that you don't already understand.

The existence of closed systems does not imply all systems are closed. Also, you can think you understand things so well that you have to revisit things you think you know in order to find new questions.

> It isn't remotely the class of the hardest problems we can think of. If NP=P that doesn't mean we can solve every problem easily. But it does mean we can solve a lot of important problems easily.

That's fine to you, but there are obviously some communication and definition-of-terms problems going on, and I don't think that's entirely surprising considering how lost everyone tends to get in the language of it.

P=NP means you can get computers to write their own math proofs. Tell that to a mathematician who would rather compare his process and functionality to that of Picasso or Rembrandt painting, or Mozart composing, than to that of the self-serve pay station at your local grocer's. Maybe we can start birthing mechanical babies, I don't really know.


> The existence of closed systems does not imply all systems are closed

I never said they were. Just that we don't know one way or the other.

> P=NP means you can get computers to write their own math proofs.

Yes and no. It means the computer can fill in the technical details of a proposition you've already formalized. But mathematicians only ever sketched those parts in papers anyway.

> Maybe we can start birthing mechanical babies, I don't really know.

You're not making any sense. Try starting with more concrete things before waxing philosophical.


I've read from mathematicians who consider their computer to be a tool, and from others who see the computer as a partner. As a hobbyist who studies automated theorem provers (like coq), there is so much elegance that goes into the computational proofs in order to even construct such a program that allows a mathematician to do their work.

Can you honestly say that it doesn't take an extreme amount of effort to construct the perfect typing system that allows you to even begin to be able to write a proof with a machine? People have to prove that theorem provers are correct with respect to what they define. This is no trivial task, it's an entirely different domain of knowledge.


Can you try making a concrete example for your reasoning?


I don't know if I can. What do you mean by concrete?

For me, as soon as I use language, it is symbolic. So even if I were to describe the shape of a rock to you, it is still not a concrete example, unless I give you the rock.

Maybe that sounds absurd, but I am being serious. Can you clarify?


Polynomial complexity doesn't necessarily mean efficient if the polynomial is of a high order.


That is true and worth remembering if you're a theoretical computer scientist, but we're trying to simplify here.


In essence, if this algorithm is correct, then for quantum computers it holds that P = NP, which means that for every problem whose answer can be checked in polynomial time, you can also compute the answer in polynomial time.


Polynomial time on a quantum computer (with bounded error) is called BQP. (The same way that polynomial time on a non-deterministic computer is called NP.)


Isn't it actually only showing that BQP contains NP, not P = NP?


The short version is, if this is correct (which is exceedingly unlikely) then either quantum mechanics is wrong or the question of whether P=NP has just become irrelevant. This is because, as Scott Aaronson is often forced to tirelessly point out, an inability to build a quantum computer must also show quantum mechanics to somehow be in error. If instead we can build a quantum computer and this result is true, then the universe has suddenly become an incredibly more interesting place (for one example, building this would also solve AI).

The result also has consequences for quantum computation specifically, in that it also takes care of the negative sign problem, which arises when simulating certain kinds of important quantum many body systems.

And last but certainly not least, all those press releases breathlessly proclaiming Quantum computers work by trying a gazillion possibilities at once (with no worries about the plausibility of actually reading out the correct answer with a non-negligible probability) weren't so far off after all.

All in all, either there is a mistake hidden somewhere in the preprint or this is the greatest scientific achievement since Evolution invented Human Level intelligence.

See here for intuition on why NP-complete problems, if at all solvable without brute force, should also be efficiently solvable: http://windowsontheory.org/2013/05/06/reasons-to-care-in-hon...


This breaks a lot of cryptography. More than Shor's.


Just to add to the GP, this would break any cryptography algorithm that one can run on a classical computer.

Currently, quantum computers are believed to break only our most used algorithms, not all of them.


Somewhat relevant (If I just proved that P = NP, how do I start taking over the world?):

http://www.quora.com/If-I-just-proved-that-P-NP-how-do-I-sta...


Wow. Can't wait to see if this holds up.

For context: "On the other hand, Bennett et al. [11] gave oracle evidence that NP is not in BQP, and while no one regards such evidence as decisive, today it seems extremely unlikely that quantum computers can solve NP-complete problems in polynomial time."

http://www.scottaaronson.com/papers/bqpph.pdf


Can somebody comment on whether this actually means that one can essentially solve SAT with a quantum computer? In particular:

    The algorithm prepares a superposition of all possible
    variable assignments, then the algorithm evaluates the
    set of clauses using all the possible variable
    assignments simultaneously and then amplifies the
    amplitudes of the state(s) that achieve(s) the maximum
    satisfaction to the set of clauses using a novel
    amplitude amplification technique that applies an
    iterative partial negation and partial measurement.
This sounds a bit like hiding some runtime in 'partial measurement.' (But I have only read the introduction so far, and I don't understand quantum computers.)


To understand this paper you should read about this problem which they cite in the paper. https://en.wikipedia.org/wiki/MAXEkSAT

They are attempting to solve the problem for k=3 and then probably figure out how to run the approximation algorithm for this problem on a quantum computer.


I am not sure whether this works, but the paper's statement does actually mean that one can solve SAT in polynomial time with high probability with a quantum computer.


An "arbitrary [sic] high probability" even. If I got it right.


Yup, arbitrarily high probability.


The paper describes a "proposed" algorithm with a lot of theoretical justification [1]. Is there any way to verify it on a real quantum computer or reliable simulation? I'm thinking of the way that Shor's algorithm was verified [2] using a real quantum device.

[1] I'm not qualified to understand the detail of the paper, although I can appreciate its general outline and the broad consequences of it being correct.

[2] http://arxiv.org/abs/1202.5707


If this holds, this is the discovery of the century.


Or it will be, if someone manages to create a sufficiently powerful quantum computer. If nobody does, it will stay an academic triviality.

If someone builds a useful NP problem solving machine, that will change the world.


No the discovery of the century would be a scalable quantum computer.


That would be the technology of the century, but the science is known. (Unless we need new science to overcome some technical hurdles.)


Don't be silly. A hundred years takes you back to 1915. Plenty of scientific discoveries made since then are more important than this will ever be. The general theory of relativity, the transistor, DNA and many others are obviously more important.

Heck, most of the early groundwork in the computational field is probably more important than an algorithm that is useful only on quantum computers.


As another commenter pointed out, being able to crack NP problems quickly will give you theorem provers.

At that point quantum computing research will receive gobs of funding -- and our knowledge will absolutely explode.

You can debate whether it's the absolute top discovery, but this would be a lever to shift around the entire math and computer science field. If it wasn't a shaky paper, of course.


You're right. Century is silly. A tractable algorithm for P = NP would be the single greatest discovery in science. The implications would dwarf gravity and DNA.

(Side note: until Scott Aaronson says otherwise, this paper is wrong.)


"Discovery of the century" is a shorthand for really important. Don't take it literally. Proving P=NP would be extremely significant in mathematics, though.


Nobody doubts that proving P=NP would be the discovery of the century, but that's not what this paper is trying to prove.


You're correct. They're claiming an algorithm to solve an NP hard problem in polynomial time.

I was thinking of "if this paper is a contribution to eventually proving/disproving P=NP...", but failed to express that.


Note: This century started in 2000, not one hundred years ago.


Here is the author's previous publication track record:

https://scholar.google.com/citations?hl=en&user=CZz2XFIAAAAJ...

His co-author is more influential as measured by number of citations:

https://scholar.google.com/citations?user=co2yqKoAAAAJ&hl=en


I know Jon Rowe a little bit. He's a very well-respected and reputable researcher.


As much as I'd like to be on the cutting edge of science/math, I (and I suspect most of us here) don't have the tools to understand, let alone evaluate, this preprint. So before we make a tweet storm and start blowing our horns, let's try to hold our horses and wait for peer review. That could take well over a year.

Well, in the meanwhile, I have a question: what's a good intro resource to quantum computing? For general complexity intro, I like Michael Sipser's Theory Of Computation


Here's Scott Aaronson's answer to a similar question on his blog a couple of weeks ago:

Yes, Nielsen and Chuang is the standard textbook for quantum computing; it’s excellent (even though 16 years old by now). If you wanted to start with something shorter and more introductory, you could try David Mermin’s “Introduction to Quantum Computer Science.”


Lipton wrote a pretty decent and short intro to QC that builds the theory a little differently from usual. It is aimed more towards Mathematicians and Computer Scientists than Physicists, I believe, but I think it is pretty understandable in general.

I think Sipser is probably the best introduction to Computation if you do not have a background in Computability and Complexity. Arora and Barak's text has a much larger breadth and is much more detailed as far as Complexity goes. It is the more appropriate of the two if you are just interested in Computational Complexity.


Great, thanks! I'll check those out.


I don't think it will take long to get a verdict, the construction is really pretty simple. Maybe if there is a really subtle error this could cause some debate but as far as I can tell there is nothing exotic in there. The only thing that struck me as a bit odd is that they just add a couple of dummy qubits to increase the probability of one measurement.


The algorithm is:

    1. Hadamard transform into a uniform superposition of all variable assignments (very standard step)
    2. Add an OFF ancilla bit and rotate it by 1/(2m)'th of a turn for each satisfied clause, where m is the number of clauses, using triply-controlled m'th-root-of-NOT gates.
    3. Measure that ancilla qubit
    4. If the ancilla stayed OFF, restart the whole algorithm.
    5. Repeat steps 2 through 4 a total of R times, without triggering the restart (the appropriate value for R is figured out later)
    6. Measure all the variable qubits.
    7. Return SATISFIABLE iff all clauses are satisfied by the measured variable values
(Note that I simplified the algorithm. The paper uses ancilla bits to hold the clauses in qubits, but they are not needed because they're only measured at the end and otherwise only used as controls.)

For starters, I'll just say that up front there's no way this works. It's too simple. Other researchers would have tried it. Repeatedly rotating proportional to the amount satisfied and measuring? Too obvious. I would not be surprised at all if a majority of researchers have come up with this idea, tried it, and understood why it didn't work. Grover's algorithm is more complicated than this.

Anyways, why does it actually not work? It becomes clear if we simplify the algorithm a bit more by deferring the conditional restarting measurements.

Instead of conditionally continuing the 2-4 loop R times, simply run Step 2 a total of R times up front. Then measure the R ancilla bits you introduced, and restart if any of them was false. This is equivalent to conditionally restarting because a) there's no information dependency between the loops beyond "don't bother doing it" and b) measurements can always be deferred until later [1].

Now make the simplifying assumption that all variable assignments, except for a single satisfying one, satisfy 7/8'ths of the clauses (based on [2]). Note that applying 7/8'ths of a NOT will transition an OFF bit into a state where it has a roughly 96% chance of being ON.

Since most possible assignments are not satisfying, and non-satisfying assignments have a 4% chance of causing a restart (the satisfying assignment has a 0% chance, but it gets diluted by the exponentially many non-satisfiers), the ancilla bits' measured value will be dominated by the non-satisfying assignments. Each ancilla bit will have a roughly 4% chance of being OFF.

That means the chance of not having to restart is roughly 0.96^R. But the paper says that R increases faster than linear with respect to the number of qubits. So the odds of not-restarting are upper bounded by 0.96^N. That places an exponential multiplier on the running time, because it's so unlikely to succeed as N gets large.

Therefore this is not a polynomial time algorithm.
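
A quick arithmetic check of that bound (my own sketch, using the ~96% figure implied by rotating by 7/8 of a NOT):

    import numpy as np

    p_on = np.sin(7 * np.pi / 16) ** 2      # ~0.962: ancilla ends up ON for a 7/8-satisfier
    for R in (10, 100, 1000, 10000):
        print(R, p_on ** R)                 # survival probability decays exponentially in R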

1: https://en.wikipedia.org/wiki/Deferred_Measurement_Principle

2: https://en.wikipedia.org/wiki/Karloff%E2%80%93Zwick_algorith...


This looks right by me. Another way of phrasing the point is that the loop at steps 2--4 is doing "postselection"; this is clear also if you look at Fig. 3 on the top of page 7 where the (+) gate is present only when the measurement result is 1. Scott Aaronson showed that "Post-BQP" (BQP with postselection allowed) equals the complexity class PP, which contains NP and is equivalent to the counting class #P, so there is no surprise here.

I am the co-author of the textbook mentioned as "Lipton" in the comment by user dmmackay above.


I think your analysis just ignored the dummy qubits they added to the algorithm to fix the problem you mentioned. So the chance of being ON can be more than 96%. They said it can be high, e.g. 99.999999...%. What is the effect of that on the upper bound you wrote?


In the case of an instance with a satisfiable solution, it just increases the chances of a false negative.

When a solution exists, you want the check to be extremely strict; 0% let through instead of 99.999999%^(n^6).

When there's no solution, you want relaxed checks that allow non-satisfying solutions to get through after not-too-many retries (so the algorithm can terminate).


I think you missed an important point in your analysis: that the probability of ax increases every iteration. Assume as an example p_1(ax)=0.97, p_2(ax)=0.98 and so on till p_r(ax) \approx 1, so the expected number of iterations E until the algorithm succeeds is E = 1/p_1 + 1/(p_1 p_2) + ... + 1/(p_1 p_2 ... p_r). I think the main trick here will be the rate of increase of p(ax). If p_1(ax) starts small and the rate of increase is really slow, then E will be exponential, but this will not be the case if p(ax) starts very high, e.g. 0.99, and the rate of increase of p(ax) is "OK"; then E will be polynomial. I think someone needs to check the rate of increase of p(ax). This will be the killing point.


I still want to find time to explore my own KSAT project:

True = UP, False = DOWN.

Every variable is a spin (node).

Every operation describes a neighborhood (vertex).

Map the physical effects of spin magnetism (ferromagnetism or antiferromagnetism) to the 2 operations.

Refactor the Boolean equation to use only those 2 (Boolean algebra rocks).

Build a kind of Conway's-Game-of-Life cellular automaton.

Construct the utility function per variable based on the neighborhood.

Evolve it (Monte Carlo) in a globally, slowly moving magnetic field.

Look at the number of flips per iteration and analyse the results.

Explore the oscillation half-life. Explore the temperature required to avoid too much variation or too much convergence in a frustrated loop. Explore the utility per operation. MAKE NICE GRAPHS. Prove it gives results. (A rough sketch of such a loop is below.)
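
For what it's worth, here is one way such a loop could look (entirely my own sketch: a random 3-SAT instance, spins in {+1,-1}, energy = number of unsatisfied clauses, Metropolis updates at a fixed temperature; the slowly moving field / annealing schedule is left out):

    import numpy as np

    rng = np.random.default_rng(1)
    n_vars, n_clauses, T = 50, 210, 0.5

    # Random E3-CNF instance: each clause picks 3 distinct variables and random signs.
    clause_vars = np.array([rng.choice(n_vars, 3, replace=False) for _ in range(n_clauses)])
    clause_signs = rng.choice([-1, 1], size=(n_clauses, 3))     # +1 = positive literal

    def unsatisfied(spins):
        # A literal is satisfied when its sign matches the spin; a clause needs one such literal.
        lits = clause_signs * spins[clause_vars]
        return int(np.sum(~np.any(lits > 0, axis=1)))

    spins = rng.choice([-1, 1], size=n_vars)
    energy = unsatisfied(spins)

    for sweep in range(201):
        flips = 0
        for i in rng.permutation(n_vars):
            spins[i] *= -1                              # propose flipping spin i
            new_energy = unsatisfied(spins)
            if new_energy <= energy or rng.random() < np.exp((energy - new_energy) / T):
                energy, flips = new_energy, flips + 1   # accept
            else:
                spins[i] *= -1                          # reject: undo the flip
        if sweep % 50 == 0:
            print(f"sweep {sweep}: {energy} unsatisfied clauses, {flips} flips")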



