The algorithm is:

    1. Hadamard transform into a uniform superposition of all variable assignments (very standard step)
    2. Add an OFF ancilla bit and rotate it by 1/(2m)'th of a turn for each satisfied clause, where m is the number of clauses, using triply-controlled m'th-root-of-NOT gates.
    3. Measure that ancilla qubit
    4. If the ancilla stayed OFF, restart the whole algorithm.
    5. Repeat steps 2 through 4 a total of R times, without triggering the restart (the appropriate value for R is figured out later)
    6. Measure all the variable qubits.
    7. Return SATISFIABLE iff all clauses are satisfied by the measured variable values
(Note that I simplified the algorithm. The paper uses ancilla bits to hold the clauses in qubits, but they are not needed because they're only measured at the end and otherwise only used as controls.)
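
To make this concrete, here's a small classical simulation of that simplified loop (the 4-variable instance and round counts below are made up for illustration; the instance is built so its only satisfying assignment is 1111). Per the deferral argument below, each round it just keeps the branch where the ancilla was measured ON:

    import itertools, math

    def satisfied_count(assignment, clauses):
        # assignment: tuple of 0/1 per variable; a clause like (1, -2, 3)
        # means (x1 or not x2 or x3)
        return sum(1 for clause in clauses
                   if any(assignment[abs(l) - 1] == (1 if l > 0 else 0) for l in clause))

    def run(clauses, n, rounds):
        m = len(clauses)
        assignments = list(itertools.product([0, 1], repeat=n))
        # Step 1: uniform superposition over all assignments.
        amp = {a: 1 / math.sqrt(len(assignments)) for a in assignments}
        p_survive = 1.0
        for _ in range(rounds):
            # Steps 2-4: rotating the ancilla by s/(2m) of a turn puts it ON with
            # amplitude sin(pi*s/(2m)); keep only the branch where it was measured ON.
            amp = {a: amp[a] * math.sin(math.pi * satisfied_count(a, clauses) / (2 * m))
                   for a in assignments}
            p_on = sum(v * v for v in amp.values())   # chance this round avoids a restart
            p_survive *= p_on
            amp = {a: v / math.sqrt(p_on) for a, v in amp.items()}
        # Steps 6-7: chance the final measurement lands on a satisfying assignment.
        p_good = sum(amp[a] ** 2 for a in assignments
                     if satisfied_count(a, clauses) == m)
        return p_survive, p_good

    # Made-up instance on 4 variables whose only satisfying assignment is (1,1,1,1).
    clauses = [(1, 2, 3), (1, 2, -3), (1, -2, 3), (1, -2, -3),
               (-1, 2, 3), (-1, 2, -3), (-1, -2, 3), (-1, -2, 4)]
    for rounds in (1, 10, 50, 300):
        print(rounds, run(clauses, 4, rounds))
        # p_good climbs toward 1, but p_survive falls toward 1/2**4

The two printed numbers are the chance of getting through all the rounds without a restart and, given that, the chance the final measurement hits the satisfying assignment: the second climbs toward 1 exactly as the first collapses toward 1/2^n, the weight of the lone satisfying branch in the initial uniform superposition.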

For starters, I'll just say that up front there's no way this works. It's too simple. Other researchers would have tried it. Repeatedly rotating proportional to the amount satisfied and measuring? Too obvious. I would not be surprised at all if a majority of researchers have come up with this idea, tried it, and understood why it didn't work. Grover's algorithm is more complicated than this.

Anyways, why does it actually not work? It becomes clear if we simplify the algorithm a bit more by deferring the conditional restarting measurements.

Instead of conditionally continuing the 2-4 loop R times, simply run Step 2 a total of R times up front. Then measure the R ancilla bits you introduced, and restart if any of them was false. This is equivalent to conditionally restarting because a) there's no information dependency between the loops beyond "don't bother doing it" and b) measurements can always be deferred until later [1].

Now make the simplifying assumption that all variable assignments, except for a single satisfying one, satisfy 7/8ths of the clauses (based on [2]). Note that applying 7/8ths of a NOT will transition an OFF bit into a state where it has a roughly 96% chance of being ON.
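
(Quick sanity check of that 96% figure: 7/8 of a NOT is a rotation by 7/16 of a turn, so the ON probability is sin^2(7*pi/16).)

    import math
    print(math.sin(7 * math.pi / 16) ** 2)   # ~0.9619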

Since most possible assignments are not satisfying, and non-satisfying assignments have a 4% chance of causing a restart (the satisfying assignment has a 0% chance, but it gets diluted by the exponentially many non-satisfiers), the ancilla bits' measured value will be dominated by the non-satisfying assignments. Each ancilla bit will have a roughly 4% chance of being OFF.

That means the chance of not having to restart is roughly 0.96^R. But the paper says that R increases faster than linearly with the number of qubits N, so the odds of not restarting are upper bounded by 0.96^N. That places an exponential multiplier on the running time, because a run becomes vanishingly unlikely to get through without restarting as N gets large.
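
To get a feel for the scale, here's that bound evaluated for a few qubit counts, taking the ~4% per-round restart chance at face value and letting R grow only linearly in N (the paper's R grows faster, which only makes it worse):

    for n in (10, 50, 100, 500):
        p = 0.96 ** n                 # upper bound on surviving a run without restarting
        print(n, p, round(1 / p))     # 1/p ~ expected number of full runs needed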

Therefore this is not a polynomial time algorithm.

1: https://en.wikipedia.org/wiki/Deferred_Measurement_Principle

2: https://en.wikipedia.org/wiki/Karloff%E2%80%93Zwick_algorithm




This looks right to me. Another way of phrasing the point is that the loop at steps 2-4 is doing "postselection"; this is also clear if you look at Fig. 3 at the top of page 7, where the (+) gate is present only when the measurement result is 1. Scott Aaronson showed that "PostBQP" (BQP with postselection allowed) equals the complexity class PP, which contains NP and is as powerful as the counting class #P (P^PP = P^#P), so there is no surprise here.

I am the co-author of the textbook mentioned as "Lipton" in the comment by user dmmackay above.


I think your analysis just ignored the dummy qubits they added to the algorithm to fix the problem you mentioned. So the chance of being ON can be more than 96%. They said it can be high, e.g. 99.999999...%. What is the effect of that on the upper bound you wrote?


In the case of a satisfiable instance, it just increases the chance of a false negative.

When a solution exists, you want the check to be extremely strict: non-satisfying assignments let through with weight 0%, not 99.999999%^(n^6).

When there's no solution, you want relaxed checks that allow non-satisfying assignments to get through after not too many retries (so the algorithm can terminate).
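
To put rough numbers on that trade-off, here's a toy model (all figures made up): one satisfying assignment that always passes the ancilla check, and 2^n - 1 non-satisfiers that each pass a single round with probability p.

    def toy(n, p, rounds):
        sat_weight = 1.0                                     # satisfier survives every round
        non_weight = (2 ** n - 1) * p ** rounds              # surviving non-satisfier weight
        p_survive = (sat_weight + non_weight) / 2 ** n       # chance of no restart at all
        p_correct = sat_weight / (sat_weight + non_weight)   # correct answer, given no restart
        return p_survive, p_correct

    for p, rounds in ((0.96, 2000), (0.99999999, 2000), (0.99999999, 10 ** 10)):
        print(p, rounds, toy(50, p, rounds))

With the strict check you almost never survive (p_survive ~ 2^-50); with the relaxed check you survive but almost always get a false negative; and cranking the round count up until the relaxed check amplifies again just puts you back at p_survive ~ 2^-50.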


I think you missed an important point in your analysis: the probability of a_x increases every iteration. Assume as an example p_1(a_x) = 0.97, p_2(a_x) = 0.98, and so on until p_r(a_x) ≈ 1, so the expected number of iterations E until the algorithm completes all r rounds without a restart is E = 1/p_1 + 1/(p_1 p_2) + ... + 1/(p_1 p_2 ... p_r). I think the main trick here will be the rate of increase of p(a_x). If p_1(a_x) starts small and increases really slowly, then E will be exponential; but if p(a_x) starts very high, e.g. 0.99, and the rate of increase is "OK", then E will be polynomial. I think someone needs to check the rate of increase of p(a_x). That will be the killing point.
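
FWIW, here's that E evaluated for two made-up schedules of p_i (neither is taken from the paper), which shows the dichotomy you describe:

    def expected_iterations(ps):
        # E = 1/p_1 + 1/(p_1*p_2) + ... + 1/(p_1*...*p_r)
        total, prod = 0.0, 1.0
        for p in ps:
            prod *= p
            total += 1.0 / prod
        return total

    r = 2000
    flat   = [0.96] * r                               # p stuck at 96%
    rising = [1 - 0.04 / (i + 1) for i in range(r)]   # 0.96, 0.98, 0.9867, ... -> 1
    print(expected_iterations(flat))     # astronomically large (exponential in r)
    print(expected_iterations(rising))   # a small multiple of r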



