
A Polynomial Time Bounded-Error Quantum Algorithm for Boolean Satisfiability - user_235711
http://arxiv.org/abs/1507.05061
======
algorias
[Note: I am a master's student doing my thesis in this area]

Ok, I'm going to go out on a limb here and call bullshit, based on the meta-
reason that they cite a paper of theirs which already claims to show that NP
is contained in BQP using a different NP-complete problem, published in May
2015 (which is basically the same groundbreaking result as this paper), but
which somehow completely failed to make waves in the community.

Additionally, on a superficial reading of the paper nothing seems wrong with
it, except that it has this air of everything being too easy, too novel, not
building on any widely recognized partial results. Considering this problem
has been attacked by hundreds (?) of researchers over the years, it seems
highly implausible that they just happened to find this one weird trick nobody
had thought of before.

Anyway, those are just heuristics, and I'd love to be wrong on this occasion,
but life is too short to give my full attention to a dubious paper for more
than 15 minutes. I could attempt to whip up this algorithm in a quantum
simulator, but that would take a considerable amount of time to do right, so
I'll let somebody else do the work of debunking/confirming the result.

~~~
yread
Why comment when you don't have the time for more than superficial analysis?

~~~
Steuard
Because experts with domain-specific knowledge tend to have better heuristics
for this sort of thing than the average interested outsider. (They have to, or
else they'd get nothing done except debunking flawed claim after flawed claim.
Which might be worthwhile, but doesn't move the field forward.)

I've seen plenty of "theories of particle physics" that evidently look fairly
plausible to non-physicists (and maybe even to non-specialists) that I can
recognize immediately as crackpottery by my own heuristics (but that would
take at least a weekend's work to convincingly demonstrate as such, if I
wanted to invest that kind of time: I've done that, too). I'm not saying that
anyone should take anyone's heuristic guesses in such cases as gospel (once or
twice a century we might actually get a delightful surprise), but they can
serve as a useful restraining influence if you're tempted to get excited about
a headline.

~~~
anfedorov
An expert would be able to skim it, recognize the path of reasoning as
something he's seen before, and narrow down on the error quickly. Reasoning
about something by how many "waves in the community" it has made is a
deference of personal judgement to the social network an individual is
embedded in, which is a useful shortcut when you are _not_ an expert relative
to your peers, but a dangerous mindset to operate in for any length of time,
as it can be the basis of cults and the like.

~~~
evanpw
> An expert would be able to skim it, recognize the path of reasoning as
> something he's seen before, and narrow down on the error quickly.

This is almost exactly equivalent to saying that an expert programmer should
be able to determine whether an unfamiliar, complicated code base contains a
bug by skimming the source. The hard bugs are going to be subtle.

~~~
anfedorov
Not at all equivalent, because code isn't meant to be human readable, but
computer interpretable. Because of these different goals, computer code is
actually quite a bit harder to read than a well structured academic paper. I
think programmers have a lot to learn about logical structure from writing
meant to be read by humans [1].

Note how many people on this thread seem to be understanding and discussing
its contents, whereas if I were to post 13 pages of "unfamiliar, complicated
code" that claimed to do something, I can't imagine anyone would help me debug
it.

1\. Knuth expounds on this idea here:
[https://en.wikipedia.org/wiki/Literate_programming](https://en.wikipedia.org/wiki/Literate_programming)

------
da-bacon
Eq. 16.

As a grad student in quantum computing we used to have a game where we would
race to see who could be the first one to spot the flaw in a paper that
claimed an efficient quantum polynomial time algorithm for NP-complete
problems.

My bet after a few minutes looking at the paper is Eq. 16. Eq. 15 is computing
the clauses with a traditional clause oracle \sum_x |x>|C(x)>, where C(x) is a
vector of the clauses evaluated on the input x. Eq. 16 claims that one can
ignore the first register and this becomes \sum_x |C(x)>. That's not correct:
the system is entangled across those two registers, and the reduced state of
either register (the partial trace) is a mixed state. In particular there is
no coherence between the |C(x)> states, and the next step relies crucially on
that coherence.
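
If you want to see this concretely, here is a quick toy check (my own NumPy
sketch, not anything from the paper): build \sum_x |x>|C(x)> for a made-up
C(x), trace out the x register, and note that the reduced density matrix of
the clause register is diagonal, i.e. no coherence is left between the |C(x)>
states.

    # Toy check (not from the paper): tracing out the x register of sum_x |x>|C(x)>
    # leaves a diagonal (mixed) state on the clause register -- no coherence to exploit.
    import numpy as np
    from itertools import product

    n = 3                                    # number of input bits x
    def C(x):                                # stand-in clause vector, just for illustration
        return (x[0] ^ x[1], x[1] & x[2])    # two fake "clause" bits

    # joint state as a (2^n x 4) amplitude matrix: psi[i, j] = amplitude of |x_i>|j>
    psi = np.zeros((2**n, 4))
    for i, x in enumerate(product([0, 1], repeat=n)):
        c0, c1 = C(x)
        psi[i, 2 * c0 + c1] = 1 / np.sqrt(2**n)

    rho = psi.T @ psi.conj()                 # partial trace over the x register
    print(np.round(rho, 3))                  # diagonal: populations only, no off-diagonal terms
    print(np.trace(rho @ rho).real)          # purity Tr(rho^2) < 1, i.e. a mixed state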

~~~
irljf
Yes, Eq. 16 certainly does not follow from Eq. 15. At that point they should
have a mixed state of the system. You should recognize this, since it's the
graph isomorphism approach everyone tries.

infparadox: It's a subsystem, not a subspace. When you discard a subsystem
which is entangled with the rest (as they specifically say it is), the result
cannot be a pure state, and you need to express it in the density matrix
formalism by taking the partial trace over the discarded system. What they
wrote is not correct.

~~~
infparadox
I am not an expert in the field, I just took a course, but I know that a
mixed state is not the same as an entangled state. In their case it is
entangled, not mixed, and so there is no need to take the partial trace. For
example, consider two subsystems entangled with each other, |q1>|q2>. If we
want to apply an operator U on the second subsystem we apply I(x)U, or we can
simply consider the second subsystem and apply U|q2>. This is valid as long as
the entanglement holds and identity gates are assumed to be applied on the
first subsystem.

~~~
irljf
infparadox: This isn't about credentials, but da-bacon is the "Bacon" in
Bacon-Shor codes.

I think you are misunderstanding the situation or the math. You say "For
example, consider two subsystems entangled with each other, |q1>|q2>." But
this system is -not- entangled, it is a separable state by definition, since
you wrote it as a tensor product. For the state to be entangled, it needs to
be a sum over separable states: sum_i a_i |x_i>|y_i>, with 0 <= a_i < 1. In
this case, you cannot simply factor the state because of the summation. The
second system is not in a pure state individually, nor is the first; it is
only the joint state of the system which is pure. The reduced state for a
single subsystem cannot be pure if it is entangled with other subsystems (due
to the need for the summation in order for the state to be entangled: if you
can remove the summation, as in the case where [x_0 = 0, x_1 = 1, y_0 = 0,
y_1 = 0], then there is no entanglement).

Now, in the case of a separable state |q1>|q2>, you can indeed just consider
the state of each subsystem individually as |q1> and |q2>, so you are correct
in this regard. However, entangled states are not of this form, and cannot be
treated in this way. It's an elementary mistake that many beginners make.
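
If it helps to see it numerically, here is a tiny sketch (my own toy code,
nothing from the paper): the purity Tr(rho^2) of one subsystem is 1 for a
product state |q1>|q2> and strictly less than 1 for an entangled state such as
a|00> + b|11>.

    # Sketch: reduced-state purity for a product state vs. an entangled state.
    import numpy as np

    def reduced_purity(psi):             # psi as a (dim_A x dim_B) amplitude matrix
        rho_b = psi.T @ psi.conj()       # partial trace over subsystem A
        return np.trace(rho_b @ rho_b).real

    q1 = np.array([0.6, 0.8])            # |q1>
    q2 = np.array([1.0, 0.0])            # |q2>
    print(reduced_purity(np.outer(q1, q2)))   # 1.0 -> separable, reduced state is pure

    a, b = 0.6, 0.8
    bell_like = np.array([[a, 0.0],
                          [0.0, b]])          # a|00> + b|11>, entangled
    print(reduced_purity(bell_like))          # |a|^4 + |b|^4 = 0.5392 < 1 -> mixed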

~~~
infparadox
I think I caused a misunderstanding with the |q1>|q2> notation. Consider the
entangled 2-qubit system a|00>+b|11>. The probability that the 2nd qubit is
|0> is |a|^2 and the probability that it is |1> is |b|^2. Now apply I(x)NOT,
so that the system is a|01>+b|10>. The probability that the 2nd qubit is |0>
is now |b|^2 and the probability that it is |1> is |a|^2. The same can be
obtained by considering the 2nd qubit only as a|0>+b|1> and applying NOT only
on the second qubit, without making any operation on the first qubit and
without breaking the entanglement.
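
Here is a quick numeric check of those marginals (my own sketch, with a = 0.6
and b = 0.8 picked arbitrarily):

    # Check the stated marginals of a|00> + b|11> before and after I (x) NOT.
    import numpy as np

    a, b = 0.6, 0.8
    psi = np.array([[a, 0.0],                # rows: 1st qubit, columns: 2nd qubit
                    [0.0, b]])
    NOT = np.array([[0.0, 1.0],
                    [1.0, 0.0]])

    def second_qubit_probs(state):           # (P(2nd qubit = 0), P(2nd qubit = 1))
        return (abs(state[:, 0])**2).sum(), (abs(state[:, 1])**2).sum()

    print(second_qubit_probs(psi))           # (|a|^2, |b|^2) = (0.36, 0.64)
    print(second_qubit_probs(psi @ NOT))     # flips to (|b|^2, |a|^2) after NOT on the 2nd qubit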

~~~
irljf
Yes, this is true for any unitary operation on a different subsystem, and is
known as the no-signalling principle. But this doesn't let you remove the
second system: you are still stuck with a mixed state on the first system, not
a pure state (in your case the state is 1/2 |0><0| + 1/2 |1><1|).

~~~
infparadox
Well, I still don't see anything wrong with the math. Even if the authors
rewrite the equations as sum(|A_k>|C_k>) instead of sum(|C_k>) and apply
I(x)M_x instead of applying only M_x on |C_k> alone, this will not change the
following equations.

------
danbruc
1\. The presented construction is pretty simple and falls in the category of
computing the output for all possible inputs in parallel and then extracting
the correct or best input.

2\. This is exactly the type of construction that is often(?) believed not to
work, i.e. even on a quantum computer you have to exploit the structure of the
problem and can not just throw quantum parallelism at the problem.

3\. They already published a paper in May [1] in which they applied the same
techniques to graph partitioning problems.

4\. Which option is true? The algorithm is wrong, the belief that quantum
computers are not of much help for NP-hard problems was wrong, or there is a
way to translate this algorithm to classical computers? The last one would be
awesome, but the first two seem much more likely.

5\. There is one thing in the paper that struck me as a bit odd, namely that
they just add a couple of dummy qubits to boost the probability of one
measurement. It is certainly possible that something like that works but at
least it runs counter to my intuition. It reminds me of zero-padding FFT
inputs to get a result with higher frequency resolution which of course is not
real.

[1] [http://arxiv.org/abs/1505.06284](http://arxiv.org/abs/1505.06284)

~~~
clemensley
Can you (or someone else) explain why that simple construction where all
outputs are computed in parallel is not believed to work? Is it that quantum
computers have bounded parallelism in some sense?

~~~
repsilat
The "tricky bit" is not in branching out, it's in collapsing back down to
determinism again.

When you do your measurement at the end of the process, if you have a mixed
state you'll just pick one of the "parallel computations" \-- you can't just
"pick the best one". Put simply, it's like a nondeterministic Turing machine
that branches randomly instead of taking "all paths at once."

Where quantum computers are meant to get their extra power is in what's called
"destructive interference". The idea here is that there aren't really
classical probabilities associated with the computation branches, there are
complex "amplitude" numbers that _square_ to give probabilities. When your
computation is laid out in the right way, you want the amplitudes of the "bad"
paths in the computation to cancel out, so when you take a measurement at the
end all you're left choosing between are "good" paths.
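
To see why plain branching-out buys you nothing, here is a toy sketch (my own,
not from the paper): with a uniform superposition over all assignments of a
small made-up 3-SAT instance, the chance that a measurement lands on a
satisfying assignment is just (number of satisfying assignments)/2^n, exactly
what random guessing already gives you.

    # Measuring a uniform superposition hits a satisfying assignment with probability
    # (#satisfying)/2^n -- no better than classical random guessing. (Toy instance.)
    import numpy as np
    from itertools import product

    clauses = [(1, -2, 3), (-1, 2, 4), (2, -3, -4)]   # made-up clauses; +/-k means x_k / not x_k
    n = 4

    def clause_ok(x, clause):
        return any(x[abs(l) - 1] == (l > 0) for l in clause)

    amps = np.full(2**n, 1 / np.sqrt(2**n))           # uniform superposition over all assignments
    p_hit = sum(abs(amps[i])**2
                for i, x in enumerate(product([False, True], repeat=n))
                if all(clause_ok(x, c) for c in clauses))
    print(p_hit)                                      # = (#satisfying assignments) / 2^n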

------
tgb
One of the points of Scott Aaronson's excellent "Quantum Computing since
Democritus" is that if you can quickly solve NP problems, you basically have
super powers. Forget about SAT or Travelling Salesman, you can now do fast
automated theorem proving and the like. Depending on how it were solved, you
wouldn't read about P=NP on arxiv, you'd read about someone making billions on
the stock market. Aaronson describes it as comparable in implausibility to
faster-than-light communication.

If this paper is true, then it looks to me like quantum computers would be a
means of doing that. I'm skeptical. Know those sci-fi stories where they hand-
wave a quantum computer to have infinite computing power? Then they simulate
the universe or hack all the computers simultaneously or whatever. That's all
a laughable misinterpretation of what current state-of-the-art quantum
algorithms can do (even if we remove the technical details of constructing the
machine). Well, if this paper holds up, then that's a whole lot closer to the
truth, maybe!

(Disclaimer: not an expert. I really hope to see Aaronson's analysis of this
soon.)

~~~
sacado2
You could very well have P = NP but not have superpowers yet. There are
problems which are significantly harder than SAT et al. The canonical one for
PSPACE (which is a superset of NP) is QBF (which is just SAT on steroids, i.e.
SAT plus universal quantifiers).

For instance, while SAT can be used to model one-player games (sudoku, 8
queens, etc.), you need QBF to model two-player games (even tic-tac-toe). And
the stock market looks a lot like an n-player game. So, unless NP = PSPACE,
even with P = NP you're stuck with limited superpowers.
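
Schematically (just an illustration, not from the paper), the difference is
where the quantifiers sit; QBF's alternation is exactly a sequence of moves
and counter-moves:

    % SAT: is there any single assignment that works?
    \exists x_1\, \exists x_2 \cdots \exists x_n \; \varphi(x_1,\dots,x_n)
    % QBF: does player 1 have a move for every reply by player 2, and so on?
    \exists x_1\, \forall x_2\, \exists x_3 \cdots Q\, x_n \; \varphi(x_1,\dots,x_n)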

~~~
c_lebesgue
Nitpick: you need a potentially unbounded number of alternations between
existential and universal quantifiers to get PSPACE. Otherwise it is just
somewhere in the polynomial hierarchy.

Otherwise you are right, "outsmarting" the rest of the world is probably in
PSPACE.

------
analogmind
Can anyone give me a simplified explanation of this paper and the consequences
of it?

~~~
amalcon
So, to start with, it's worth remembering that this is a preprint. It hasn't
necessarily been reviewed or anything, so it shouldn't be considered a final
result.

The basic idea is that they've found an efficient quantum algorithm for
solving an NP-hard problem (MAX-E3-SAT). This is a version of the boolean
satisfiability problem: given a boolean expression (in this case, in a certain
normal form) with N variables, is there some way to set the variables such
that the expression is true? (The MAX variant asks for an assignment
satisfying as many clauses as possible, which is at least as hard.)

As those familiar with complexity theory will know, if it's possible to solve
one NP-hard problem efficiently, it's also possible to solve every NP problem
efficiently by composing the algorithm with a polynomial-time reduction.
Therefore, we can now use a quantum computer to solve arbitrary NP problems
(well, we could if we had a large enough one).

"NP" is computer science shorthand for problems whose answers can be checked
efficiently, but cannot be discovered efficiently.
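
As a minimal illustration of the "checked efficiently" half (a hypothetical
certificate checker, nothing specific to the paper): verifying a claimed
assignment against a CNF formula takes time linear in the size of the formula,
while finding one is the hard part.

    # Verifying a claimed satisfying assignment for a CNF formula is fast and easy.
    def verify_sat(clauses, assignment):
        # clauses: tuples of literals (+/-k for x_k / not x_k); assignment: dict k -> bool
        return all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses)

    # (x1 or not x2) and (x2 or x3), checked against x1=True, x2=False, x3=True
    print(verify_sat([(1, -2), (2, 3)], {1: True, 2: False, 3: True}))   # True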

Prior to this, only a few NP problems were efficiently solvable by quantum
computers (factoring and things that reduce to factoring, due to Shor's
algorithm). With this discovery, any NP problem shares this characteristic.
Technically NP only applies to decision problems, but it's generally not that
difficult to extend it to computations that produce additional information.

One consequence of this is that it will completely ruin public-key
cryptography in the face of a quantum computer: public-key cryptography relies
on hard instances of NP problems, and no such instances would remain hard for
a quantum computer. This is relatively minor, as we currently use
factorization (RSA) or elliptic curves for public-key encryption, and those
were already breakable with Shor's algorithm.

Quantum computers should also be able to reverse hashes and find collisions
easily: the NP algorithm for this is trivial. You may not be able to reverse
them to the "correct" input, though. If someone were silly enough to use a
perfectly good quantum computer to mine Bitcoin, it would be insanely
efficient. This is relatively minor as well -- more important than the public
key cryptography, but it's not going to cause all computer security to
collapse.

Many useful real-world problems are in the NP space: traveling salesman,
protein folding, things of that nature. A quantum computer could be used to
solve these problems. This would be a really big deal, and overall a great
boon to humanity.

~~~
myderpyaccount
See, the weird way I always understood NP hardness was by virtue of the
impossibility of it. Suppose all NP hard problems are solvable. Assume this
knowledge is distributed, a culture surrounds it, and eventually that
knowledge is structured into the culture intrinsically (like intuitive
knowledge, like knowing how to move your arm; the knowledge becomes, in
effect, deterministic).

Now, if that occurs, then new layers of conceptual reasoning open up (however
they may manifest: the problems of human creativity, of opening domains of
knowledge, of forming new ideas or building things, anything humans will be
able to try to do once all NP hard problems can be both efficiently discovered
and checked).

Assuming we can do new stuff now that we couldn't before, this opens the table
to new variation, new problems, and new plausible solutions (but on a meta
layer, where the problems of today are intrinsically quickly solvable).

To me, that is what NP hard has always been. Even when you think you have
solved it, you haven't, precisely because of the ways we construct and define
problems. We either select that which can be known or that which can not, and
we translate this into a language. For what can not be known, that which has
significant variance across the human domain, such as mind reading, can not be
predicted. Even in a fairly formal logical or mathematical language, which
becomes more difficult the more compressed the language is with cultural
information.

So even though we might say we've solved NP hard problems, that allows us to
define new problems which are going to be more difficult to solve (unless the
universe just stops forever or approaches an information uniformity). All that
new variation is the new NP hard.

I know it's not a super mathy solution, but there is a problem with being able
to look at a letter and knowing it means 'A' which may have very little
meaning in one context, but in another it might represent a culturally
standardized 'memory'. It retains that which is known. The more information
you have to compress into symbols, the complexity of your systems explodes as
those symbols can construct more and more permutations and meanings (like what
a graph represents). You can't know the answer unless you know the answer,
literally.

I still study all the math and formal logic and complexity theory in my free
time though (because I do get the dimensional perspective - finding the
shortest path is a mathematical problem at heart), but the above is my
intuitive understanding of NP hard: that which is infinite and capable of
redefining itself in ways the human mind, independent of the collective, is
incapable of imagining.

~~~
lmm
> Assuming we can do new stuff now that we couldn't before, this opens the
> table to new variation, new problems, and new plausible solutions.

Right, but that's circular reasoning. Closed systems exist, there are things
that can be understood so fully that you can't think of any new questions to
ask about them that you don't already understand. What if we really did solve
everything?

> So even though we might say we've solved NP hard problems, that allows us to
> define new problems of which are going to be more difficult to solve (unless
> the universe just stops forever or approaches an information uniformity).
> All that new variation is the new NP hard.

NP is a specific, well-defined category of things. It isn't remotely the class
of the hardest problems we can think of. If NP=P that doesn't mean we can
solve every problem easily. But it does mean we can solve a lot of important
problems easily.

~~~
myderpyaccount
> Closed systems exist, there are things that can be understood so fully that
> you can't think of any new questions to ask about them that you don't
> already understand.

The existence of closed systems does not imply all systems are closed. Also,
you can think you understand things so well that you have to revisit things
you think you know in order to find new questions.

> It isn't remotely the class of the hardest problems we can think of. If NP=P
> that doesn't mean we can solve every problem easily. But it does mean we can
> solve a lot of important problems easily.

That's fine to you, but there are obviously some problems of communication
and definition of terms going on, and I don't think that's entirely surprising
considering how lost everyone tends to get in the language of it.

P=NP means you can get computers to write their own math proofs. Tell that to
a mathematician who would rather compare his process and functionality to that
of Picasso or Rembrandt painting, or Mozart composing, than to that of the
self-serve pay station at your local grocer's. Maybe we can start birthing
mechanical babies, I don't really know.

~~~
lmm
> The existence of closed systems does not imply all systems are closed

I never said they were. Just that we don't know one way or the other.

> P=NP means you can get computers to write their own math proofs.

Yes and no. It means the computer can fill in the technical details of a
proposition you've already formalized. But mathematicians only ever sketched
those parts in papers anyway.

> Maybe we can start birthing mechanical babies, I don't really know.

You're not making any sense. Try starting with more concrete things before
waxing philosophical.

~~~
myderpyaccount
I've read from mathematicians who consider their computer to be a tool, and
from others who see the computer as a partner. As a hobbyist who studies
automated theorem provers (like Coq), there is so much elegance that goes into
the computational proofs needed just to construct a program that allows a
mathematician to do their work.

Can you honestly say that it doesn't take an extreme amount of effort to
construct the perfect typing system that allows you to even begin to be able
to write a proof with a machine? People have to prove that theorem provers are
correct with respect to what they define. This is no trivial task, it's an
entirely different domain of knowledge.

------
whyleyc
Somewhat relevant (If I just proved that P = NP, how do I start taking over
the world?):

[http://www.quora.com/If-I-just-proved-that-P-NP-how-do-I-
sta...](http://www.quora.com/If-I-just-proved-that-P-NP-how-do-I-start-taking-
over-the-world)

------
Certhas
Wow. Can't wait to see if this holds up.

For context: "On the other hand, Bennett et al. [11] gave oracle evidence that
NP is not in BQP, and while no one regards such evidence as decisive, today it
seems extremely unlikely that quantum computers can solve NP-complete problems
in polynomial time."

[http://www.scottaaronson.com/papers/bqpph.pdf](http://www.scottaaronson.com/papers/bqpph.pdf)

------
yk
Can somebody comment on whether this actually means that one can essentially
solve SAT with a quantum computer? In particular:

    
    
        The algorithm prepares a superposition of all possible
        variable assignments, then the algorithm evaluates the
        set of clauses using all the possible variable
        assignments simultaneously and then amplifies the
        amplitudes of the state(s) that achieve(s) the maximum
        satisfaction to the set of clauses using a novel
        amplitude amplification technique that applies an
        iterative partial negation and partial measurement.
    

This sounds a bit like hiding some runtime in 'partial measurement.' (But I
have only read the introduction so far, and I don't understand quantum
computers.)

~~~
sanxiyn
I am not sure whether this works, but the paper's statement does actually mean
that one can solve SAT in polynomial time with high probability with a quantum
computer.

~~~
andrelaszlo
An "arbitrary [sic] high probability" even. If I got it right.

~~~
sanxiyn
Yup, arbitrarily high probability.

------
andyjohnson0
The paper describes a "proposed" algorithm with a lot of theoretical
justification [1]. Is there any way to verify it on a real quantum computer or
reliable simulation? I'm thinking of the way that Shor's algorithm was
verified [2] using a real quantum device.

[1] I'm not qualified to understand the detail of the paper, although I can
appreciate its general outline and the broad consequences of it being correct.

[2] [http://arxiv.org/abs/1202.5707](http://arxiv.org/abs/1202.5707)

------
sanxiyn
If this holds, this is the discovery of the century.

~~~
yxhuvud
Don't be silly. A hundred years takes you back to 1915. There are _plenty_ of
scientific discoveries made since then that are more important than this will
ever be. The general theory of relativity, the transistor, DNA and many others
are obviously more important.

Heck, most of the early groundwork in the computational field is probably more
important than an algorithm that is useful only on quantum computers.

~~~
andyjohnson0
"Discovery of the century" is a shorthand for really important. Don't take it
literally. Proving P=NP would be extremely significant in mathematics, though.

~~~
mekkz
Nobody doubts that proving P=NP would be the discovery of the century, but
that's not what this paper is trying to prove.

~~~
andyjohnson0
You're correct. They're claiming an algorithm to solve an NP hard problem in
polynomial time.

I was thinking of "if this paper is a contribution to eventually
proving/disproving P=NP...", but failed to express that.

------
bhouston
Here is the author's previous publication track record:

[https://scholar.google.com/citations?hl=en&user=CZz2XFIAAAAJ...](https://scholar.google.com/citations?hl=en&user=CZz2XFIAAAAJ&view_op=list_works&sortby=pubdate)

His co-author is more influential as measured by number of citations:

[https://scholar.google.com/citations?user=co2yqKoAAAAJ&hl=en](https://scholar.google.com/citations?user=co2yqKoAAAAJ&hl=en)

~~~
deong
I know Jon Rowe a little bit. He's a very well-respected and reputable
researcher.

------
anukulrm
As much as I'd like to be on the cutting edge of science/math, I (and I
suspect most of us here) don't have the tools to understand, let alone
evaluate, this preprint. So before we make a tweet storm and start blowing our
horns, let's try to hold our horses and wait for peer review. That could take
well over a year.

Well, in the meantime, I have a question: what's a good intro resource to
quantum computing? For a general complexity intro, I like Michael Sipser's
Theory of Computation.

~~~
gjm11
Here's Scott Aaronson's answer to a similar question on his blog a couple of
weeks ago:

 _Yes, Nielsen and Chuang is the standard textbook for quantum computing; it’s
excellent (even though 16 years old by now). If you wanted to start with
something shorter and more introductory, you could try David Mermin’s
“Introduction to Quantum Computer Science.”_

~~~
dmmckay
Lipton wrote a pretty decent and short intro to QC that builds the theory a
little differently from usual. It is aimed more towards Mathematicians and
Computer Scientists than Physicists, I believe, but I think it is pretty
understandable in general.

I think Sipser is probably the best introduction to Computation if you do not
have a background in Computability and Complexity. Arora and Barak's text has
a much larger breadth and is much more detailed as far as Complexity goes. It
is the more appropriate of the two if you are just interested in Computational
Complexity.

------
Strilanc
The algorithm is:

    
    
        1. Hadamard transform into a uniform superposition of all variable assignments (very standard step)
        2. Add an OFF ancilla bit and rotate it by 1/(2m)'th of a turn for each satisfied clause, where m is the number of clauses, using triply-controlled m'th-root-of-NOT gates.
        3. Measure that ancilla qubit
        4. If the ancilla stayed OFF, restart the whole algorithm.
        5. Repeat steps 2 through 4 a total of R times, without triggering the restart (the appropriate value for R is figured out later)
        6. Measure all the variable qubits.
        7. Return SATISFIABLE iff all clauses are satisfied by the measured variable values
    

(Note that I simplified the algorithm. The paper uses ancilla bits to hold the
clauses in qubits, but they are not needed because they're only measured at
the end and otherwise only used as controls.)

For starters, I'll just say that up front there's no way this works. It's too
simple. Other researchers would have tried it. Repeatedly rotating
proportional to the amount satisfied and measuring? Too obvious. I would not
be surprised at all if a majority of researchers have come up with this idea,
tried it, and understood why it didn't work. Grover's algorithm is more
complicated than this.

Anyways, why does it _actually_ not work? It becomes clear if we simplify the
algorithm a bit more by deferring the conditional restarting measurements.

Instead of conditionally continuing the 2-4 loop R times, simply run Step 2 a
total of R times up front. Then measure the R ancilla bits you introduced, and
restart if any of them was false. This is equivalent to conditionally
restarting because a) there's no information dependency between the loops
beyond "don't bother doing it" and b) measurements can always be deferred
until later [1].

Now make the simplifying assumption that all variable assignments, except for
a single satisfying one, satisfy 7/8'ths of the clauses (based on [1]). Note
that applying 7/8'ths of a NOT will transition an OFF bit into a state where
it has a roughly 96% chance of being ON.

Since _most_ possible assignments are not satisfying, and non-satisfying
assignments have a 4% chance of causing a restart (the satisfying assignment
has a 0% chance, but it gets diluted by the exponentially many non-
satisfiers), the ancilla bits' measured value will be dominated by the non-
satisfying assignments. Each ancilla bit will have a roughly 4% chance of
being OFF.

That means the chance of not having to restart is roughly 0.96^R. But the
paper says that R increases faster than linear with respect to the number of
qubits. So the odds of not-restarting are upper bounded by 0.96^N. That places
an exponential multiplier on the running time, because it's so unlikely to
succeed as N gets large.

Therefore this is not a polynomial time algorithm.

1:
[https://en.wikipedia.org/wiki/Deferred_Measurement_Principle](https://en.wikipedia.org/wiki/Deferred_Measurement_Principle)

2:
[https://en.wikipedia.org/wiki/Karloff%E2%80%93Zwick_algorith...](https://en.wikipedia.org/wiki/Karloff%E2%80%93Zwick_algorithm)
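
If anyone wants to poke at the numbers, here is a small classical toy model of
the simplified version above (my own sketch: one satisfying assignment,
everything else satisfying 7/8'ths of the clauses, and the paper's dummy-qubit
trick ignored). Because the assignment register never interferes across
branches here, tracking per-assignment probabilities is enough.

    # Classical toy model of the simplified algorithm described above.
    import numpy as np

    def run(n, R, frac_other=7/8):
        N = 2**n
        p_on_sat   = np.sin(np.pi / 2 * 1.0) ** 2         # satisfying assignment: ancilla ON w.p. 1
        p_on_other = np.sin(np.pi / 2 * frac_other) ** 2   # ~0.962 when 7/8 of the clauses are satisfied
        # probability that all R ancilla measurements come up ON (i.e. the run never restarts)
        p_no_restart = (p_on_sat**R + (N - 1) * p_on_other**R) / N
        # probability that the final measurement then hits the satisfying assignment
        p_hit = (p_on_sat**R / N) / p_no_restart
        return p_no_restart, p_hit

    for n in [8, 12, 16, 20]:
        R = n**2                                           # some super-linear choice of R, for illustration
        survive, hit = run(n, R)
        # expected number of restarts is ~1/survive, which grows exponentially with n
        print(n, R, f"P(no restart) = {survive:.2e}", f"P(hit | no restart) = {hit:.3f}")

The tension shows up directly: making R large enough for the surviving state
to concentrate on the satisfying assignment pushes the no-restart probability
down toward 1/2^n, so the expected number of restarts blows up exponentially.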

~~~
kindlearn
I think your analysis just ignord the dummy qubits they added to the algorithm
to fix the problem you mentioned. So the chance of being ON can be more than
96%. They said it can be high,e.g. 99.999999..% What is the effect of that on
the upper bound you wrote?

~~~
Strilanc
In the case of an instance with a satisfiable solution, it just increases the
chances of a false negative.

When a solution exists, you want the check to be extremely strict; 0% let
through instead of 99.999999%^(n^6).

When there's no solution, you want relaxed checks that allow non-satisfying
solutions to get through after not too many retries (so the algorithm can
terminate).

~~~
kindlearn2
I think you missed an important point in your analysis: the probability of ax
is increasing every iteration. Assume as an example p_1(ax) = 0.97,
p_2(ax) = 0.98 and so on until p_r(ax) \approx 1, so the expected number of
iterations E for the algorithm is E = 1/p_1 + 1/(p_1 p_2) + ... +
1/(p_1 p_2 ... p_r). I think the main trick here will be the rate of increase
of p(ax). If p_1(ax) starts small and the rate of increase is really slow then
E will be exponential, but this will not be the case if p(ax) starts very
high, e.g. 0.99, and the rate of increase of p(ax) is "OK"; then E will be
polynomial. I think someone needs to check the rate of increase of p(ax). This
will be the killing point.
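
A quick numerical feel for that (toy numbers of my own, not from the paper):
the expected number of full restarts is roughly 1/(p_1 p_2 ... p_r), so
everything depends on how fast p_k approaches 1.

    # Expected restarts ~ 1 / (p_1 * p_2 * ... * p_r) for two made-up schedules of p_k.
    import numpy as np

    for r in [50, 200, 800]:
        k = np.arange(1, r + 1)
        fast = 1 - 0.04 / k**2          # p_k -> 1 quickly: the product converges, restarts stay O(1)
        slow = np.full(r, 0.96)         # p_k stuck near 0.96: restarts grow like (1/0.96)^r
        print(r, f"fast: {1 / np.prod(fast):.2f}", f"slow: {1 / np.prod(slow):.2e}")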

------
SFjulie1
I still want to find time to explore my own KSAT project:

True = UP, False = DOWN

Every variable is a spin (node)

Every operation describes a neighbourhood (edge)

Mapping the physical effects of spin magnetism (ferromagnetism or
anti-ferromagnetism) to 2 operations

Refactoring the Boolean equation to only those 2 (Boolean algebra rocks)

Building a kind of Conway game-of-life cellular automaton

Constructing the utility function per variable based on the neighbourhood

Making it evolve (Monte Carlo) in a globally, slowly moving magnetic field

Looking at the number of flips per iteration and analysing the results

Exploring the oscillation half-life

Exploring the temperature required to avoid too much variation or too much
convergence in a frustrated loop

Exploring the utility per operation

MAKE NICE GRAPHS, prove it gives results.
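
A bare-bones sketch of how I read this idea (arbitrary parameters, and no
slowly varying external field or per-operation utilities yet): spins for the
variables, an energy that counts unsatisfied clauses, and Metropolis flips.

    # Minimal Metropolis / simulated-annealing style search over spin assignments.
    import math
    import random

    def energy(spins, clauses):
        # number of unsatisfied clauses; literal +/-k means variable k must be UP / DOWN
        return sum(not any(spins[abs(l)] == (l > 0) for l in c) for c in clauses)

    def monte_carlo_sat(clauses, n_vars, steps=20000, T=0.5, seed=0):
        rng = random.Random(seed)
        spins = {i: rng.random() < 0.5 for i in range(1, n_vars + 1)}
        E = energy(spins, clauses)
        for _ in range(steps):
            i = rng.randint(1, n_vars)
            spins[i] = not spins[i]                              # propose a single spin flip
            E_new = energy(spins, clauses)
            if E_new <= E or rng.random() < math.exp(-(E_new - E) / T):
                E = E_new                                        # accept (Metropolis rule)
            else:
                spins[i] = not spins[i]                          # reject, flip back
            if E == 0:
                return spins                                     # every clause satisfied
        return None

    clauses = [(1, -2, 3), (-1, 2, 4), (2, -3, -4), (-2, 3, -4)]
    print(monte_carlo_sat(clauses, n_vars=4))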

