Ok, I'm going to go out on a limb here and call bullshit, based on the meta-reason that they cite a paper of theirs which already claims to show that NP is contained in BQP using a different NP-complete problem, published in May 2015 (basically the same groundbreaking result as this paper), yet that one somehow completely failed to make waves in the community.
Additionally, on a superficial reading of the paper nothing seems wrong with it, except that it has this air of everything being too easy, too novel, not building on any widely recognized partial results. Considering this problem has been attacked by hundreds (?) of researchers over the years, it seems highly implausible that they just happened to find this one weird trick nobody had thought of before.
Anyway, those are just heuristics, and I'd love to be wrong on this occasion, but life is too short to give my full attention to a dubious paper for more than 15 minutes. I could attempt to whip up this algorithm in a quantum simulator, but that would take a considerable amount of time to do right, so I'll let somebody else do the work of debunking/confirming the result.
1) Generate an even superposition of all possible solution states using a Hadamard transform (so go from |000...0> to 1/sqrt(N)*(|000...0> + |000...1> + ... + |111...1>)). This is pretty standard and used in almost all quantum algorithms.
2) Evaluate each clause of their Boolean E3-CNF function on the superposed input state and transfer the results to a register of control qubits, which yields an entangled state of the input qubits with these control qubits (akin to the oracle operator in the Grover search algorithm that performs the operation |x>|0> -> |x>|f(x)>, only here they use multiple oracle functions and control qubits [which is fine of course]). This is a classical operation which we can implement with reversible computing, btw, so no quantum magic involved yet (apart from applying it to a superposed input state); see the sketch after this list.
3) Increase the amplitude of the states that contain the solution to the search problem, very much akin to the rotation operator employed in Grover search that transfers amplitude from non-solution states to the solution state. <- The problem might be here.
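To make steps 1 and 2 concrete, here is a minimal plain-numpy sketch, assuming a made-up 3-variable instance and my own literal encoding (none of this is the paper's construction). Since the clause evaluation is diagonal in the computational basis, the control register can be tracked as a classical label per basis state:

    import numpy as np

    n = 3                                # number of Boolean variables
    N = 2 ** n

    # Step 1: Hadamard on every qubit turns |000> into a uniform superposition.
    state = np.full(N, 1 / np.sqrt(N))   # amplitude 1/sqrt(N) on every basis state

    # Step 2: evaluate each clause on every basis state and record the result in
    # a control register. A clause is a list of (variable index, negated?) pairs
    # -- my own encoding. This instance is (x0 v x1 v x2)&(~x0 v ~x1 v ~x2).
    clauses = [[(0, False), (1, False), (2, False)],
               [(0, True), (1, True), (2, True)]]

    def clause_value(x, clause):
        return int(any(((x >> i) & 1) != neg for i, neg in clause))

    # The joint state sum_x |x>|C(x)> is diagonal in x, so the control register
    # can be tracked as a classical label attached to each amplitude.
    controls = [[clause_value(x, c) for c in clauses] for x in range(N)]
    for x in range(N):
        print(format(x, "03b")[::-1], controls[x], round(abs(state[x]) ** 2, 3))  # x0 printed first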
So where I have trouble understanding the algorithm is when they apply the operator M_x to increase "the amplitude of the truth vector of clauses with maximum number of satisfied clauses using a partial negation and iterative measurement technique" (page 6 ff.). They seem to want to repeatedly apply that operator to the same state in order to maximize the amplitude transfer to the correct solution states. However, there is a rather simple proof that shows that the maximum amount of amplitude which you can transfer from one component of a quantum state to another using a control qubit is fundamentally limited by the unitary nature of quantum operations. This also limits the efficiency of quantum search in the general case and is the reason that we cannot achieve arbitrary speed-up in the Grover algorithm (see e.g. my PhD thesis for a simple explanation, or Grover's original paper to get the full picture). Basically this proof states that although we can generate the quantum information about the answer that we're looking for, we cannot get that information out of the quantum system in an (exponentially) efficient way.
It would be very exciting if this algorithm were correct, of course; I just can't imagine that someone would publish such an important finding on arXiv.org without having it cross-checked and backed by some other people in the field first (which does not seem to be the case judging from the text).
That being said, I think it should not be too difficult to simulate this algorithm and see if all of their transformations are actually unitary and possible to implement in reversible computing, and if they correctly take into account the entanglement of individual control qubits and the corresponding wave function collapse due to the qubit measurements they perform.
Grover's search algorithm: https://en.wikipedia.org/wiki/Grover%27s_algorithm
My PhD thesis on an experimental realization of Grover's algorithm (see page 136 ff. for a simple explanation of the efficiency of quantum searching): http://iramis.cea.fr/spec/Pres/Quantro/static/wp-content/upl...
Grover's algorithm, original paper: http://arxiv.org/abs/quant-ph/9605043
The no-cloning theorem: https://en.wikipedia.org/wiki/No-cloning_theorem
The "weird trick" is that they do non unitary things via partial measurement to beat that bound.
There might well be hidden costs here. Or judging from what others have said, it might just be a more straightforward mistake.
Heh? Unitarity is basically the defining characteristic of quantum mechanics. You can't just casually violate that.
There are definitely some interesting tricks you can do to exploit these behaviors. (I'm thinking for example of the "quantum Zeno effect", in which one can "find an answer without ever asking the question".) But I don't know remotely enough about quantum computing to know what's actually possible in this context.
 A fun writeup on this idea is here (with an example involving an attempt not to wake any adorable sleeping puppies): http://www.preposterousuniverse.com/blog/2006/02/27/quantum-...
How to reconcile the two (unitary evolution and non-unitary measurement) in a satisfactory theory is an open issue, but non-unitary measurements play a crucial role in many quantum protocols, including, for example, quantum teleportation.
There is absolutely no consensus on this. This is merely saying that you believe in Everett style interpretations, which have so far failed to explain the appearance of non-linear processes in quantum mechanics satisfactorily.
Decoherence does _not_ solve the quantum measurement problem. You cannot derive the Born rule from it.
Basically what decoherence does for you is translate a quantum amplitude on the space of operators into a classical distribution on the spectrum of your coupling. It does not tell you why you are allowed to interpret this distribution as a probability distribution.
It does not tell you why you should be able to say that the state with higher amplitude is more likely to occur. In decoherence all states occur. So it is even unclear a priori what should be meant by the probability of a state occurring.
This is not my private opinion, this is the opinion of people like Zeh, who was instrumental in developing our current understanding of decoherence:
"[Decoherence] would explain why we never observe an apparatus pointing, say, to two different results, i.e. decoherence would provide a solution to the measurement problem of quantum mechanics. As pointed out by many authors, however (e.g. Adler 2003; Zeh 1995, pp. 14–15), this claim is not tenable."
Note that this is not in defense of the correctness of the above paper.
There's a famous theorem that any non-linearity in quantum mechanics (however small) would permit FTL information transfer. Could you link me one paper (either from a reputable journal or a highly cited article on the arXiv) that demonstrates non-linearity other than the exception of instantaneous collapse via measurement?
You can't have quantum computation without measurement, and thus without non-linearity. Quantum computation is not simply "doing the computation in each possible world"; it is also some trick to extract (collapse) the information into one particular world, and I don't think we currently understand fully what that means, but it's certainly non-linear. After all, the information is often in the amplitudes.
Conversely, there are measurement-based models of quantum computation that rely purely on this non-linear process.
It’s quite rare for a genuinely novel solution to a major problem in mathematics to spring up out of nowhere in the literature.
Think of, e.g., the Twin Prime conjecture (that there are infinitely many pairs of primes only 2 apart). For decades there was no progress on this problem, before Yitang Zhang published a paper last year proving that there are infinitely many pairs of primes less than some finite bound apart (70 million in his case). Then a horde of mathematicians attacked the problem using his & other new approaches to progressively reduce that large bound, to the point where it's now been proven that there are infinitely many pairs of primes at most 246 apart (according to the Polymath group - I'm not sure whether this is a published result yet). The twin prime conjecture is in sight!
Recognition of past work I can understand. But it seems like Occam would argue against using some of the partial results without understanding why they would be useful.
This is the first time I've heard of a quantum simulator. How expensive would it be to run the algorithm in one?
Here are some quantum computer simulators: http://www.quantiki.org/wiki/List_of_QC_simulators
In any event "List of QC simulators" appears to be just that, a list, with no stipulation that it contains only open source software, etc.
After taking a lecture on quantum computing, it was actually a weekend project to implement a basic quantum simulator myself, very useful to confirm that I actually understood the model as well as I thought.
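For the curious, a bare-bones version of such a weekend-project simulator might look like this (a sketch of mine in plain numpy, applying single-qubit gates by tensor reshaping; not anyone's production code):

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    X = np.array([[0, 1], [1, 0]])                 # NOT gate

    def apply_gate(state, gate, qubit, n):
        # Apply a single-qubit gate to one qubit of an n-qubit statevector
        # by reshaping the vector into an n-dimensional tensor.
        state = state.reshape([2] * n)
        state = np.tensordot(gate, state, axes=([1], [qubit]))
        state = np.moveaxis(state, 0, qubit)
        return state.reshape(-1)

    n = 2
    state = np.zeros(2 ** n)
    state[0] = 1.0                        # start in |00>
    state = apply_gate(state, H, 0, n)    # superpose qubit 0
    print(np.round(state, 3))             # [0.707, 0., 0.707, 0.]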
As far as I can see, the algorithm really works. Below is the code for the example shown in Fig. 5, (x0 v x1 v x2)&(~x0 v ~x1 v ~x2). You can try it yourself; here are the steps:
1- copy the below code to a text editor and save it with extension .jaq.
2- open the file with jaQuzzi.
3- set |c0>=|1>,|c1>=|1>,|d0>=|1>
4- run the circuit.
5- select |c0> and |c1> and press on "plot probability chart" to open the "chart center" and see the correct answer for c's.
6- select |x0>, |x1> and |x2> and press on "plot probability chart" to open the "chart center" and see the probability for the superposition of the correct answer of x's.
- You can trace the circuit by selecting |c0> and |c1> and pressing "plot probability chart" to open the "chart center", then resetting the circuit and pressing "step forward" to watch the probability of the solution increase during the loop.
-Don't forget to select auto in the "chart center".
If you want to play around,
1-set the number of x’s and c’s as required.
2-add the dummy qubits.
3-initialize c’s and d’s to |1> (required number of d’s calculated using equations in the paper)
4- set V = (1/2)*[[1+exp(iπ/k), 1-exp(iπ/k)], [1-exp(iπ/k), 1+exp(iπ/k)]], where k is the number of c's and d's (a quick unitarity check for this V follows after the list).
(you have to ungroup the set of gates before you modify V, and group again)
5- set the number of iterations in the properties of the group.
The code in another comment...have fun!!!
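If you want to sanity-check the V above before wiring it into a circuit, a few lines of numpy will do (k = 5 is an arbitrary choice of mine):

    import numpy as np

    k = 5                                  # arbitrary example value
    w = np.exp(1j * np.pi / k)
    V = 0.5 * np.array([[1 + w, 1 - w],
                        [1 - w, 1 + w]])

    # V should be unitary, and k applications should amount to a full NOT.
    print(np.allclose(V.conj().T @ V, np.eye(2)))                       # True
    print(np.allclose(np.linalg.matrix_power(V, k), [[0, 1], [1, 0]]))  # True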
As for "but this somehow completely failed to make waves in the community", I think "but life is too short to give my full attention to a dubious paper for more than 15 minutes" is an enough explanation.
I've seen plenty of "theories of particle physics" that evidently look fairly plausible to non-physicists (and maybe even to non-specialists) that I can recognize immediately as crackpottery by my own heuristics (but that would take at least a weekend's work to convincingly demonstrate as such, if I wanted to invest that kind of time: I've done that, too). I'm not saying that anyone should take anyone's heuristic guesses in such cases as gospel (once or twice a century we might actually get a delightful surprise), but they can serve as a useful restraining influence if you're tempted to get excited about a headline.
This is almost exactly equivalent to saying that an expert programmer should be able to determine whether an unfamiliar, complicated code base contains a bug by skimming the source. The hard bugs are going to be subtle.
Note how many people on this thread seem to be understanding and discussing its contents, whereas if I were to post 13 pages of "unfamiliar, complicated code" that claimed to do something, I can't imagine anyone would help me debug it.
1. Knuth expounds on this idea here: https://en.wikipedia.org/wiki/Literate_programming
So then the answer is certainly "yes" with the same level of certainty that algorias is claiming. Large, complicated solutions are inherently likely to have subtle flaws.
I suppose it would be interesting for people here if I at least pointed out some obvious red flags.
As a grad student in quantum computing we used to have a game where we would race to see who could be the first one to spot the flaw in a paper that claimed an efficient quantum polynomial time algorithm for NP-complete problems.
My bet after a few minutes looking at the paper is Eq. 16. Eq. 15 is computing the clauses with a traditional clause oracle \sum_x |x>|C(x)>, where C(x) is a vector of the clauses evaluated on the input x. Eq. 16 claims that one can ignore the first register and this becomes \sum_x |C(x)>. That's not correct: the system is entangled across those two registers, and the reduced state of either register on its own is a mixed state. In particular there is no coherence between the |C(x)> states. The next step relies crucially on this property.
infparadox: It's a subsystem, not a subspace. When you discard a subsystem which is entangled (as they specifically say it is) the result cannot be a pure state, and you need to express it in the density matrix formalism by taking the partial trace over the discarded system. What they wrote is not correct.
I think you are misunderstanding the situation or the math. You say "For example, consider a two subsystems entangled with each other, |q1>|q2>." But this system is -not- entangled, it is a separable state by definition, since you wrote it as a tensor product. For a state to be entangled, it needs to be a sum over separable states, sum_i a_i |x_i>|y_i>, with at least two nonzero coefficients a_i. In this case, you cannot simply factor the state because of the summation. The second system is not in a pure state individually, nor is the first; it is only the joint state of the system which is pure. The reduced state for a single subsystem cannot be pure if it is entangled to other subsystems (due to the need for the summation in order for the state to be entangled: if you can remove the summation, as in the case where [x_0 = 0, x_1 = 1, y_0 = 0, y_1 = 0], then there is no entanglement).
Now, in the case of a separable state |q1>|q2>, you can indeed just consider the state of each subsystem individually as |q1> and |q2>, so you are correct in this regard. However, entangled states are not of this form, and cannot be treated in this way. It's an elementary mistake that many beginners make.
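To make this concrete, here is the textbook example in a few lines of numpy (just an illustration of the point above, not the paper's state): take the entangled Bell state (|00> + |11>)/sqrt(2), trace out the second qubit, and the first qubit is left maximally mixed, with no off-diagonal coherence:

    import numpy as np

    psi = np.zeros(4)
    psi[0] = psi[3] = 1 / np.sqrt(2)         # (|00> + |11>)/sqrt(2)
    rho = np.outer(psi, psi).reshape(2, 2, 2, 2)
    rho_1 = np.trace(rho, axis1=1, axis2=3)  # partial trace over qubit 2
    print(rho_1)                             # [[0.5, 0.], [0., 0.5]] -- no coherence
    print(np.trace(rho_1 @ rho_1))           # purity 0.5 < 1, so not a pure state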
2. This is exactly the type of construction that is often(?) believed not to work, i.e. even on a quantum computer you have to exploit the structure of the problem and cannot just throw quantum parallelism at it.
3. They already published a paper in May in which they applied the same techniques to graph partitioning problems.
4. Which option is true? The algorithm is wrong, the belief that quantum computers are not of much help for NP-hard problems was wrong, or there is a way to translate this algorithm to classical computers? The last one would be awesome but the first two seem much more likely.
5. There is one thing in the paper that struck me as a bit odd, namely that they just add a couple of dummy qubits to boost the probability of one measurement. It is certainly possible that something like that works, but at least it runs counter to my intuition. It reminds me of zero-padding FFT inputs to get a result with apparently higher frequency resolution, which of course adds no real information.
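The zero-padding analogy is easy to demonstrate (a quick numpy example of my own; the two tones are closer together than the 1/64 resolution limit, so they stay merged no matter how much you pad):

    import numpy as np

    t = np.arange(64)
    x = np.sin(2 * np.pi * 0.20 * t) + np.sin(2 * np.pi * 0.21 * t)

    plain = np.abs(np.fft.rfft(x))            # 64-point spectrum
    padded = np.abs(np.fft.rfft(x, n=1024))   # 16x zero-padded: finer grid,
                                              # but still one merged peak
    print(plain.argmax() / 64, padded.argmax() / 1024)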
When you do your measurement at the end of the process, if you have a mixed state you'll just pick one of the "parallel computations" -- you can't just "pick the best one". Put simply, it's like a nondeterministic Turing machine that branches randomly instead of taking "all paths at once."
Where quantum computers are meant to get their extra power is in what's called "destructive interference". The idea here is that there aren't really classical probabilities associated with the computation branches, there are complex "amplitude" numbers that square to give probabilities. When your computation is laid out in the right way, you want the amplitudes of the "bad" paths in the computation to cancel out, so when you take a measurement at the end all you're left choosing between are "good" paths.
The reason why numbers can be factored faster using quantum computers is precisely that factoring has an algebraic structure which can be used to achieve this disentangling.
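The smallest possible example of that cancellation is applying a Hadamard gate twice (a toy numpy illustration, nothing specific to the paper): after the first H both "paths" are live, and the second H makes the amplitudes leading to |1> cancel exactly, so you measure |0> with certainty:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    state = np.array([1.0, 0.0])   # |0>
    state = H @ state              # [0.707, 0.707]: both branches alive
    state = H @ state              # [1.0, 0.0]: the |1> amplitudes cancelled
    print(state, np.abs(state) ** 2)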
If this paper is true, then it looks to me like quantum computers would be a means of doing that. I'm skeptical. Know those sci-fi stories where they hand-wave a quantum computer to have infinite computing power? Then they simulate the universe or hack all the computers simultaneously or whatever. That's all a laughable misinterpretation of what current state-of-the-art quantum algorithms can do (even if we remove the technical details of constructing the machine). Well, if this paper holds up, then that's a whole lot closer to the truth, maybe!
(Disclaimer: not an expert. I really hope to see Aaronson's analysis of this soon.)
For instance, while SAT can be used to model one-player games (sudoku, 8 queens, etc.), you need QBF to model two-player games (even tic-tac-toe). And the stock market looks a lot like an n-player game. So, unless NP = PSPACE, even with P = NP you're stuck with limited superpowers.
Otherwise you are right, "outsmarting" the rest of the world is probably in PSPACE.
> The common belief concerning the relationship between NP and BQP was that NP is not contained in BQP (e.g. see chapter 15 of ). However, a recent paper by one of the current authors has shown that an NP-hard problem (Graph Bisection) can be efficiently solved, with low failure probability, by a quantum algorithm . This implies that NP is in fact contained within BQP, as any NP problem can be polynomially reduced to an NP-hard problem
Except that there is some probability that the algorithm will be unable to solve the problem?
This classic argument is flawed, because P=NP doesn't mean that you can "easily" solve all NP problems. Nature is still free to assign huge exponents to difficult problems, making them essentially unsolvable in practice. I even heard Knuth acknowledge this recently.
You can actually prove, for any (concrete) proper subclass of P, that there is a language in P but outside that subclass, by a simple application of the time hierarchy theorem.
The basic idea is that they've found an efficient quantum algorithm for solving an NP-hard problem (MAX-E3-SAT). This is a version of the boolean satisfiability problem: given a boolean expression (in this case, in a certain normal form) with N variables, is there some way to set the variables such that the expression is true?
As those familiar with complexity theory will know, if it's possible to solve one NP-hard problem efficiently, it's also possible to solve every NP problem efficiently via a polynomial-time reduction. Therefore, we can now use a quantum computer to solve arbitrary NP problems (well, we could if we had a large enough one).
"NP" is computer science shorthand for problems whose answers can be checked efficiently, but cannot be discovered efficiently.
Prior to this, only a few NP problems were efficiently solvable by quantum computers (factoring and things that reduce to factoring, due to Shor's algorithm). With this discovery, any NP problem shares this characteristic. Technically NP only applies to decision problems, but it's generally not that difficult to extend it to computations that produce additional information.
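That check/discover asymmetry is easy to see for SAT (a small illustration of mine, not from the article): verifying a proposed assignment takes time linear in the formula, while the only obvious way to find one is to try all 2^n assignments:

    from itertools import product

    # Formula as a list of clauses; each literal is (variable index, negated?).
    clauses = [[(0, False), (1, False), (2, False)],
               [(0, True), (1, True), (2, True)]]

    def satisfies(assign, clauses):          # the "check" side: linear time
        return all(any(assign[i] != neg for i, neg in cl) for cl in clauses)

    # The "discover" side: nothing better known in general than brute force.
    for assign in product([False, True], repeat=3):
        if satisfies(assign, clauses):
            print(assign)
            break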
One consequence of this is that it will completely ruin public-key cryptography in the face of a quantum computer: public-key cryptography relies on a hard instance of an NP problem, and no such instances are hard on a quantum computer anymore. This is relatively minor, as we currently use factorization (RSA) or elliptic-curves for public-key encryption, and those were already breakable with Shor's algorithm.
Quantum computers should also be able to reverse hashes and find collisions easily: the NP algorithm for this is trivial. You may not be able to reverse them to the "correct" input, though. If someone were silly enough to use a perfectly good quantum computer to mine Bitcoin, it would be insanely efficient. This is relatively minor as well -- more important than the public key cryptography, but it's not going to cause all computer security to collapse.
Many useful real-world problems are in the NP space: traveling salesman, protein folding, things of that nature. A quantum computer could be used to solve these problems. This would be a really big deal, and overall a great boon to humanity.
It is NP though (edited as per child comment)
Wikipedia says it's UP https://en.wikipedia.org/wiki/UP_%28complexity%29 (which is contained in NP)
Now, if that occurs, new layers of conceptual reasoning open up (however they may manifest: the problems of human creativity, of opening domains of knowledge, of forming new ideas or building things; anything humans will be able to try to do once all NP-hard problems can be both efficiently discovered and checked).
Assuming we can do new stuff now that we couldn't before, this opens the table to new variation, new problems, and new plausible solutions (but on a meta layer, where the problems of today are intrinsically quickly solvable).
To me, that is what NP-hard has always been. Even when you think you have solved it, you haven't, precisely because of the ways we construct and define problems. We either select that which can be known or that which cannot, and we translate this into a language. That which cannot be known, that which has significant variance across the human domain, such as mind reading, cannot be predicted. This holds even in a fairly formal logical or mathematical language, and becomes more difficult the more compressed the language is with cultural information.
So even though we might say we've solved NP-hard problems, that allows us to define new problems which are going to be more difficult to solve (unless the universe just stops forever or approaches an information uniformity). All that new variation is the new NP-hard.
I know it's not a super mathy solution, but there is a problem with being able to look at a letter and knowing it means 'A' which may have very little meaning in one context, but in another it might represent a culturally standardized 'memory'. It retains that which is known. The more information you have to compress into symbols, the complexity of your systems explodes as those symbols can construct more and more permutations and meanings (like what a graph represents). You can't know the answer unless you know the answer, literally.
I still study all the math and formal logic and complexity theory in my free time though (because I do get the dimensional perspective - finding the shortest path is a mathematical problem at heart), but the above is my intuitive understanding of NP-hard: that which is infinite and capable of redefining itself in ways the human mind, independent of the collective, is incapable of imagining.
Right, but that's circular reasoning. Closed systems exist, there are things that can be understood so fully that you can't think of any new questions to ask about them that you don't already understand. What if we really did solve everything?
> So even though we might say we've solved NP-hard problems, that allows us to define new problems which are going to be more difficult to solve (unless the universe just stops forever or approaches an information uniformity). All that new variation is the new NP-hard.
NP is a specific, well-defined category of things. It isn't remotely the class of the hardest problems we can think of. If NP=P that doesn't mean we can solve every problem easily. But it does mean we can solve a lot of important problems easily.
The existence of closed systems does not imply all systems are closed. Also, you can think you understand things so well that you have to revisit things you think you know in order to find new questions.
> It isn't remotely the class of the hardest problems we can think of. If NP=P that doesn't mean we can solve every problem easily. But it does mean we can solve a lot of important problems easily.
That's fine to you, but there's obviously some communication and definition of terms problems going on, and I don't think that's entirely surprising considering how lost everyone tends to get in the language of it.
P=NP means you can get computers to write their own math proofs. Tell that to a mathematician, who would rather compare his process and functionality to Picasso or Rembrandt painting, or Mozart composing, than to the self-serve pay station at your local grocer's. Maybe we can start birthing mechanical babies, I don't really know.
I never said they were. Just that we don't know one way or the other.
> P=NP means you can get computers to write their own math proofs.
Yes and no. It means the computer can fill in the technical details of a proposition you've already formalized. But mathematicians only ever sketched those parts in papers anyway.
> Maybe we can start birthing mechanical babies, I don't really know.
You're not making any sense. Try starting with more concrete things before waxing philosophical.
Can you honestly say that it doesn't take an extreme amount of effort to construct the perfect typing system that allows you to even begin to write a proof with a machine? People have to prove that theorem provers are correct with respect to what they define. This is no trivial task; it's an entirely different domain of knowledge.
For me, as soon as I use language, it is symbolic. So even if I were to describe the shape of a rock to you, it is still not a concrete example, unless I give you the rock.
Maybe that sounds absurd, but I am being serious. Can you clarify?
The result also has consequences for quantum computation specifically, in that it also takes care of the negative sign problem, which arises when simulating certain kinds of important quantum many-body systems.
And last but certainly not least, all those press releases breathlessly proclaiming "quantum computers work by trying a gazillion possibilities at once" (with no worries about the plausibility of actually reading out the correct answer with non-negligible probability) weren't so far off after all.
All in all, either there is a mistake hidden somewhere in the preprint or this is the greatest scientific achievement since Evolution invented Human Level intelligence.
See here for intuition on why NP-complete problems if at all solvable without brute force, should also be efficiently solvable: http://windowsontheory.org/2013/05/06/reasons-to-care-in-hon...
Currently, quantum computers are believed to break only our most used algorithms, not all of them.
"On the other hand, Bennett et al.  gave oracle evidence that NP is not in BQP, and while no one regards such evidence as decisive, today it seems extremely unlikely that quantum computers can solve NP-complete problems in polynomial time."
The algorithm prepares a superposition of all possible variable assignments, then the algorithm evaluates the set of clauses using all the possible variable assignments simultaneously, and then amplifies the amplitudes of the state(s) that achieve(s) the maximum satisfaction of the set of clauses using a novel amplitude amplification technique that applies an iterative partial negation and partial measurement.
They are attempting to solve the problem for k=3 and then probably figure out how to run the approximation algorithm for this problem on a quantum computer.
 I'm not qualified to understand the detail of the paper, although I can appreciate its general outline and the broad consequences of it being correct.
If someone builds a useful NP problem solving machine, that will change the world.
Heck, most of the early groundwork in the computational field is probably more important than an algorithm that is useful only on quantum computers.
At that point quantum computing research will receive gobs of funding -- and our knowledge will absolutely explode.
You can debate whether it's the absolute top discovery, but this would be a lever to shift around the entire math and computer science field. If it wasn't a shaky paper, of course.
(Side note: until Scott Aaronson says otherwise, this paper is wrong.)
I was thinking of "if this paper is a contribution to eventually proving/disproving P=NP...", but failed to express that.
His co-author is more influential as measured by number of citations:
Well, in the meanwhile, I have a question: what's a good intro resource to quantum computing? For a general complexity intro, I like Michael Sipser's Introduction to the Theory of Computation.
Yes, Nielsen and Chuang is the standard textbook for quantum computing; it's excellent (even though 16 years old by now). If you wanted to start with something shorter and more introductory, you could try David Mermin's "Quantum Computer Science: An Introduction."
I think Sipser is probably the best introduction to Computation if you do not have a background in Computability and Complexity. Arora and Barak's text has a much larger breadth and is much more detailed as far as Complexity goes. It is the more appropriate of the two if you are just interested in Computational Complexity.
1. Hadamard transform into a uniform superposition of all variable assignments (very standard step)
2. Add an OFF ancilla bit and rotate it by 1/(2m)'th of a turn for each satisfied clause, where m is the number of clauses, using triply-controlled m'th-root-of-NOT gates.
3. Measure that ancilla qubit
4. If the ancilla stayed OFF, restart the whole algorithm.
5. Repeat steps 2 through 4 a total of R times, without triggering the restart (the appropriate value for R is figured out later)
6. Measure all the variable qubits.
7. Return SATISFIABLE iff all clauses are satisfied by the measured variable values
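Here is a toy statevector-level simulation of that loop (my own sketch under the post-selection reading of steps 2-4; the instance and R are arbitrary). Post-selecting the ancilla ON multiplies each amplitude by sin(pi*s(x)/(2m)), where s(x) is the number of satisfied clauses:

    import numpy as np

    n, m, R = 3, 2, 5          # variables, clauses, rounds -- all arbitrary here

    def s(x):                  # satisfied clauses of (x0 v x1 v x2)&(~x0 v ~x1 v ~x2)
        bits = [(x >> i) & 1 for i in range(n)]
        return int(any(bits)) + int(not all(bits))

    sat = np.array([s(x) for x in range(2 ** n)])
    amps = np.full(2 ** n, 2.0 ** (-n / 2))       # step 1: uniform superposition
    p_no_restart = 1.0
    for _ in range(R):                            # steps 2-5
        amps = amps * np.sin(np.pi * sat / (2 * m))  # ancilla measured ON
        p = float(np.sum(amps ** 2))              # chance this round avoided a restart
        p_no_restart *= p
        amps /= np.sqrt(p)                        # renormalise the surviving state
    print(np.round(amps ** 2, 4))   # weight piles onto the max-satisfying x's
    print(p_no_restart)             # the cost: chance of never restarting shrinks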
For starters, I'll just say that up front there's no way this works. It's too simple. Other researchers would have tried it. Repeatedly rotating proportional to the amount satisfied and measuring? Too obvious. I would not be surprised at all if a majority of researchers have come up with this idea, tried it, and understood why it didn't work. Grover's algorithm is more complicated than this.
Anyways, why does it actually not work? It becomes clear if we simplify the algorithm a bit more by deferring the conditional restarting measurements.
Instead of conditionally continuing the 2-4 loop R times, simply run Step 2 a total of R times up front. Then measure the R ancilla bits you introduced, and restart if any of them was false. This is equivalent to conditionally restarting because a) there's no information dependency between the loops beyond "don't bother doing it" and b) measurements can always be deferred until later.
Now make the simplifying assumption that all variable assignments, except for a single satisfying one, satisfy 7/8'ths of the clauses (based on ). Note that applying 7/8'ths of a NOT will transition an OFF bit into a state where it has a roughly 96% chance of being ON.
Since most possible assignments are not satisfying, and non-satisfying assignments have a 4% chance of causing a restart (the satisfying assignment has a 0% chance, but it gets diluted by the exponentially many non-satisfiers), the ancilla bits' measured value will be dominated by the non-satisfying assignments. Each ancilla bit will have a roughly 4% chance of being OFF.
That means the chance of not having to restart is roughly 0.96^R. But the paper says that R increases faster than linear with respect to the number of qubits. So the odds of not-restarting are upper bounded by 0.96^N. That places an exponential multiplier on the running time, because it's so unlikely to succeed as N gets large.
Therefore this is not a polynomial time algorithm.
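Plugging in numbers (assuming, as above, a per-round survival probability around 0.96 and R growing at least linearly in N):

    # Expected number of restarts is ~ 1/0.96^N, which blows up exponentially.
    for N in (10, 100, 1000):
        print(N, 0.96 ** N)   # ~0.665, ~0.017, ~1.9e-18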
I am the co-author of the textbook mentioned as "Lipton" in the comment by user dmmackay above.
When a solution exists, you want the check to be extremely strict; 0% let through instead of 99.999999%^(n^6).
When there's no solution, you want relaxed checks that allow non-satisfying solutions to get through after not-too-many retries (so the algorithm can terminate).
True = UP, False = DOWN
Every variable is a spin (node).
Every operation describes a neighborhood (edge).
Map the physical effects of spin magnetism (ferromagnetism or antiferromagnetism) to 2 operations.
Refactor the Boolean equation to use only those 2 (Boolean algebra rocks).
Build a kind of Conway's Game of Life cellular automaton.
Construct the utility function per variable based on the neighborhood.
Make it evolve (Monte Carlo) in a globally, slowly moving magnetic field.
Look at the number of flips per iteration and analyse the results.
Explore the oscillation half-life.
Explore the temperature required to avoid too much variation / too much convergence in a frustrated loop.
Explore utility per operation.
MAKE NICE GRAPHS.
Prove it gives results.
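A minimal classical sketch of that recipe (my own toy: energy = number of unsatisfied clauses, Metropolis flips, geometric cooling; the literal encoding and schedule are arbitrary choices of mine):

    import math
    import random

    # Clauses as tuples of signed literals: +i means x_i, -i means ~x_i.
    clauses = [(1, 2, 3), (-1, -2, -3)]    # (x1 v x2 v x3) & (~x1 v ~x2 v ~x3)
    n = 3

    def energy(assign):                    # number of unsatisfied clauses
        return sum(not any((lit > 0) == assign[abs(lit)] for lit in cl)
                   for cl in clauses)

    assign = {i: random.random() < 0.5 for i in range(1, n + 1)}
    T, flips = 2.0, 0
    for step in range(2000):
        i = random.randint(1, n)           # propose flipping one spin/variable
        before = energy(assign)
        assign[i] = not assign[i]
        delta = energy(assign) - before
        if delta > 0 and random.random() > math.exp(-delta / T):
            assign[i] = not assign[i]      # reject the uphill flip
        else:
            flips += 1
        T *= 0.999                         # slowly "moving field" / cooling
    print(assign, energy(assign), flips)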