
Evidence of exponential speed-up in the solution of hard optimization problems? - godelmachine
https://arxiv.org/abs/1710.09278
======
ColinWright
Quoting from the abstract:

      > ... a non-combinatorial approach
      > to hard optimization problems that
      > ... finds better approximations than
      > the current state-of-the-art.

So it appears that they're finding approximate solutions, which already makes
it rather less significant than the title suggests.

Then:

      > We show empirical evidence that
      > our solver scales linearly with
      > the size of the problem, ...

We know that for most NP-Complete problems, most instances are easy. The
question then is whether they are testing their algorithms on instances that
are known to be hard. There's a chance they're doing something like finding
factors of random integers, which we know is easy in almost every instance.

I'm deeply pessimistic.
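
To make "instances that are known to be hard" concrete: for random 3-SAT, the
empirically hardest instances cluster near the clause/variable phase
transition, at a ratio of roughly 4.27. A minimal generator sketch in Python
(my own illustration; the paper's benchmarks may be constructed differently):

    import random

    # Random 3-SAT near the satisfiability phase transition
    # (clause/variable ratio ~ 4.27), where instances are empirically
    # hardest for most solvers.
    def random_3sat(n_vars, ratio=4.27, seed=0):
        rng = random.Random(seed)
        clauses = []
        for _ in range(int(ratio * n_vars)):
            vs = rng.sample(range(1, n_vars + 1), 3)   # 3 distinct vars
            clauses.append([v if rng.random() < 0.5 else -v for v in vs])
        return clauses

    print(random_3sat(10)[:3])  # first few random clauses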

~~~
urgoroger
While I agree with you, you should note that even approximation to within any
degree of error is NP-hard for a large class of NP-complete problems (e.g.
TSP, and the problem MAX-EkSAT used in the paper).

That is, a polynomial algorithm for the approximate problem would be just as
significant as one for the exact version.

~~~
ColinWright
But then that approximation problem is itself NP-Complete, and so solving it
can be converted into exact solutions for other NP-Complete problems.
Otherwise, by definition, it's not NP-Complete.

So it's not clear what they're actually doing, but if they could solve NPC
problems they would say so. So I expect that they are getting approximate
solutions to a problem that is not itself NPC.

~~~
urgoroger
That's correct. Given a polynomial-time approximation algorithm for Ek-SAT
(note: by approximation algorithm I mean something along the lines of the
formal definition, i.e. the solution the algorithm returns falls within some
fixed fraction of the true optimum on all instances, see
[https://en.wikipedia.org/wiki/Hardness_of_approximation](https://en.wikipedia.org/wiki/Hardness_of_approximation)),
you would show P=NP.

My first concern, namely the use of analog methods to solve NP-complete
problems, lies along the same lines as this post:
[https://www.scottaaronson.com/blog/?p=2212](https://www.scottaaronson.com/blog/?p=2212)

Moreover, from what I can see, and as you mention in your original concern,
the evidence for the 'exponential speed-up' claim amounts to some benchmarks,
hardly a proof that their method actually works on all instances, which would
be needed to show that it is an approximation algorithm as commonly defined.

The purpose of my post was to highlight that for some problems (for example,
TSP), even a really crappy approximation algorithm would imply P=NP. As
highlighted by the authors of this paper, MAX-EkSAT is in a similar situation,
though judging by
[https://www.cse.buffalo.edu/~hungngo/classes/2006/594/notes/...](https://www.cse.buffalo.edu/~hungngo/classes/2006/594/notes/hardness.pdf),
unlike TSP, approximation to SOME ratio is possible, though there exists a
ratio > 1 which cannot be beaten unless P=NP.

I was simply trying to address the statement "So it appears that they're
finding approximate solutions, which already makes it rather less significant
than the title suggests," since an actual approximation algorithm for some
ratio really WOULD be significant (showing P=NP).
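
To make "some ratio" concrete: a uniformly random assignment satisfies each
EkSAT clause with probability 1 - 2^-k, giving a (1 - 2^-k)-approximation in
expectation (7/8 for k=3, which Håstad proved is optimal unless P=NP). A quick
empirical check, purely my own illustration:

    import random

    # Empirically: a uniformly random assignment satisfies about a
    # 1 - 2^-k fraction of random E3-SAT clauses (7/8 for k = 3).
    rng = random.Random(0)
    n_vars, n_clauses = 100, 1000
    clauses = [[v if rng.random() < 0.5 else -v
                for v in rng.sample(range(1, n_vars + 1), 3)]
               for _ in range(n_clauses)]

    def satisfied_fraction(assign):
        return sum(any((lit > 0) == assign[abs(lit)] for lit in cl)
                   for cl in clauses) / n_clauses

    trials = 200
    avg = sum(satisfied_fraction({v: rng.random() < 0.5
                                  for v in range(1, n_vars + 1)})
              for _ in range(trials)) / trials
    print(avg)  # close to 7/8 = 0.875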

~~~
joshuamorton
I don't think your first statement is true: there exist polynomial-time
approximation schemes and approximation algorithms for NP-complete problems,
notably a PTAS for knapsack. In other words, an approximation for one
NP-complete problem doesn't imply an approximation for all of them.

I can't fully articulate the reasons for this, though.

~~~
zmonx
The reason for this apparent discrepancy is found in the difference between
strong and weak NP-completeness.

The fully polynomial-time approximation scheme (FPTAS) for the knapsack
problem is built on an exact dynamic program that runs in so-called
_pseudo-polynomial_ time:

[https://en.wikipedia.org/wiki/Pseudo-polynomial_time](https://en.wikipedia.org/wiki/Pseudo-polynomial_time)

This means that the runtime is polynomial in the _numeric value_ of the
knapsack capacity. Since the _encoding_ of that numeric value only takes
logarithmic space (unless you are using unary encoding), the runtime is in
fact again _exponential_ in the size of the input. The FPTAS keeps its runtime
genuinely polynomial by rounding the item values first, trading accuracy for
speed.

For this reason, the knapsack problem is called _weakly_ NP-complete:

[https://en.wikipedia.org/wiki/Weak_NP-completeness](https://en.wikipedia.org/wiki/Weak_NP-completeness)

One can show that, unless P=NP, a so-called _strongly_ NP-hard optimization
problem with a polynomially bounded objective function cannot have a fully
polynomial-time approximation scheme:

[https://en.wikipedia.org/wiki/Polynomial-time_approximation_scheme](https://en.wikipedia.org/wiki/Polynomial-time_approximation_scheme)

SAT, Hamiltonian circuit etc. are _strongly_ NP-complete:

[https://en.wikipedia.org/wiki/Strong_NP-completeness](https://en.wikipedia.org/wiki/Strong_NP-completeness)

Thus, an FPTAS for these problems would indeed imply P=NP.
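
For concreteness, the textbook exact DP below runs in O(nW) time: polynomial
in the numeric capacity W, but exponential in the ~log2(W) bits needed to
write W down. A minimal sketch of mine, not tied to the links above:

    # Exact 0/1 knapsack via the classic dynamic program. The O(n * W)
    # runtime is polynomial in the numeric capacity W but exponential
    # in the ~log2(W) bits needed to encode W: "pseudo-polynomial".
    def knapsack(values, weights, W):
        best = [0] * (W + 1)            # best[c] = max value at capacity c
        for v, w in zip(values, weights):
            for c in range(W, w - 1, -1):   # descending: each item once
                best[c] = max(best[c], best[c - w] + v)
        return best[W]

    print(knapsack([60, 100, 120], [10, 20, 30], 50))  # -> 220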

------
colordrops
This has been popping up every few months for several years, and it seems like
BS.
Here's an example:

[https://news.ycombinator.com/item?id=8652475](https://news.ycombinator.com/item?id=8652475)

And here's Scott Aaronson's debunking:

[https://www.scottaaronson.com/blog/?p=2212](https://www.scottaaronson.com/blog/?p=2212)

If these guys could really solve NP-complete problems, they should have some
amazing concrete results to show at this point, which they don't.

~~~
spuz
You link to research from 3 years ago. Does the latest paper (Oct 2017) not
represent the concrete results you claim they don't have?

~~~
colordrops
They claim to _solve_ NP-complete problems, which this paper does not
demonstrate.

------
zmonx
The authors have built a start-up based on these ideas:

[http://memcpu.com/](http://memcpu.com/)

They provide their SAT solver as a service that you can try.

A related paper I recommend in this context is _NP-complete Problems and
Physical Reality_ by Scott Aaronson:

[https://www.scottaaronson.com/papers/npcomplete.pdf](https://www.scottaaronson.com/papers/npcomplete.pdf)

~~~
whaaswijk
It's worth noting that Scott Aaronson has commented on previous work by the
same authors:
[https://www.scottaaronson.com/blog/?p=2212](https://www.scottaaronson.com/blog/?p=2212).
At the time, he was convinced that their approach wouldn't scale. I'm not sure
about their current approach, but at a glance it seems similar to the previous
one.

Previous work:
[https://arxiv.org/pdf/1411.4798.pdf](https://arxiv.org/pdf/1411.4798.pdf)

------
bitL
Is this similar to what the Berkeley MPC Lab is doing with quadratic
programming using electricity? I.e., that nature has the means to solve
optimization problems in an instant, and our approximations of this process in
the form of differential equations let us do something similar, albeit not
100% accurately?
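
To illustrate the general idea I have in mind (a quadratic objective minimized
by integrating its gradient flow, the way an analog circuit relaxes to
equilibrium; a generic sketch, not a claim about the MPC Lab's actual method):

    import numpy as np

    # Generic sketch: minimize f(x) = 0.5 x^T Q x + c^T x by integrating
    # the gradient flow dx/dt = -(Q x + c) with forward Euler, mimicking
    # an analog system relaxing to equilibrium.
    Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite
    c = np.array([-1.0, -4.0])

    x = np.zeros(2)
    dt = 0.01
    for _ in range(5000):
        x -= dt * (Q @ x + c)

    print(x, np.linalg.solve(Q, -c))  # flow equilibrium vs. exact minimizer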

~~~
godelmachine
Would you kindly provide a link to the Berkeley MPC Lab work?

~~~
bitL
Here you go:

[http://www.mpc.berkeley.edu/research/analog-optimization](http://www.mpc.berkeley.edu/research/analog-optimization)

~~~
godelmachine
Thanks :)

------
stevemk14ebr
Can someone provide a technical tl;dr? I'm afraid this is over my head, but
I'm extremely curious, as I understand its importance.

~~~
CJefferson
My initial instinct is that it's not very important, but we will wait and see.

There have been many, many examples of people achieving significant
improvements on many classes of NP-complete problems, and more come out every
year; of course, the more general your improvement, the better. Modern SAT
solvers with clause learning are one such improvement.
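
For reference, the skeleton such solvers build on is unit propagation plus
backtracking search; clause learning, heuristics, and restarts are layered on
top. A minimal sketch of that skeleton, not any particular solver:

    # A minimal DPLL SAT solver with unit propagation (no clause
    # learning). Modern CDCL solvers add learned clauses, activity
    # heuristics, restarts, etc. on top of this skeleton.
    def dpll(clauses, assign=None):
        assign = dict(assign or {})
        changed = True
        while changed:                        # unit propagation to fixpoint
            changed = False
            simplified = []
            for cl in clauses:
                if any(assign.get(abs(l)) == (l > 0) for l in cl):
                    continue                  # clause already satisfied
                rest = [l for l in cl if abs(l) not in assign]
                if not rest:
                    return None               # conflict: clause falsified
                if len(rest) == 1:            # unit clause forces a value
                    assign[abs(rest[0])] = rest[0] > 0
                    changed = True
                simplified.append(rest)
            clauses = simplified
        if not clauses:
            return assign                     # all clauses satisfied
        v = abs(clauses[0][0])                # branch on an unassigned var
        for val in (True, False):
            result = dpll(clauses, {**assign, v: val})
            if result is not None:
                return result
        return None

    print(dpll([[1, 2], [-1, 3], [-2, -3]]))  # a satisfying assignment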

The claim of "exponential" seems very dodgy to me: they are running over a
fixed set of benchmarks, and it's hard to measure an exponential improvement
over such a set.

I will wait until I see this peer-reviewed; after a brief skim read, I am a
little worried about how large the circuits they build become.

EDIT: they mostly compare against 3 specially crafted classes of random
problems. No one really cares about random problems, and it looks to me like
they made random problems that their system would be particularly good at
dealing with. That sets off alarm bells for me.

------
thesz
The authors seem not to know about this work:
[https://pdfs.semanticscholar.org/ff7b/3a7b1dad73797ff7c79ac2...](https://pdfs.semanticscholar.org/ff7b/3a7b1dad73797ff7c79ac2dec6ef03f1dae5.pdf)

One of the claims in the fine paper that started the discussion is that
there's a need to perform several flips at once to find a better solution. The
paper I cite does something very similar: it walks along chained variables,
postponing flips until the energy is guaranteed to decrease.

They also claim their solver has O(vars) time complexity for many problems,
including ones with high density (clause/variable ratio). But nothing
revolutionary.
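
For intuition, here is the multi-flip idea as I read it, as a toy local search
over CNF clauses; this is my own illustration, not the algorithm of either
paper:

    import random

    # Toy "multi-flip" local search for CNF-SAT: tentatively flip a whole
    # chain of variables and commit only if the energy (number of
    # unsatisfied clauses) strictly drops.
    def energy(clauses, assign):
        return sum(not any((lit > 0) == assign[abs(lit)] for lit in cl)
                   for cl in clauses)

    def multi_flip_search(clauses, n_vars, chain_len=2, steps=10000):
        rng = random.Random(0)
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        e = energy(clauses, assign)
        for _ in range(steps):
            if e == 0:
                break                        # satisfying assignment found
            chain = rng.sample(range(1, n_vars + 1), chain_len)
            for v in chain:                  # postponed, tentative flips
                assign[v] = not assign[v]
            e_new = energy(clauses, assign)
            if e_new < e:                    # commit only if energy lowers
                e = e_new
            else:                            # otherwise undo the whole chain
                for v in chain:
                    assign[v] = not assign[v]
        return assign, e

    print(multi_flip_search([[1, 2], [-1, 3], [-2, -3]], 3))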

------
Animats
This is really important if not bogus. Any comments from people in the field?

It's like back-propagation for digital logic.

~~~
marmaduke
Did you look at the paper, or are you just throwing the analogy out there? It
seems the described problems are integer programming, which wouldn't have
gradients as in back propagation.

The magic seems to lie in the so-called self-organizing logic gates, detailed
here:

[https://arxiv.org/abs/1512.05064](https://arxiv.org/abs/1512.05064)

~~~
Animats
Right. See the PDF of the paper at [1]. They're mapping Boolean logic into a
kind of analog system.

"SOLGs can use any terminal simultaneously as input or output, i.e., signals
can go in and out at the same time at any terminal resulting in a
superposition of input and output signals ... The gate changes dynamically the
outgoing components of the signals depending on the incoming components
according to some rules aimed at satisfying the logic relations of the gate.
... A SOLG ... can have either stable configurations ... or unstable ...
configurations. In the former case, the configuration of the signals at the
terminals satisfies the logic relation required ... and the signals would then
remain constant in time. Conversely, if the signals at a given time do not
satisfy the logic relation, we have the unstable configuration: the SOLG
drives the outgoing components of the signal to finally obtain a stable
configuration."

This is sort of like supervised training of a neural net. I think. Test cases
with both inputs and desired outputs are needed, and applying them to the
network pushes the parameters towards values that yield the desired outputs.
It's kind of like deliberately over-training a neural net to encode some
explicit function.

The paper is vague about how you train this thing. It seems like it has to be
driven by test cases, but they don't say much about that. It's not clear that
this scales. It ought to work for small numbers of gates, but for large
numbers, does this process converge?

[1]
[https://arxiv.org/pdf/1512.05064.pdf](https://arxiv.org/pdf/1512.05064.pdf)
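
As a toy numerical reading of that quoted passage, take a single AND gate,
clamp its output terminal to 1, and let simple relaxation dynamics drive the
input terminals toward the stable, logic-satisfying configuration. This is
only an illustration of dynamical relaxation under my own assumptions, not the
paper's actual SOLG equations:

    # Toy relaxation of one AND gate: clamp the output to 1 and let the
    # dynamics pull the input terminals toward the satisfying state
    # a = b = 1. Illustration only, not the paper's SOLG model.
    def relax_and_gate(out_clamped=1.0, lr=0.1, steps=500):
        a, b = 0.2, 0.7                  # arbitrary initial terminal signals
        for _ in range(steps):
            err = a * b - out_clamped    # violation of the AND relation
            a -= lr * err * b            # descend on err^2 / 2 w.r.t. a
            b -= lr * err * a            # ... and w.r.t. b (using updated a)
            a = min(max(a, 0.0), 1.0)    # keep signals in [0, 1]
            b = min(max(b, 0.0), 1.0)
        return a, b

    print(relax_and_gate())  # both terminals driven close to 1.0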

------
Mrtierne
This looks too good to be true ... am I missing something?

