
I've proved that P=NP - guilamu
http://arxiv.org/abs/1205.6658

...and no one wants to / is able to refute me.
======
GFK_of_xmaspast
(a) If Feldmann really has a polynomial-time solution to 3-SAT by reducing it
to LP, surely he can demonstrate this empirically; there are plenty of solvers
out there for both LP and 3-SAT. (b) The reduction of 3-SAT to ILP is classical
([http://www.cs.berkeley.edu/~vazirani/s99cs170/notes/npc.pdf](http://www.cs.berkeley.edu/~vazirani/s99cs170/notes/npc.pdf)
and
[http://www.cs.cmu.edu/afs/cs/academic/class/15451-s10/www/recitations/rec0408.txt](http://www.cs.cmu.edu/afs/cs/academic/class/15451-s10/www/recitations/rec0408.txt)
are two quick examples) (c) Just skimming, I don't immediately see any reason
to believe that the solutions to Feldmann's LP formulation will be integral.
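To make point (b) concrete, here's a toy sketch of the classical reduction (my own made-up instance, not Feldmann's formulation): each clause (l1 v l2 v l3) becomes the constraint that its literal terms sum to at least 1, where a positive literal x_i contributes x_i and a negated one contributes (1 - x_i). On 0/1 vectors the constraint holds exactly when the clause is satisfied:

```python
# Toy instance of the classical 3-SAT -> 0-1 ILP reduction. A literal is a
# nonzero int: sign = polarity, abs value = 1-based variable index.
from itertools import product

def clause_satisfied(clause, x):
    """True iff some literal of the clause is true under 0/1 vector x."""
    return any((x[abs(l) - 1] == 1) if l > 0 else (x[abs(l) - 1] == 0)
               for l in clause)

def ilp_constraint_holds(clause, x):
    """The ILP constraint: the clause's literal terms sum to >= 1."""
    return sum(x[abs(l) - 1] if l > 0 else 1 - x[abs(l) - 1]
               for l in clause) >= 1

# On every 0/1 assignment the two notions agree -- that's the reduction.
clauses = [[1, -2, 3], [-1, 2, 3], [-3, -2, -1]]   # made-up formula
for bits in product((0, 1), repeat=3):
    for c in clauses:
        assert clause_satisfied(c, bits) == ilp_constraint_holds(c, bits)
```

The hard part is not this encoding, which is standard, but point (c): getting integral solutions out of the relaxation.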

~~~
guilamu
Thanks for your input, I'll forward that to Feldmann.

------
deadgrey19
Perhaps you should submit this to a real journal rather than Hacker News? Real
experts might be the right people to refute you. May I suggest the Journal of
Complexity
([http://www.journals.elsevier.com/journal-of-complexity/](http://www.journals.elsevier.com/journal-of-complexity/))
or perhaps the Journal of Systems Science and Complexity
([http://www.springer.com/mathematics/applications/journal/11424](http://www.springer.com/mathematics/applications/journal/11424)).

~~~
guilamu
The guy tried just that. After a year of this paper being in "research
committee" the answer was, and I quote: "This is not possible, you're wrong.
Even if you're right, you're wrong". That's it. No one has been able to
disprove this, and I hoped someone skilled enough on HN would be.

~~~
tgflynn
The paper has pages of formalism about interpreting SAT in probabilistic terms
that feel like they probably don't add anything to the problem. I'm not
inclined to work through all of that either, but I think it's very likely he
ends up with an LP whose solutions aren't integral, and therefore he hasn't
actually solved the problem.

If that's not the case, he should implement his algorithm and show that it
actually works on large CNF problems, such as those one can download from a
number of SAT competition websites.
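For anyone who wants to try that: the competition benchmarks ship in the standard DIMACS CNF format. A minimal sketch of a reader plus an assignment checker, assuming the usual "p cnf" header and 0-terminated clause lines:

```python
# Minimal DIMACS CNF reader and assignment checker (sketch, not a solver).

def parse_dimacs(text):
    """Return (num_vars, clauses) from a DIMACS CNF string."""
    num_vars, clauses = 0, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('c'):
            continue                      # comment or blank line
        if line.startswith('p'):
            num_vars = int(line.split()[2])
            continue
        lits = [int(tok) for tok in line.split()]
        assert lits[-1] == 0              # clause lines end with 0
        clauses.append(lits[:-1])
    return num_vars, clauses

def check_assignment(clauses, assignment):
    """assignment maps var index -> bool; True iff every clause is satisfied."""
    return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)
```

If his algorithm produced assignments that pass `check_assignment` on competition instances, that would be hard to argue with.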

~~~
guilamu
Thanks, I hope I'll be able to convince him to register and answer you guys.

From what I gathered, the implication, if he were right, would be the end of
encryption.

~~~
tgflynn
The implications would be far greater than that if he were right. An efficient
algorithm for an NP-complete problem would mean that you could find global
optima of general non-linear functions. That would very likely lead to
superhuman AI.

~~~
schoen
I don't understand where your "very likely" comes from here; AI has quite a
lot of challenges other than ones that we know how to describe as optimization
problems.

You can also in some cases have asymptotically efficient algorithms that
aren't very efficient on the problem instances that we try to use them for; a
somewhat recent example is the AKS primality test.

[https://en.wikipedia.org/wiki/AKS_primality_test#Importance](https://en.wikipedia.org/wiki/AKS_primality_test#Importance)

~~~
tgflynn
For your second objection: by "efficient algorithm" I meant an algorithm that
is practically efficient on very large problems. That probably means no worse
than O(N^2), but preferably more like O(N log N).

For your first objection I think a modified form of AIXI probably already
solves most of the "non-optimization" challenges of AI. By modified form, I
basically mean replace minimum size Turing Machines with minimum size boolean
circuits to transform an uncomputable problem into one that is computable and
that would be tractable given the scenario we are discussing. In other words
you would essentially be doing reinforcement learning by searching for small
boolean circuits which maximize the objective function.
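To show the shape of that idea, here's a deliberately tiny toy (nothing like real AIXI, and the gate set and objective are invented for illustration): enumerate candidate circuits and keep the one maximizing the objective. With single gates and a reward measuring agreement with XOR, the search recovers the XOR gate:

```python
# Toy "search small boolean circuits to maximize an objective": one-gate
# circuits only, scored by agreement with a reward signal (here, XOR).
from itertools import product

GATES = {
    'AND':  lambda a, b: a & b,
    'OR':   lambda a, b: a | b,
    'XOR':  lambda a, b: a ^ b,
    'NAND': lambda a, b: 1 - (a & b),
}

def objective(f):
    """Reward: on how many of the 4 input pairs does f agree with XOR?"""
    return sum(f(a, b) == (a ^ b) for a, b in product((0, 1), repeat=2))

best = max(GATES, key=lambda name: objective(GATES[name]))
# best == 'XOR': the search finds the circuit maximizing this objective.
```

A universal optimizer would be doing the same kind of thing over vastly larger circuit spaces, which is where P=NP would matter.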

Of course there is the rather important matter of defining the objective
function itself, which P=NP may not directly help much with. I think that with
a sufficiently powerful optimization algorithm you will get
(super-)intelligent behavior almost no matter what the objective function is
(as Bostrom has argued with his paper-clip scenario). The question would be
more whether the intelligent behavior that arises is desirable for humans or
not.

Even if you don't believe in my AIXI-like approach, just look at what a highly
suboptimal optimization algorithm like gradient ascent has done when applied
to deep neural networks. If you can get superhuman Go players and image
recognizers with that, what would you get using a universal global optimizer?

Finally, we are talking about a scenario where mathematics is basically a
solved problem (in the sense that you would have an automatic theorem prover
that could prove any theorem you could state formally). If that world didn't
quickly lead to general AI, then there would have to be some mystery to
intelligence that is literally deeper than mathematics itself, which is hard
for me to believe.

~~~
GFK_of_xmaspast
> That probably means no worse than O(N^2),...

We don't even have a quadratic-time algorithm for matrix multiplication (and
it's obvious that Ω(N^2) is a lower bound, since you have to read the input).

~~~
tgflynn
Nor do we have a proof of any superlinear lower bound for 3-SAT.

------
tgflynn
It's easy to formulate 3-SAT as an LP feasibility problem. The problem is that
you need solutions in which all variables are either 0 or 1. That makes it an
integer (0-1) LP problem, and such problems are NP-hard.
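One quick way to see why feasibility of the relaxation alone proves nothing (a standard observation, sketched on a toy instance): setting every variable to 1/2 satisfies every 3-literal clause constraint, since each contributes 3 × 0.5 = 1.5 ≥ 1, even when the formula is unsatisfiable.

```python
# The 8 clauses with every sign pattern over x1, x2, x3 form an
# unsatisfiable 3-CNF (each 0/1 assignment falsifies the clause with
# exactly opposite signs), yet x = (1/2, 1/2, 1/2) satisfies every
# relaxed constraint.
from itertools import product

def clause_lp_value(clause, x):
    """Sum of the clause's literal terms at a (possibly fractional) point x."""
    return sum(x[abs(l) - 1] if l > 0 else 1 - x[abs(l) - 1] for l in clause)

clauses = [[s1 * 1, s2 * 2, s3 * 3]
           for s1, s2, s3 in product((1, -1), repeat=3)]

# No 0/1 assignment satisfies every clause...
assert all(any(clause_lp_value(c, bits) < 1 for c in clauses)
           for bits in product((0, 1), repeat=3))
# ...but the LP relaxation is happily feasible at the fractional point.
assert all(clause_lp_value(c, [0.5, 0.5, 0.5]) >= 1 for c in clauses)
```

So any P=NP claim via LP has to explain where integrality comes from.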

~~~
schoen
I don't know if there's ultimately an analogy in the deeper mathematical
structure, but the difference between the logarithm problem and the discrete
logarithm problem comes to mind. On the one hand, in both cases it's "just"
finding the inverse of an operation over a field. On the other hand, numerical
approximations of continuous logarithms are straightforward and quick to
calculate, yet discrete logarithms aren't.
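The contrast is easy to show computationally (toy parameters of my choosing, brute force for the discrete side, since for large moduli no comparably fast method is known):

```python
# Continuous vs discrete logarithm: the continuous inverse is one library
# call; the discrete one is searched by trial exponentiation here.
import math

x = math.log(20.085536923187668)    # solve e**x = 20.0855..., x ~ 3.0

p, g, target = 17, 3, 13            # toy group: powers of 3 modulo 17
k = next(i for i in range(p - 1) if pow(g, i, p) == target)   # k = 4
```

For cryptographic-size moduli that search space is astronomically large, which is the whole point of the discrete case.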

~~~
tgflynn
I'm not very familiar with the discrete logarithm problem, but it seems to be
fairly generally true that many problems which are easy in the continuous case
become hard when you restrict the solutions to a grid. Even solving the linear
equation Ax = b in n dimensions is, I believe, NP-hard for x \in {0,1}^n.
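The one-row case of that is exactly subset sum: pick a subset of the coefficients summing to b. A brute-force sketch, since the obvious general method is to try all 2^n candidate vectors:

```python
# Brute-force 0/1 solutions to a one-row linear equation a . x = b,
# i.e. subset sum over the coefficients of a.
from itertools import product

def solve_01(a, b):
    """Return some x in {0,1}^n with sum(a_i * x_i) == b, else None."""
    for x in product((0, 1), repeat=len(a)):
        if sum(ai * xi for ai, xi in zip(a, x)) == b:
            return list(x)
    return None
```

For example, `solve_01([3, 5, 7, 11], 12)` finds the subset {5, 7}, but the loop count doubles with every added variable.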

~~~
schoen
I wonder if we could say that this has to do with the integers having measure
zero in the reals, so if you want to actually hit them you may need a
qualitatively better kind of "aim".

(I don't know enough higher math to know if this intuition is meaningful or
not.)

~~~
tgflynn
There are probably a lot of different ways to look at it.

One perspective is that easy problems usually involve convex sets (or
minimizing convex functions, which is pretty much the same thing since the
solution set is guaranteed to be convex). Discrete sets, such as the vertex
set of a hypercube, are trivially non-convex (unless they only contain a
single element).

You can also think about it geometrically. Imagine you have a plane that
passes through a cube. How do you determine whether it intersects one of the
cube's vertices (this is basically equivalent to the subset sum problem)? Of
course it's easy to find a method that works in 3D, but these methods all end
up scaling exponentially with the dimension of the space.
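The convexity point in one trivial check: the LP feasible region [0,1]^3 contains every midpoint of two of its points, while the vertex set {0,1}^3 does not.

```python
# The cube [0,1]^3 is convex; its vertex set {0,1}^3 is not: the midpoint
# of two distinct vertices stays in the cube but is no longer a vertex.
u, v = (0, 0, 1), (1, 0, 1)                      # two cube vertices
mid = tuple((a + b) / 2 for a, b in zip(u, v))   # midpoint (0.5, 0.0, 1.0)
in_cube = all(0 <= c <= 1 for c in mid)          # True: the cube is convex
is_vertex = all(c in (0, 1) for c in mid)        # False: vertices are not
```

That lost convexity is exactly what separates LP from 0-1 ILP.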

~~~
guilamu
Thanks a lot for your comment, I hope this will help Feldmann.

------
guilamu
Not really me, just a mathematician friend who has no clue how to post on HN.

