
An Argument for P=NP (2005) [pdf] - todd8
http://kryten.mm.rpi.edu/scb_pnp_solved22.pdf
======
ColinWright
OK, it says this:

    
    
        For example, the Steiner Tree problem (STP) is
        known to be NP-complete ... Nonetheless, a simple
        physical process (termed an analog computation)
        can apparently solve it quickly.
    

No, it can't. Small, trivial cases can be solved, but I believe it's been
shown that once you get to even mildly interesting cases the process is just
as likely to find non-optimal solutions as optimal ones, and sometimes more
likely to.

This is at the top of page 2, and it doesn't bode at all well. Section 3 of
Aaronson's paper[0] discusses this explicitly, and comes to the conclusion
that, not to put too fine a point on it, it doesn't work.

In essence, this looks like complete nonsense to me.

[0] [http://arxiv.org/pdf/quant-ph/0502072v2.pdf](http://arxiv.org/pdf/quant-ph/0502072v2.pdf)

~~~
mikeash
I see that sort of thing a lot, where a simple model appears to violate some
algorithmic boundary, but a more complete analysis reveals that all is well.

For example, take "spaghetti sort." This is a weird physical sorting algorithm
where you represent each number to be sorted as a physical strand of uncooked
spaghetti of corresponding length. To perform the sort, you grab the spaghetti
in a bundle and place it on a table so the bottom of each spaghetti strand is
on the table. Then you put your other hand on top, find the longest strand,
and pull it out. Repeat this process once for each strand, and you then have
all the numbers in order.

This is described as a "linear time" algorithm because you perform the same
procedure once per number, so it's clearly O(n). It's obviously impractical
for real-world use, but if you had some utterly ridiculous quantity of
numbers, maybe it could be the best way!
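
Here's a minimal sketch of that idealized accounting (Python; the function
name `spaghetti_sort` and the step counting are mine, not anything from the
paper): if pulling out the longest strand is treated as a single physical
operation, the counter ends at exactly n.

    # Idealized "spaghetti sort": pulling out the longest strand is treated
    # as ONE physical step, so the step counter ends up at exactly n.
    def spaghetti_sort(lengths):
        bundle = list(lengths)
        out = []
        physical_steps = 0
        while bundle:
            longest = max(bundle)      # the hand on top "finds" this for free
            bundle.remove(longest)     # pulling it out: counted as one step
            out.append(longest)
            physical_steps += 1
        return out[::-1], physical_steps  # ascending order after n pulls

    print(spaghetti_sort([3, 1, 4, 1, 5]))  # ([1, 1, 3, 4, 5], 5)

In software, of course, max() and remove() are each O(n) scans, so this
really runs in O(n^2); the O(n) story only works if physics finds the
maximum "for free".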

But of course this ignores important factors. For example, the radius of the
bundle will increase with the square root of the number of strands present
within it. Whatever you use to select and retrieve the longest strand must
move at some finite speed to traverse this distance, which means that the
actual running time is O(n^1.5), substantially worse than the theoretical O(n
log n) limit!
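
As a back-of-the-envelope check (purely illustrative; the assumption is that
the k-th extraction costs about the bundle radius, i.e. ~sqrt(k)):

    # If the k-th extraction costs ~sqrt(k) (the bundle radius), the total
    # sum_{k=1..n} sqrt(k) grows like (2/3) * n**1.5 -- hence O(n^1.5).
    import math

    for n in (10**3, 10**5):
        total = sum(math.sqrt(k) for k in range(1, n + 1))
        print(n, round(total / n**1.5, 4))  # ratio tends to 2/3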

Another factor that is often ignored is the fact that analog systems don't
actually have infinite precision. If you look at their actual (or even
theoretical) precision and convert that to bits, it's not all that many! Lots
of analog solutions can look extremely clever and quick, until you realize
that it's just solving the problem for a 28-bit number, or whatever.
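
As a purely illustrative calculation (the 30 cm strand and 0.1 mm cutting
precision are made-up numbers, not from the comment above): an analog
quantity encodes roughly log2(range / resolution) bits.

    # Illustrative only: equivalent bit-depth of an analog "number".
    import math

    bits = math.log2(0.30 / 0.0001)  # 30 cm strand, 0.1 mm precision
    print(round(bits, 1))            # ~11.6 bits -- nowhere near a real number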

~~~
darkmighty
I consider those the Brownian ratchets [1] of computing. Interesting to model
and see what's going on!

Here's another variation: arrange the sticks in a line. Have a pole scan the
sticks from the top, and when the pole is impeded, record its height. You've
just computed max(a1,...,aN) in O(1) time?! (disproving is left as an exercise
-- note you can even assume the pole is weightless and infinite!)

[1]
[https://en.wikipedia.org/wiki/Brownian_ratchet](https://en.wikipedia.org/wiki/Brownian_ratchet)

~~~
mikeash
All too easy!

Relativity tells us that disturbances in the pole cannot move faster than the
speed of light. Therefore the time it takes to find out that your pole hit
something is O(n), since the location of the tallest element will be on
average halfway down your line.

Of course, there's the much more obvious problem that "arrange the sticks in a
line" requires manipulating each stick, and is thus O(n)!

~~~
darkmighty
Correct, of course :) Maybe it's not that good an example... I was trying to
come up with a way to obfuscate the speed-of-light limitation (one usually
thinks of poles as rigid). A thing I like about these is that even though
they're really simple, they highlight limits like the Bekenstein bound: you can make your
experiment arbitrarily large, but if you allow it to shrink arbitrarily (as a
function of n), you get unexpected results (essentially BSS machines[1] versus
Turing machines?) -- in your example you could actually sort the numbers in
O(n).

[1]
[https://en.wikipedia.org/wiki/Blum%E2%80%93Shub%E2%80%93Smal...](https://en.wikipedia.org/wiki/Blum%E2%80%93Shub%E2%80%93Smale_machine)

------
kleer001
"The bottom line regarding normality is that the burden of proof is surely on
those who would maintain that the formal machinery of digital physics is in
principle insufficient to model something as straightforward as submerging
nails in, and retrieving them from, a bucket of soapy water."

I think the idea of alternate physical systems doing computation is really
fun. I personally have a throbbing heart-on for cellular automata,
specifically Rule 110, inspired by Greg Egan's "Permutation City". But
that's SF, and that's just emotional attachment.

------
asgard1024
I wonder, is there a relationship between P!=NP and Hilbert's 10th problem? It
seems to me that P!=NP pretty much says that integer solutions for a certain
class of polynomials are easy to check (once you have them) but hard to
describe as a set (there isn't always a representation with a polynomial-time
algorithm), but I don't know how to put it formally.

------
airza
If anyone with expertise has been lured in by this nonsense paper, do any of
you think (like me) that P=NP is undecidable in ZFC?

~~~
ddinh
Scott Aaronson wrote a survey about that topic here:
[http://www.scottaaronson.com/papers/pnp.pdf](http://www.scottaaronson.com/papers/pnp.pdf)

------
lexicalscope
The dismissive attitude towards proofs and the apparent disregard for the
important distinction between "proofs" and "arguments" is kind of upsetting.
It's an interesting paper, but it's hard for me to get past this attitude; it
just seems like someone is being intentionally obtuse to sound clever :/

------
robotresearcher
Has anyone followed the history of this article? Is it known to be satire?

------
1arity
> The original reduction was given by the well-known Cook-Levin theorem ...

That’s easy. The NTM is just a condensed representation of a DTM, and in each
case the machine encodes an algorithm for whatever NP problem you are working
on. “Guessing correctly” has no bearing on NP, and it is not some scary,
impossible property whose existence people should take to imply that P == NP
is false. “Guessing correctly” ( otherwise known as branching ) is simply a
property of the NTM occupying more than one subsequent state. As long as you
construct an algorithm such that the NTM branches on its guesses in P time (
or, equivalently, the DTM is P size ), then you have the P time algorithm.

Pretending that a valid reason for P == NP being “widely believed to be false”
is that there would then exist “an NTM that supports guessing correctly as a
primitive operation”, as if this should be somehow intuitively magical and
impossible, is obfuscation. “Guessing correctly” has nothing to do with NP and
makes no problem harder or easier. “Guessing correctly” won’t make a shit
algorithm good ( it’s not like an oracle in a quantum algorithm like Shor’s );
it’s just another way of describing an NTM.

All you need to worry about is making that P time algorithm for some NP
problem. Whether you implement it as a DTM or an NTM is totally up to you. If
it’s a P time algorithm, it’s still a P time algorithm. I can make an EXP time
algorithm on an NTM too if I like. “Guessing correctly” is not some magical
speedup equivalent to P == NP.

To suggest or imply that, by suggesting that such a machine would somehow be
magical, is either the result of someone who doesn’t understand this, or
someone who is choosing to obfuscate it. So either you don’t understand it and
you are pretending that you do, or you do understand it and you are needlessly
fabricating arguments for P !== NP, simply revealing how tenuous you fear the
arguments are, and how deeply you lack any substantive ones, that you must
fabricate them. Your language is an attempt to obfuscate and to confuse people
who don’t understand. If you really know this space, doing that is just
balderdash. Well played.

------
1arity
> because it is condescending

yes exactly.

you have perfectly summed up how it's perceived by those in the defeatist
culture around "P cannot be NP", and this really goes to the heart of what is
inefficient and not working here.

and that is especially what I'm talking about when I speak about the
psychology of compelling pay-offs for believing this, and the strong
defensiveness that appears whenever it's suggested that the majority response
doesn't work.

NP defeatists cannot stand the perceived condescension when their choice to
abandon a problem they failed to advance on, or to find a faster algorithm
for, is unmasked from beneath the cloak of "impossible" they covered it with,
owing to their religion that P cannot be NP.

instead of owning their choice to give up, they disguised the problem as
impossible. when that disguise is revealed, they face their choice and face
the possibility that their failure was not excused by the problem being
impossible; their belief in intellectual superiority is challenged, and they
perceive that as condescension.

the core contradiction is that the origin of the insecurity they are
protecting against is their own choice to surrender, yet they try to blame it
on the revealing of that choice by pretending it's condescension. so they want
to fight the voice that says the problem can be worked on ( and pretend it's
making them insecure ), to preserve their own narrative that it can't, instead
of fighting the self-defeatist narrative which gave them cause for the
insecurity in the first place.

it's a common misattribution of responsibility to preserve a protective
delusion:

"it is not my fault i didnt solve this, the problem is impossible"

"it's not the fault of my choice to give up that I feel insecure, it's their
fault for unmasking that my giving up was a choice."

misattribution of responsibility is a common maladaptive ( non-workable )
substitute response, chosen instead of trying another approach and creating
some results.

it's so important to think about the culture of defeat around this because the
stakes of these opportunities are so high: efficient scheduling, dispatching,
and gene sequence matching.

and the culture of defeat is systemically and structurally draining resources
from work on this. it's significant enough that the psychology of it becomes
important.

science works when it embraces questioning, including self-questioning, of its
methods, including its structural, systemic and cultural methods.

these are, after all, the operational, support and infrastructure faculties
which underpin the work of science.

the culture of defeat and fake excuses around P and NP lets people off the
hook, serves as an excuse to shutter advances on whole classes of problems,
and excuses a lack of results in making more efficient algorithms to meet
those opportunities.

it becomes indoctrinated, a dogma, that P cannot be NP, and the true
believers, as we have seen, are all the more fervent to compensate for the
absence of a proof of impossibility.

and this slows progress, which doesn't work.

------
1arity
It's cool that this paper exists as a counterpoint to the pervasive culture of
defeat around P == NP.

People want to believe P != NP so they can feel better about having failed to
prove it or to find a faster algorithm.

This is ridiculous and doesn't help advance the cause of solving it. It's an
illogicality at the heart of the CS establishment: everyone is "confident"
that P != NP, with no proof, simply so they can move on to other things.

What happened to backbone, determination and persistence?

Their confidence in impossibility smells of self-justification of failure, and
fear. And the irrational "culture of belief" ( without proof ) is the very
antithesis of good science.

Your downvote makes you complicit in this culture of defeat.

~~~
MarkPNeyer
i spent years thinking about p vs np, under the same intuition you have - i
figured p must equal np.

i even had it on my license plate:

[http://s3.neyer.me/pnp.jpg](http://s3.neyer.me/pnp.jpg)

after nightmares involving np complete problems, i finally started considering
that p didn't equal np, and suddenly the world made much more sense.

[https://www.youtube.com/watch?v=dUD9fDxiWKs](https://www.youtube.com/watch?v=dUD9fDxiWKs)

~~~
1arity
I hope sometime you come back to the light. Stay strong.

~~~
MarkPNeyer
see my response here:
[https://news.ycombinator.com/item?id=10074890](https://news.ycombinator.com/item?id=10074890)

a world where P = NP is a much darker world.

~~~
1arity
Dude, with all due respect, that's what the church said when the possibility
was raised of the Earth not being the centre of the universe.

"We cannae conceive how it can be bearable to be so" !==> "It can not be so"

