
Issues in the Proof that P ≠ NP - bumbledraven
http://rjlipton.wordpress.com/2010/08/09/issues-in-the-proof-that-p%e2%89%a0np
======
sprout
_Head spinning._ At least I got that bit about LFP... mostly.

The coolest thing about this all is watching the Internet doing what it was
built for. Real-time collaboration on a massive scale. The whole of the
relevant scientific communities all in one virtual space clamoring at a
hundred virtual whiteboards. I love living in the future.

~~~
joe_the_user
I haven't looked at the proof in any kind of detail.

But comments by James S. Gates etc. raise a question in my mind. Any
statistical argument that a given phenomenon is unlikely is based on the space
of phenomena behaving, very roughly, "commutatively" - AB and BA equally
likely, or at least correlated.

How do you apply that kind of reasoning to a totally arbitrary process of
computation, a process which wouldn't necessarily respect any kind of
randomness? It seems like any statistical argument for P ≠ NP has to include a
detailed and direct discussion of how it escapes this problem.

Anyway, that's my crude effort to translate the discussion into my mere math
MA understanding...

~~~
nonce3232321
Probabilistic arguments for deterministic systems are standard practice.
Search for Erdős and probabilistic methods to see several examples in
combinatorics.

In computation theory, many proofs carry the tagline "...with probability 1",
meaning the author enumerated all the desired possibilities and showed that
their probabilities sum to one. Most people accept this as proof, although it
is less preferred, in the way that proof by contradiction is less favored than
direct proof.
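
A standard illustration of the probabilistic method, due to Erdős (not from
this thread, but the usual first example one finds when looking it up), is the
lower bound for the diagonal Ramsey numbers; a sketch, assuming only basic
counting:

```latex
% Erdős (1947): a random 2-coloring argument showing $R(k,k) > n$
% whenever the expected number of monochromatic $K_k$'s is below 1.
Color each edge of $K_n$ red or blue independently with probability $1/2$.
Each of the $\binom{n}{k}$ candidate vertex sets spans a monochromatic $K_k$
with probability $2 \cdot 2^{-\binom{k}{2}}$, so the expected number of
monochromatic copies of $K_k$ is
\[
  \binom{n}{k} \cdot 2^{\,1-\binom{k}{2}}.
\]
If this expectation is less than $1$, some coloring has \emph{no}
monochromatic $K_k$, hence $R(k,k) > n$; taking $n = \lfloor 2^{k/2} \rfloor$
works for $k \ge 3$. A fully deterministic fact is proved without ever
exhibiting a particular coloring.
```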

------
amichail
Could it be that the million-dollar Millennium Prizes are actually slowing
down the resolution of important problems because they discourage
crowdsourcing proofs?

~~~
j-g-faustus
I may be out of tune with many others here, but I think certain constructive
processes (as in creating a many-faceted proof, outlining overarching software
architecture, making the storyline of a good novel/movie) are most efficient
with small groups of people. Otherwise you run into issues of "design by
committee" or "too many cooks".

It is easier to crowdsource once the main outline is in place: it structures a
vague problem into a set of more specific subtasks that can (more or less)
easily be distributed across many people.

Like in this case: Deconstructing an existing proof can work very well with
crowdsourcing, since a proof contains a relatively small set of specific
claims, and each "crowd" contributor can focus on one particular issue.

But this works because the crowd now has a common focus point and task list
created by the paper being published. It's not clear to me how (or whether)
the process could be reversed, that the same crowd and effort could be
coordinated to cowrite an original paper with an alternative proof.

~~~
joe_the_user
I'd agree that crowdsourcing might not always be the best way to approach
things. But the general situation, where the person who gets the final result
_wins_, very likely does inhibit effective work on the solution of very
difficult problems.

I mean, I think there's a consensus that if Wiles had been publishing his
results as he went along, Fermat's Last Theorem would have been solved earlier,
but probably not by Wiles himself. Whether the current P/NP proof is right or
not, it is also clearly the product of long work in a vacuum. That's not the
best way to do things, if nothing else as a matter of sanity: if you're
publishing as you go along, you've got much more of a sanity check. Further,
if research is open, you can do single-person, small-committee, and
crowdsourced versions.

Still, I don't think this is primarily a matter of the Millennium Prizes in
particular but of the tremendous competition of academia in general - Wiles'
effort happened before the Millennium Prizes, and he couldn't get the Fields
Medal either: too old.

Oddly enough, the Netflix prize actually seems to have produced a fusion of
efforts in the end. Perhaps future prize creators could think about that. Both
the Millennium Prize and the Fields Medal seem deeply problematic in their
effect on mathematics in general.

~~~
j-g-faustus
Good points, and I agree.

The Netflix prize is interesting. According to a summary post on the Netflix
forum

    the early results were mainly by individuals...
    the team members began to coalesce and combine, and
    in the end, entire teams coalesced and recombined.

[http://www.netflixprize.com//community/viewtopic.php?pid=961...](http://www.netflixprize.com//community/viewtopic.php?pid=9616#p9616)

I can only speculate that the combination of a deadline, a problem that was
too hard for any individual and a common forum for exchanging ideas helped
encourage team formation. (I.e., when you realize that you can't do this
alone, teaming up is a rational thing to do.)

I assume that the problems we (as in society, humanity) want to tackle will
grow ever more complex in the future, and may easily outgrow the capacity of
individuals.

So yes, perhaps we could/should learn from how the Netflix prize played out...

~~~
_delirium
I think some risk aversion drove the combination as well: at the very end, two
teams with very similar performance were looking at more or less random chance
deciding who'd finish a hair ahead, with one getting $0 and the other getting
$1m. So they merged and split a much safer $500k each.

------
cdavidcash
It seems like a bit much to hope to understand and verify the proof without a
huge investment of time and effort. The problem is exponentially compounded if
you don't already do research in theoretical computer science and have the
appropriate logic background.

But! All of this has put a spotlight on a theorem that I had somehow never
appreciated. It is a beautiful theorem that is well within reach of a
mathematically-literate computer scientist: P = FO(LFP) over finite ordered
structures. If you start reading up on this, it's immediately clear that this
is interesting, because FO(LFP) says nothing a priori about computational
resources in the sense that P does.
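
As a concrete taste of why FO(LFP) is expressive enough (the textbook example,
not something specific to the paper under discussion): graph reachability is
not definable in plain first-order logic, but falls out immediately once a
least-fixed-point operator is available. A sketch:

```latex
% Transitive closure / reachability via a least fixed point:
% the relation R is built up in stages until it stops growing.
\[
  \mathrm{reach}(u,v) \;\equiv\;
  \bigl[\mathrm{LFP}_{R,x,y}\,
    \bigl( x = y \;\lor\; \exists z\,(E(x,z) \land R(z,y)) \bigr)
  \bigr](u,v)
\]
% Stage 0 gives $R = \emptyset$; stage $i+1$ adds every pair connected by a
% path of length at most $i$. On a finite structure the iteration closes
% after polynomially many stages, which is the intuition behind
% P = FO(LFP) on ordered structures.
```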

Anyway, a sufficiently curious reader can look it up as theorem 4.10 in the
following text:

[http://www.amazon.com/Descriptive-Complexity-Texts-Computer-...](http://www.amazon.com/Descriptive-Complexity-Texts-Computer-Science/dp/0387986006)

That book is pretty light on the basics of logic. I used this book:

[http://www.amazon.com/Mathematical-Logic-Undergraduate-Texts...](http://www.amazon.com/Mathematical-Logic-Undergraduate-Texts-Mathematics/dp/0387942580)

------
Sephr
Learn to use the DOM instead of HTML strings, and you'll almost never run into
those XSS problems. You can replace all of your HTML sanitization functions
with document.createTextNode(text).
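
A minimal sketch of the contrast (`escapeHtml` is a hypothetical stand-in for
a hand-rolled sanitizer; the DOM calls themselves are standard browser APIs):

```typescript
// What string-based rendering must do by hand: escape every HTML
// metacharacter before concatenating untrusted input into markup.
// Forget one call site and you have an XSS hole.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// With the DOM, a text node can never be parsed as markup, so no
// escaping is needed at all (browser-only, shown as a sketch):
//
//   const li = document.createElement("li");
//   li.appendChild(document.createTextNode(untrustedInput));
//   messageList.appendChild(li);

const untrusted = '<script>alert("xss")</script>';
console.log(escapeHtml(untrusted));
// &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```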

~~~
Figs
I think you're looking for a different thread...

~~~
Sephr
Oops, I meant to reply in the thread about that node.js chat application full
of XSS holes.

