Hacker News
P ≠ NP (scribd.com)
740 points by zacs on Aug 8, 2010 | 208 comments

Several points on the question of whether the proof is likely to be correct:

* As far as I know this paper wasn't circulated for informal peer review before being made public; I heard no talk on the grapevine. (Edit: apparently it was circulated and someone other than the author made it public.)

* Therefore a proper assessment is going to take a while. Until then we can only speculate :-)

* While the crank attempts at P =? NP are statistically much more common (http://news.ycombinator.com/item?id=347295), this isn't one of them. The author is a legit computer scientist: http://www.hpl.hp.com/personal/Vinay_Deolalikar/

* On the other hand, he hasn't published much on complexity theory and isn't known in that community, which is odd but not necessarily a red flag.

* Looking at his papers, it's possible he's been working on this for about 5+ years -- he has two threads of research, basic and industrial, and the former line of publications dried up around 2004.

* On the other hand I don't think anyone knew he was working on this. The only known serious effort was by Ketan Mulmuley at U Chicago.

* It has been known that the straightforward combinatorial approaches to P =? NP aren't going to work, and therefore something out of left field was required (http://web.cs.wpi.edu/~gsarkozy/3133/p78-fortnow.pdf). Mulmuley's plan of attack involved algebraic geometry.

* This paper uses statistical physics. This approach doesn't seem to have been talked about much in the community; I found only one blog comment http://rjlipton.wordpress.com/2009/04/27/how-to-solve-pnp/#c... which mentions the survey propagation algorithm. (Deolalikar's paper also talks about it tangentially.)

* If the statistical physics method used here is powerful enough to resolve P != NP, then there's a good chance it is powerful enough to have led to many smaller results before the author was able to nail the big one. It's a little weird we haven't heard anything about that earlier.

* Finally, since the author is using physics-based methods, there's the possibility that he is using something that's a "theorem" in physics even though it is technically only a conjecture and hasn't actually been proven. Physicists are notorious for brushing technicalities under the rug. It would be very unlikely that the author didn't realize that, but still worth mentioning.

* If that is indeed what happened here, but the rest of the proof holds up, then we would be left with a reduction from P != NP to a physics conjecture, which could be very interesting but not groundbreaking.

Conclusion: overall, it certainly looks superficially legit. But in non-peer-reviewed solutions of open problems there's always a high chance that there's a bug, which might or might not be fixable. Even Andrew Wiles's first attempt at FLT had one. So I wouldn't get too excited yet.

  * If the statistical physics method used here is powerful
  enough to resolve P != NP, then there's a good chance it
  is powerful enough to have led to many smaller results
  before the author was able to nail the big one. It's a
  little weird we haven't heard anything about that earlier.
Well, Wiles didn't publish intermediate results either, partly because someone might have beaten him to the final result using those intermediate results. It would also have given away what he was working on. Deolalikar was aiming for the grand prize as well, so skipping the publication of intermediate results doesn't seem strange to me.

The conclusion might seem to be that prizes harm science.

Which sucks, but intermediate results are important for science as a whole; if there's significant financial reason to withhold them, we're no better than the alchemists.

The problem is, the most important prize (being able to say "I found it first") is unavoidable.

Yeah, that's what I meant by 'the grand prize'. Not the $1M from the Clay institute; that's just icing on the cake.

Agree fully. Also worth considering, if this proof turns out to be correct, is that he will probably soon have employment contracts and speaker deals dwarfing that $1M.

Money is the last thing to care about if you solve P ?= NP. Humanity will remember you in the same league as Turing, Pythagoras, Gauss, and others.

You can't eat prestige. Nor find joy in a life of isolation. Look at Rembrandt's philosopher if you want the true picture of the life of the mind.

How history remembers you only matters after you're dead. Can you really tell someone to live a life of solitary confinement? The isolating parts of academia honestly suck. But everyone convinces themselves that academia is so great, in part because so many have gone before, suffering, down the same path.

Look at all the theoretical physicists and mathematicians. Do you think these people are HAPPY? Remember, if you leave math you are "dead." It's about sacrifice, not about your health or happiness as a human being.

The best part of academia is the collaboration, the social aspects. As for the environment, it could definitely be renovated to accommodate humanity. Obviously that will never happen. So there need to be big CAUTION signs posted whenever someone lauds the greatness of academia, saying it is not all fun and games; lots of people spend decades locked in offices hunched over pieces of paper.

I don't think academia is really like that. One of my former roommates was a Physics PhD student at MIT, so I know him and his colleagues. They all drank beer and played chess and cards in this one basement bar in Cambridge, did stuff like hang gliding, rock climbing, ultimate frisbee and played in various awful experimental-folk bands. They were nerds but they were all bros and even attracted the weird kind of cyber-hippy chicks who are into dudes like that. They all eventually got married. They seemed pretty happy to me. In contrast I once got stuck in a corporate programming job, which was the most uninteresting and socially isolating experience of my entire life.

Whenever I read these HN screeds against academia, I wonder what they are comparing it against. In my experience academia was way more interesting and fun than working as a professional programmer. The only good thing about the latter is that it pays a lot.

Well, maybe those physics PhDs are a lot better at socializing than me. I remember spending years of my PhD trying to make friends, going to bars, rock climbing, dances, running clubs, etc. to try to meet people, but never getting anywhere, and I just feel this crushing depression every day I have to go to work and sit down at a computer for 8 hours. I feel roughly the same in a corporate or academic environment; it's the result of spending so much time by oneself. Perhaps I shouldn't have gone into computers?

However, I think academia is for the most part just that sort of environment; I have commiserated with numerous PhD students who say the same thing. If one is into computers, both academia and computer engineering as career tracks are rather isolating. Some seem to like it and some get lonely; perhaps it's something psychological, and I'm not really meant to be such an introvert. Who knows.

I don't think academia is a bad career as long as one can get tenure at a good school, or in a good industry lab, so as to work with good people. I guess whether one can take it psychologically just depends on the individual.

Although I do agree that maybe if one had kids+wife or worked hard to be social, the 8 hours of isolation a day wouldn't be that bad. It might even be relaxing! So maybe professor is a good job for one in old age. But still I would worry about causing my students to suffer through isolation.

Actually sorry for my complaints. I thought about this more and realized being a professor is actually pretty good socially. I think being a grad student is just hard, unless you are very outgoing, and can get out of the lab.

I guess I just wanted to caution people that a career in computers and/or academia can be very isolating unless one picks the right path. Moderation in all things.

First, well, the guy in question works in HP's R&D lab and must be getting paid at least $120k per year. He also seems to have a good family and thus a social life :D

I fail to understand why you draw such a poor picture of a maths/physics PhD's life. Professors have a lot of interaction with students, so it is not exactly solitary confinement. Your picture of academia is too dark.

I agree that there are people who get stuck doing one post-doc after another and never get tenure; surely it sucks for them. But there are plenty of physics and mathematics PhDs who end up as quants on Wall Street and earn huge money.

If you read posts or articles written by professors advising aspiring PhD students, you will rarely find them portraying PhD life as fantasy-filled. So in my opinion the caution signs are already there.

But, there was no prize for solving FLT other than being able to list on one's CV "Proved FLT."

Not that it matters, but there was the Wolfskehl Prize:


You wouldn't actually need to put that on your CV.

You actually won't need a CV.

That is a more precise statement :)

Very good summary.

Though I was just reading the synopsis, and the statistical physics part seems to be proven: "The 1RSB ansatz of statistical mechanics says that the space of solutions of random k-SAT shatters into exponentially many clusters of solutions when the clause density is sufficiently high. This phase is called 1dRSB (1- Step Dynamic Replica Symmetry Breaking) and was conjectured by physicists as part of the 1RSB ansatz. It has since been rigorously proved for high values of k. It demonstrates the properties of high correlation between large sets of variables that we will need."

I did not dig into the references yet, though.
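For readers unfamiliar with the setting, here's a minimal sketch (my own toy code, not from the paper) of what "random k-SAT at clause density alpha = m/n" means, with a brute-force satisfiability check:

```python
import itertools
import random

def random_ksat(n_vars, alpha, k=3, seed=0):
    """Random k-SAT instance with m = alpha * n_vars clauses.
    A clause is a tuple of signed variable indices (-2 means NOT x2)."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(int(alpha * n_vars)):
        chosen = rng.sample(range(1, n_vars + 1), k)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in chosen))
    return clauses

def satisfiable(n_vars, clauses):
    """Brute force over all 2^n assignments -- only feasible for tiny n."""
    for bits in itertools.product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# Well below the 3-SAT satisfiability threshold (around alpha = 4.27),
# random instances are almost surely satisfiable; far above it, almost
# surely not. The clustering ("shattering") happens in between.
print(satisfiable(12, random_ksat(12, alpha=1.0)))
print(satisfiable(12, random_ksat(12, alpha=10.0)))
```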

Another thing. It's known that a proof resolving P vs NP can't relativize, can't be "natural", and can't "algebrize". (See http://portal.acm.org/citation.cfm?id=1490272&dl=GUIDE&#... and its references for what this means.) The paper doesn't directly address how it avoids these barriers. It does mention the first two barriers in passing. Maybe the answer is obvious to anyone who could read this paper.

On a cursory reading, I think I can address these criticisms, although one would have liked the author to call each out in his introduction.

Naturalization: This proof is not combinatorial, it is based on a model of formal logic. This is addressed by the author directly.

Relativization: This is a proof by contradiction and relies on understanding the solution space of k-SAT instances, not on diagonalization.

Algebrization: This proof looks beyond the size of a small circuit into how the circuit inputs interact and how this interaction spreads throughout the circuit.

Thanks. I think you nailed just about every one of my concerns about this paper. I've only read the introduction, but if the rest of the paper can deliver what the first 16 pages promise, it looks like a winner to me. I suspect that if there's some problem hidden somewhere, it's a case of showing that some property or another of k-SAT holds almost always in the limit, but not being able to show that any specific instance of k-SAT has that property. And, if that's the case, it's probably a symptom of relying on one of those physics "theorems" you've mentioned. Color me cautiously optimistic, here.

Regarding your next to last point, it seems like there has been some prior work involving statistical physics techniques in computational complexity theory; see e.g., http://cdsagenda5.ictp.it/full_display.php?ida=a01155.


I'm not sure if it is or isn't weird that he hasn't published on complexity theory. I recall Gasarch publishing a poll in SIGACT where a handful of the researchers polled claimed that it'll be resolved by someone outside of the field.

Regardless, from reading the synopsis of the proof, I'm skeptical to the point of disinterest.

(I apologize for not providing a link -- but the only URL I was able to find from Google: http://www.cs.umd.edu/~gasarch/papers/poll.pdf is broken.)

Why are you skeptical of the proof "to the point of disinterest"? I would assume that a researcher at HP with dual Ph.D.s from USC wouldn't make such a claim haphazardly.

He has published in the area of complexity theory: http://portal.acm.org/citation.cfm?id=1185240

The paper's origin was that the author mailed a copy for review to some very high-level folks in the field (Cook, Vazirani, Sipser, etc.). Here's the mail:

Date: Fri, 6 Aug 2010 21:28:39 +0000
Subject: Proof announcement: P is not equal to NP

Dear Fellow Researchers,

I am pleased to announce a proof that P is not equal to NP, which is attached in 10pt and 12pt fonts.

The proof required the piecing together of principles from multiple areas within mathematics. The major effort in constructing this proof was uncovering a chain of conceptual links between various fields and viewing them through a common lens. Second to this were the technical hurdles faced at each stage in the proof.

This work builds upon fundamental contributions many esteemed researchers have made to their fields. In the presentation of this paper, it was my intention to provide the reader with an understanding of the global framework for this proof. Technical and computational details within chapters were minimized as much as possible.

This work was pursued independently of my duties as a HP Labs researcher, and without the knowledge of others. I made several unsuccessful attempts these past two years trying other combinations of ideas before I began this work.

Comments and suggestions for improvements to the paper are highly welcomed.


Vinay Deolalikar
Principal Research Scientist
HP Labs


Somewhere, deep in a dark corner of my heart, I hope and pray that this paper is correct, just so we can keep and revere the immortal words "I am pleased to announce a proof that P is not equal to NP, which is attached in 10pt and 12pt fonts."

The font was specified to indicate that this proof, indeed, fits into the margin.

But is it truly marvelous?

It's not big and professional like GNU's fonts.

Sounds a little like the line in Watson and Crick's first paper about DNA. They said "It has not escaped our notice..." to introduce the idea that the subject of the paper might be the secret to life, the universe, and everything.

I'm not academic - can you explain why he mentioned the font sizes? Why are they relevant?

I find the sentence comical in its humility and practicality; I assume leif did too. Acangiano puts it in the same league as Fermat's famous note in the margin of his copy of Arithmetica.

I assume he included the paper in two font sizes to suit the reader's preference; no deeper meaning.

Yeah, basically I found it funny juxtaposing the announcement of such a magnificent result (should it end up proving true) with such a mundane, utilitarian comment.

True, some of those intellectuals must have poor eyesight due to a lifetime of reading.

It does feel a little intentional on the part of the author, giving off an "all in a day's work" type of attitude.

Happy to see it in my collegiate font of choice, Book Antiqua.

Book Antiqua is a knockoff of Palatino, which in turn is the titling variant of Aldus: http://en.wikipedia.org/wiki/Aldus_(typeface) . It should read a little more smoothly.

If he can also prove it in 7pt font THEN I'll be impressed. Until then, meh.

10 and 12pt fonts - in Comic Sans

If this proof is up for review, that would mean there could be errors in it, right?

As another commenter mentioned, it will likely take months or even years to verify.

At 66 pages, coming from a well-regarded researcher, and with its professional style of writing, I'd be shocked if this wasn't reviewed and slotted for publication before the end of the year.

In fact, one may argue that by FOCS we'll have had so many graduate students, reading groups, and reviewers pore over this paper that we'll have either reached a consensus or found an error.

If a proof has been reviewed there might still be errors in it...

Indeed. But, fortunately, this is seldom a problem in math.

I mean, often people discover problems in proofs --- but if the result was beautiful enough, they are usually able to repair the proofs. It's like debugging. (And I mean it, thanks to the Curry-Howard isomorphism.)

I'm curious - how do you know this mail was sent to the aforementioned people in complexity theory?

Greg Baker blogged about it (and since it's a hot topic, it probably appeared in many other places) -- http://gregbaker.ca/blog/2010/08/07/p-n-np/

Also, from the blog post: "I see someone else has uploaded the paper. I should point out that in the email thread I got, Stephen Cook said 'This appears to be a relatively serious claim to have solved P vs NP.'" I would love to see that email thread.

102 pages via one of the most annoying PDF readers on the planet? No thanks.

There are good, free, PDF readers for every major operating system and for every major mobile device. I wish people would just link directly to PDFs.

The author's page (http://www.hpl.hp.com/personal/Vinay_Deolalikar/) has now been updated to include a direct link to the PDF version (http://www.hpl.hp.com/personal/Vinay_Deolalikar/Papers/pnp_p...). He also notes: "The preliminary version made it to the web without my knowledge. Please note that the final version of the paper is under preparation, and is to be posted here shortly (in about a week). Stay tuned."

I converted the document to HTML5 mode. Hopefully that makes it easier to read.

I wish people would just link directly to PDFs.

Agreed, and the benefit of doing that is you get a Scribd link come up on HN too :-)

Have you tried Scribd since they've transitioned to HTML 5? I now find it quite usable.

I got their flash viewer for this. I do sometimes get the html5 viewer, and I assumed the selection was based on a preference set by the uploader. Isn't it?

It seems they're A/B testing it. I got the html5 viewer all but one time viewing this.

This is not the point.

Sure it is. Scribd has succeeded in going from "the worst pdf viewer on the web" to something that's perfectly usable using HTML5.

Perfectly usable compared to what?

A big toolbar at the top, a big toolbar at the bottom, a big sidebar, none of which have any information I want to see; ads all over, and I have to sign up for their website to download the PDF.

That's not even as good as Acrobat (last I used it), never mind a lightweight PDF reader, never mind Chrome's integrated PDF rendering.

Two things.

- The HTML5 transformation is not instant. Everyone gets the Flash reader until the HTML5 version is ready, which can take anywhere from minutes to hours (I don't know the average time).

- Both the HTML5 version and the Flash one are worse than a plain old PDF viewer. Have you used the integrated PDF viewer in Chrome lately? PDFs under Chrome make reading fun again; it's the fastest and smoothest PDF viewer I have ever used.

Also, in order to download the PDF version I have to either register or sign in with my Facebook account? What nonsense is that?

I have in the past for other sites. But now, when sites decide to BugMe, I choose not to visit them.*

*NYT being the exception.

The best PDF viewer on the web is still not as nice as Preview.app. Just because hairs can be split doesn't mean we want that distraction.

+1 to that. Is there any point to Scribd other than putting ads on PDFs?

Scott Aaronson expresses his confidence that the proof is wrong by offering to supplement the million-dollar Clay prize with $200,000 of his own money if it's correct:


I find it strange that he doesn't explain his confidence with at least a general description of the way in which he thinks the proof will fail.

I suppose he could have safely claimed this with Wiles's original proof as well, since it did have at least one non-trivial flaw that required amending. It's quite possible that the proof contains flaws, but will still hold up in the end, because the flaws can be corrected. But I feel that's a bit of a lame gamble.

Well, but Scott Aaronson wrote: "If Vinay Deolalikar is awarded the $1,000,000 Clay Millennium Prize for his proof of P≠NP, then I, Scott Aaronson, will personally supplement his prize by the amount of $200,000." So I suppose that any non-trivial flaw which will be eventually fixed by author of the paper himself is not a problem here.

Sometimes there are also flaws which cannot be corrected.

But I cannot make a qualified guess if this might be the case here.

Of course, but that's an even larger gamble, if you don't have at least a very specific hunch about the way in which a proof will fail. All in all, this announcement by Aaronson seems rather rash. I don't understand why he would do such a thing.

I am OK with it. He put his money where his mouth is, instead of just putting his mouth somewhere.

Scott Aaronson does not explicitly say the proof is wrong, but says

> If P≠NP has indeed been proved, my life will change so dramatically that having to pay $200,000 will be the least of it. […] If P≠NP is proved, then to whatever extent theoretical computer science continues to exist at all, it will have a very different character.


> I can afford $200k, but not in the same way Bill Gates can afford $200k.

Discussed here:


As randomwalker says, it's not entirely obvious that Aaronson is betting against the proof.

It is.

It wasn't, when the comment you just downmodded was written. There's a reason Aaronson's posted multiple clarifications, after all...

I skimmed through the synopsis but this philosophical statement baffles me (although obviously it is unrelated to the validity or invalidity of the proof):

"The implications of [P ?= NP] on the general philosophical question of [..] whether human creativity can be automated, would be profound."

How so? If P != NP, that is obeyed by the brain as much as by computers, and whatever trick our brains use to be creative despite it will be available to computers as well, regardless of P ?= NP. What am I missing?

You must read it in context.

What it says is that if P == NP then the implications are profound for applications such as cryptography and on the general question of whether human creativity can be automated.

You can see why P == NP would have profound implications for crypto: many of our widely used techniques would turn out to, um, be easy to crack.

I believe the intent regarding human creativity is that humans often end up with a practical need to deal with some real-world instance of a size-able NP-complete problem. We don't know any way to efficiently find the optimal solution so we creatively deal with what we can figure out (the traveling salesman's optimization problem, placed into the real world, might count). Are our creative guessworks and heuristic discoveries essential? Or can automation based on P == NP eliminate the need for them - and do better than we do on these problems.

Let's assume that P == NP; why assume that we would be able to find algorithms to easily crack crypto?

Let me put it another way: let's say we finally discover that it is possible to travel back in time. However, the energy needed to travel back in time is the equivalent of a million Suns, because that is the amount of energy needed to warp space enough for time to reverse itself. And we found proof of this when we observed the black holes of two colliding galaxies. So, even if it were possible, it would still not matter. Practically speaking, it would still not be possible to travel back in time.

My gut feeling is that if it were to turn out that P == NP, that would not necessarily mean we would suddenly be able to find the exact algorithms that make it easy to crack cryptographic algorithms.

We would know from the proof that such algorithms would exist, and possibly get a method from the proof as to how formulate them.

Also, gut feeling doesn't work in math. I mean, not at all.

NP-complete problems are polynomial-time reducible to one another, so a P solution to one is a P solution to all. So, assuming the proof were somewhat constructive, it would give you a P solution.

But RSA (and probably other cryptosystems) isn't even thought to be NP-complete to break... so it's probably unaffected. So, I guess it depends on the implementation details of the cryptosystem. Probably it wouldn't render them wide open, but it might make previously good keys far weaker, and mean that much longer must be spent on encryption for it to be safe (rather than now, where encrypting with a safe key is almost instant even on an embedded platform).

Polynomial doesn't mean "easy".

O(n^10000) is polynomial but really not easy.
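To put a rough number on that: an exponential always overtakes a polynomial eventually, but for high degrees the crossover point is enormous. A quick sketch (iterating to the crossover for degree 10000 would take too long, so degree 100 stands in):

```python
def crossover(degree):
    """Smallest n at which 2**n finally exceeds n**degree."""
    n = 2
    while 2 ** n <= n ** degree:
        n += 1
    return n

print(crossover(2))    # 5: the exponential wins almost immediately
print(crossover(100))  # near 1000: n^100 beats 2^n for all smaller inputs
```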

I didn't follow everything in my complexity class, but I believe that P=NP implies that search is as easy as decision. So producing a piece of good music is as easy (up to a polynomial) as deciding if a piece of music is good.

If I remember correctly, optimization also collapses. So finding the best possible piece of music is as easy as deciding whether a piece of music is good.
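The search-from-decision collapse mentioned above has a concrete classical form for SAT (self-reducibility): given only a yes/no satisfiability oracle, you can recover an actual satisfying assignment with n extra oracle calls. A sketch, with a brute-force stand-in for the oracle:

```python
import itertools

def sat_decide(clauses, n_vars, fixed):
    """Decision oracle stand-in: is any completion of `fixed` satisfying?
    (Brute force here; under P = NP this call would be polynomial.)"""
    free = [v for v in range(1, n_vars + 1) if v not in fixed]
    for bits in itertools.product([False, True], repeat=len(free)):
        assignment = {**fixed, **dict(zip(free, bits))}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in c)
               for c in clauses):
            return True
    return False

def sat_search(clauses, n_vars):
    """Search via decision: fix one variable at a time, keeping the
    value the oracle says is still extendable to a full solution."""
    if not sat_decide(clauses, n_vars, {}):
        return None
    fixed = {}
    for v in range(1, n_vars + 1):
        fixed[v] = True
        if not sat_decide(clauses, n_vars, fixed):
            fixed[v] = False   # the other value must work
    return fixed

# (x1 or not x2) and (not x1 or x3) and (x2 or x3)
print(sat_search([(1, -2), (-1, 3), (2, 3)], 3))
# -> {1: True, 2: True, 3: True}
```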

Up to a polynomial.

What if the polynomial is of order 1000 or higher?

I understand asymptotics and the convention that polynomial algorithms are "efficient". However, it might turn out that P = NP but the polynomial is so big that the resulting algorithm is impractical for normal-sized instances.

That's certainly a logical possibility. In practice, low-degree polynomial solutions have been developed for most important problems in P, even if the first known polynomial solution had a high degree.

What would be even more entertaining is a non-constructive proof of P = NP.

Wikipedia on P ?= NP:

In essence, the question "P = NP?" asks: if 'yes' answers to a 'yes'-or-'no' question can be verified "quickly", can the answers themselves also be computed "quickly"?

Scott Aaronson:

If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss...
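The verify/compute asymmetry in the Wikipedia phrasing is easy to make concrete with subset sum (my own toy example, not from either quote): checking a proposed certificate is one linear pass, while the obvious way to find one tries exponentially many subsets:

```python
from itertools import combinations

def verify(nums, target, certificate):
    """Checking a 'yes' certificate is quick: sum it, confirm membership."""
    return sum(certificate) == target and all(x in nums for x in certificate)

def solve(nums, target):
    """Finding a certificate naively means trying all 2^n subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = solve(nums, 9)
print(cert)                   # (4, 5)
print(verify(nums, 9, cert))  # True
```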

The problem with these grandiose statements is that they forget the word formal. You have to be searching a space that's formally defined for optimization = verification to follow from P = NP. When we do art and creativity, the space is not formally defined a priori. Hell, even theorem-proving isn't. Look at the breakthroughs in mathematics: a really large majority came from applying new ideas or operations (e.g., Ricci flows) to problems where everyone else simply could not see they were applicable. There are no "profound philosophical implications" if P = NP (which I doubt) other than that our universe lets us have polynomially bounded algorithms for a huge cluster of problems.

How so? If P != NP, that is obeyed by the brain as much as by computers, and whatever trick our brains use to be creative despite it will be available to computers as well, regardless of P ?= NP. What am I missing?

Let's talk math: In some very formal, reductionist sense, what we're doing when we do math is to start with a set of axioms, operate on them using a fairly small set of logical rules, and arrive at theorems that seem interesting.

If we do this very carefully -- much more carefully than a human would ever normally do, showing 100% of our work -- it's trivial to check such a proof automatically. You just look at each step, and verify that that step follows from the axioms + logical rules you're using + previous steps. This checking is clearly (handwave) polynomial: at worst, at each step N, you have to examine every axiom, every logical rule, and each prior step to make sure that step N+1 is legal.

So proof checking is in P.
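The step-by-step checking described above can be sketched with a toy Hilbert-style system (my own minimal encoding, far simpler than a real logic): a formula is a string or an implication tuple ('->', A, B), and each proof line must be an axiom or follow from earlier lines by modus ponens. The nested scan over prior lines is what makes checking polynomial in the proof length:

```python
def check_proof(axioms, proof):
    """Verify each step is an axiom or follows by modus ponens
    (from A and ('->', A, B), conclude B) from earlier steps."""
    derived = []
    for step in proof:
        if step not in axioms and not any(
                ('->', a, step) in derived for a in derived):
            return False
        derived.append(step)
    return True

axioms = ['p', ('->', 'p', 'q'), ('->', 'q', 'r')]
print(check_proof(axioms, ['p', ('->', 'p', 'q'), 'q',
                           ('->', 'q', 'r'), 'r']))  # True
print(check_proof(axioms, ['r']))                    # False
```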

Let's say I have a set of axioms and a logic, and I want to know whether a theorem T is true in that system. One strategy I could use is to consider only proofs of length less than N, and generate every legal theorem I can reach in fewer than N logical steps. When I'm done, I check my list of proven theorems and see if any of them match T. Since I can always check my answer in polynomial time, I know this is in NP.

The try-everything-possible algorithm has the downside of being a bit exponential (since we expect N to be very (very!) large for any interesting theorem) and so also a bit useless.
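In the same toy propositional encoding (again my own, and vastly simpler than any interesting logic), the try-everything prover just closes the axiom set under modus ponens round by round; in richer systems the set of reachable formulas blows up exponentially, which is the uselessness being described:

```python
def provable_within(axioms, target, max_rounds):
    """Close the axiom set under modus ponens for max_rounds rounds
    and report whether the target formula ever appears."""
    known = set(axioms)
    for _ in range(max_rounds):
        new = {f[2] for f in known
               if isinstance(f, tuple) and f[0] == '->' and f[1] in known}
        if new <= known:     # nothing new derivable: stop early
            break
        known |= new
    return target in known

axioms = ['p', ('->', 'p', 'q'), ('->', 'q', 'r')]
print(provable_within(axioms, 'r', 5))  # True
print(provable_within(axioms, 's', 5))  # False
```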

But what if a P=NP proof can provide guidance on how we should combine our axioms in order to reach theorem T? We suddenly have not a proof-checker, but a proof-generator! Anything that we can state in a formal language, we can simply ask the prover to prove -- it will either give a proof, or say that a proof of size < N doesn't exist. (That wouldn't be a disproof, and it's a Goedel/halting problem kind of argument that we may never really know how large N needs to be.)

Finding new proofs is a large part of what's considered creative in mathematics. It's hard to overstate how math might change with a tool like that available. The field has seen any number of revolutions before -- e.g., the effort of the hundreds who worked on the algebraic roots of polynomials pretty much only survives in homework exercises nowadays -- but automating proofs would be Big.

(Finding interesting conjectures is valuable, too, but I doubt Erdos would be nearly as famous if he'd only posed his questions without his resume of theorems behind them.)

If P != NP, that doesn't mean we will never be able to automate human creativity; it just makes it a bigger challenge.

The general feeling is that creativity is hard, and even exponentially hard. When people try to do things creatively they feel like they aren't doing that much better than trying all possibilities and rejecting the ones that don't work.

If someone proved P=NP then unless we're mistaken about how hard we have to work to think creatively, it's likely that computers will end up being better than us at it, using this algorithm that I suspect would be difficult to backport to our brains. :)

Then again, it's possible that skilled practitioners of any creative field have stumbled across this polynomial time algorithm and don't know it, but that seems unlikely.

In regards to mathematics, creativity is displayed foremost in finding connections between different notions and finding logical arguments to support a claim. Mathematical discovery is not in the claim, but in the construction of the proof of the claim.

Mathematical statements are generally easy to state. If P = NP were true, then a valid proof could be discovered automatically and efficiently. This would be a kind of creativity automated by technology.

AFAIK the thinking is that if P = NP then we might be able to automate the creation of theorems, because we'd be able to take a problem whose solution is known to be quickly verifiable and, since we'd know it's therefore quickly solvable, automatically develop the theorem that solves it. So his proof suggests that human creativity can't be automated this way.

In order to automate human creativity one must understand its nature at least.

I'm not sure why creativity has this special aura. My internal process seems fairly mechanistic, to me.

Perhaps it's a legacy from the idea of a life-force or vis viva.

While I agree that P != NP doesn't say anything about human creativity, the reason it can't be automated is, I think, different: creativity is closely tied to libido and biological evolution, which in turn can't be simulated in machines in full.

I'd argue that if (assuming it's true that) creativity stems from libido and biological evolution, it's been because there are evolutionary gains for creative individuals. The first person to throw a rock at a tree to knock down fruit got more food, for instance. The most complicated clothing (/nest/dance) gets the most mates.

All of which means it's purely a motivating force, not a requirement. We've been motivated to be creative, so we are. Programs can be similarly guided towards ends we desire; perhaps the most direct analogy is genetic programming, where you determine which "breed" by their "fitness" value.
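The genetic-programming analogy above can be made concrete with a toy sketch. This is a minimal genetic algorithm of my own devising, not any particular library: individuals "breed" in proportion to a fitness value we choose, here simply the number of 1 bits in a bit string, standing in for whatever end we desire.

```python
import random

# Toy genetic algorithm: evolve bit strings toward all-ones.
# "Fitness" is simply the number of 1 bits; the fittest half of the
# population survives each generation and "breeds" the other half.
# All names here are illustrative.

def fitness(genome):
    return sum(genome)

def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    # Flip each bit with small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

def evolve(length=20, pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:
            break
        parents = pop[:pop_size // 2]  # selection by fitness
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

random.seed(0)  # deterministic run for reproducibility
best = evolve()
```

Nothing here is creative in itself; the "creativity" is entirely in the fitness function we impose, which is the parent comment's point about motivation versus mechanism.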

What I actually meant was what we create rather than why.

Exactly my point: what we create is incredibly tightly tied to why we create, especially if you're looking at it from an evolutionary standpoint. I wouldn't call it an isomorphic relationship, but darn close.

You also have to consider that our creativity has been created by the equivalent of billions of individuals, each with more computational power than the most powerful computer in the world right now, running over thousands of years of recorded history. Absolutely nothing we have done with computers approaches this, especially when you consider just how unexplored computer A.I. really is. If we call it ten orders of magnitude more "power", we're making a phenomenally conservative estimate. What could you do with a computer over a billion times more powerful than everything that Google owns?

I'd say our less-than-10-billionth computational efforts have in fact created a couple creative things. Which puts them about on par.

"which in turn can't be simulated in machines in full."

If you mean "can never be simulated in machines in full" I would disagree.

Here's a list of many other proofs for the P vs NP problem: http://www.win.tue.nl/~gwoegi/P-versus-NP.htm

I hope this one turns out to be wrong as well.

Very interesting approach! How cool if this is the real deal. The last 30 years have seen a lot of theoretical work on computation as a physical process. If the greatest conjecture in CS is proved using tools from physics, it really brings together math, physics, and CS.

Edit: As someone else pointed out a few minutes ago http://www.hpl.hp.com/personal/Vinay_Deolalikar/ confirmations began arriving today. How soon before Vinay has a Wikipedia entry? For those who are qualified to evaluate this, I suspect a consensus as to validity will develop within weeks if not days. If thumbs up, he must be worthy of one of the outstanding large-cash-value math prizes. Perhaps even the Nobel?

If the proof works, he would easily get the Millennium Prize and the Fields Medal.

Edit: Also the Goedel prize

Turing Award, too.

Wrong area for a Nobel prize, the prize areas are physics, chemistry, physiology/medicine, literature, and peace. There is also a prize in economics given at the same time.

Perhaps he could steal the Economics "Nobel" prize, like lots of other mathematicians have done.

There was a paper a while ago about how recognizing bad securities is an NP-hard problem. So this is applicable. (Tongue-in-cheek.)

They'll give a peace prize for anything. Why not a proof of P!=NP?

There is no Nobel prize for Mathematics.

Or Economics. No, really, the "Nobel Prize" in economics isn't: http://en.wikipedia.org/wiki/Nobel_Memorial_Prize_in_Economi...

His personal home page http://www.hpl.hp.com/personal/Vinay_Deolalikar/ seems to have been updated.

"Manuscript sent on 6th August to several leading researchers in various areas. Confirmations began arriving 8th August early morning. Final version of the paper to be posted here shortly. Stay tuned. "

It's been updated again, with a revised version of the draft ("minor updates") dated August 9, 2010.

I've not seen any discussion of his paper yet, but the author has an impressive CV:


Bummer! It's like proving conclusively that the tooth fairy, Santa and the Easter bunny definitely don't exist.

While the author of this paper does not appear to be a crank, nowhere in the entire paper does it discuss why the fundamental barriers of naturalization, algebrization, and relativization don't apply to the work, making it seem unlikely that those barriers have actually been overcome.

From what little I understand, those barriers prevent only certain proof strategies from working. So for instance, the Razborov-Rudich barrier concerns a class of combinatorial proofs (the so-called natural proofs); this paper uses two techniques - statistical mechanics and model theory - which I gather are out of the province of RR.

"only certain proof strategies" is technically correct, but it's closer to "essentially every proof strategy we can conceive of".

And besides, the question is over the entire proof strategy and not the specific techniques involved. It seems plausible that one could give a relativizing proof using some method of calculation from statistical mechanics, for example.

I'd tend to agree...

Taking methods from a different domain in no way shows that the mechanics of those methods don't reduce to the same mechanics as a "natural proof". Further, it makes the actual process much more obscure.

Indeed, statistics in general seems like a hard approach for overcoming the Razborov-Rudich limit, since the limit is on showing that a "typical" function has certain properties, and anything statistical seems like it would be "typical".

But who knows really; maybe Razborov-Rudich isn't correct, since in full generality it relies on things that "mathematicians generally believe" (the existence of certain pseudo-random functions) rather than things that are definitely proven - as far as I know.

Again, I'm no expert, but relativization and algebrization are properties of proofs that invoke oracles, which this paper doesn't appear to do.

Ah, that is not how those "barriers" work. Roughly, the relativization barrier goes like this: Say you have a proof that P!=NP. Does it also prove that P^A != NP^A for any oracle A? If it does, then the proof is flawed, because there <i>does</i> exist an oracle A such that P^A = NP^A!

Such proofs are said to relativize -- i.e., they are still valid relative to any oracle.

Ah, thanks for explaining!

I would note: "statistical mechanics" is not a technique but a subfield of mathematics and physics within which a number of techniques are used.

"Model theory" might be closer to "a" technique or at least a somewhat distinct set of techniques.

And as others have noted, there's reason to think a proof from statistical mechanics could relativize.

I'd like to say congratulations to Vinay Deolalikar for getting to this point and putting the paper out there. As randomwalker and others have said, this appears to at least be a legitimate attempt. It must take some seriously thick skin to work on a paper that a) is in an area so full of failed attempts and error filled proofs; and b) that EVERYONE is going to read and try to tear apart.

We created a prediction market for the next winner of the Clay Prize here http://smarkets.com/current-affairs/clay-prize/next-winner


Assuming this isn't a hoax, and the proof holds up, this is front-page news kind of big deal.

As I recall, this has way more real-world practical usage than the proof of Fermat's Last Theorem.

Read the "Consequences of Proof" section in the Wikipedia article here: http://en.wikipedia.org/wiki/P_versus_NP_problem

[edit] The responses below are correct. Proving P=NP means the world gets turned upside down. P != NP is already sort of assumed.

It seems rather unlikely it would be a hoax. The author has published in this field before, in peer reviewed journals: http://logcom.oxfordjournals.org/cgi/content/abstract/15/5/5... or http://portal.acm.org/citation.cfm?id=1185240

Looking at his page at HP Research, he has several other publications in legitimate areas, and seems to be a well qualified researcher.

He may be wrong in this paper, but there's no reason to suspect it's a hoax or that he's some kind of kook (as many other comments have sort of implied).

a) Complicated academic proofs take (and should take) months if not years to verify, and will only be "front-page news" after verification. This is hardly the first unverified proof of the [edit: possible (in)]equality of P and NP.

b) The practical consequences of P=NP are immense. The practical consequences of P≠NP are that we can stop looking for computational unicorns and fairies.

We have for some time thought that P≠NP, but boy do people like those unicorns and fairies.

I doubt it'll impact the unicorns-and-fairies searches significantly. Most people already work under the assumption that P≠NP.

Similarly, most people believe the world is round and that we go around the Sun. This hasn't prevented serious flat-Earthers or geocentrists from existing.

It has been something of a damper on their funding, however.

> This is hardly the first unverified proof of the equality of P and NP.

in 'a' above didn't you mean to write 'inequality of P and NP' ?

I meant the general field of equality as it relates to P and NP, but I have edited for clarity.

Well, assuming this proof holds up (and there are already links in the comments to other proofs that P != NP) they have proven what most people already assume, that the two are not equivalent. The consequences of P != NP are mostly just that people can stop spending time looking for ways for P to equal NP. Which is valuable, but doesn't have much "real-world practical usage"


A proof of P = NP would have huge practical consequences. A proof that P != NP "only" represents a huge advance in the theory of computation.

A proof that P != NP also represents the death of an era in which CS could look upon a single problem as motivation in hundreds of different directions.

True, but if the history of mathematics is a good guide CS will find other Big Problems in the future.

I hear this statement frequently when P = NP is discussed, but as someone not involved in computer science I have a hard time understanding it.

Could you explain what some of those consequences would be? Does P = NP really need to be proved to achieve those practical consequences? Can't it just be assumed to be true and then see what the result is?


NP questions are ones whose answers can be checked quickly, but which, as far as we know, take a long time to answer.

P questions take much less time to answer.

The statement "P = NP" means that any NP question can be transformed into a P question. Answering the P question gives you an answer to the NP question.

So, if P = NP, then we can answer questions that we thought were slow, much more quickly than we would have thought possible.

Proving P = NP would (presumably) give you that method for transforming NP problems into P problems. So you would suddenly be able to figure things out easily that currently are hard and take a lot of computing power.
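To make that asymmetry concrete, here is a toy sketch (the clause encoding is my own: a literal is a signed integer, so 3 means "variable 3 is true" and -3 means it is false). Checking a proposed answer takes polynomial time, while the only obvious way to find one tries all 2^n assignments:

```python
from itertools import product

def verify(clauses, assignment):
    """Polynomial time: check that every clause has a satisfied literal."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

def solve(clauses, num_vars):
    """Exponential time: brute force over all 2^num_vars assignments."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if verify(clauses, assignment):
            return assignment
    return None  # unsatisfiable

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
cnf = [[1, 2], [-1, 3], [-2, -3]]
model = solve(cnf, 3)
```

P = NP would mean that, for problems like this, something as fast as `verify` could replace the exponential loop in `solve`.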

It's often assumed that P = NP would make cryptographic problems much easier to break, but apparently that's not really the case: http://world.std.com/~reinhold/p=np.txt

Yeah, the common complaint against "P=NP would break cryptography" is "but what if the polynomial algorithm for (say) SAT is n^10000?". That is not very likely, in my opinion. Even though "polynomial" is not equivalent to "efficient", in practice all problems that have a polynomial algorithm also have an efficient algorithm.

There are many problems that have a polynomial algorithm that is not especially fast for practical instances (matrix multiplication comes to mind, for which there isn't even a proof that the fastest known algorithm is optimal).

That's not a very good example - square matrix multiplication is O(n^1.5) (where n is the input size) and extremely fast in practice.

(Though, as you say, there could still be an O(n) algorithm - this is not known).

What I said is true even for linear programming (a P-complete problem) - it has polynomial and efficient algorithms - though the latter are actually not polynomial in the worst case. :)

Can you give a reference for a matrix multiplication algorithm which is O(n^1.5) ? thanks.

Note that I defined n as the input size...
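To spell out the arithmetic behind that definition, with N for the matrix dimension and n for the input size:

```latex
% An N x N matrix has N^2 entries, so the input size is n = Theta(N^2),
% i.e. N = Theta(sqrt(n)). Naive multiplication then costs
O(N^3) \;=\; O\!\left((\sqrt{n})^{3}\right) \;=\; O(n^{1.5}),
% so "O(n^1.5) in the input size" and "O(N^3) in the dimension"
% describe the same naive algorithm.
```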

If I remember correctly, any proof of P = NP would be have to be constructive, so it would immediately give us a fast algorithm for ALL NP-complete problems, as they can all be reduced to each other.

It isn't actually _necessary_ for a P=NP proof to be constructive. A constructive proof would probably be easier to reason through, but it is possible to prove the existence of an algorithm without providing one.

Not so. Pure existence proofs can be non-constructive.

The first historical example of an important proof which was non-constructive was Hilbert's Basis Theorem in 1888, which solved a famous problem posed by Paul Gordan. It is now viewed as in some sense being the first salvo in the debate over constructivism in the 20th century. See http://en.wikipedia.org/wiki/David_Hilbert#The_finiteness_th... for more.

If P = NP were proved to be true, it would tell us that there are solutions to a certain class of problems. Assuming it's true doesn't give us those solutions.

Wikipedia should answer most of your questions regarding the consequences:


To be a little more precise, proving P=NP would mean that there are algorithms to solve those problems in polynomial time (running time bounded by a polynomial in the size of the input).

NP problems can be solved, it's just slow (think brute force method, running through all the possibilities).

Not true. A polynomial-time algorithm for NP-complete problems is already known (if P=NP, that is - Levin's universal search); you just need to prove P=NP to prove that it runs in polynomial time.


Yeah, in general it's believed that any proof that P != NP would likely provide a lot of insight into which features of a problem make it difficult, and thus potentially let us divide problems more cleanly into those which are "easily" solved and those which are "difficult" to solve.

The huge consequences are if P = NP (which no-one really believes).

Emphasizing the "assuming this isn't a hoax" part, even if it is correct, I don't think it is a life-changing deal.

Computer Scientists have been assuming for years that P does not equal NP, so they've been doing research with that assumption already in place. Proving that the assumption is correct won't hugely change anything.

I'm not trying to knock down the greatness of this proof, but the repercussions aren't going to be that major.

I do not necessarily agree. There is a large number of open problems in computer science (especially in theory of complexity), and a correct proof of P != NP is very likely to use techniques that will help in resolving the other open problems. The techniques are also likely to change what we're learning in CS courses.

At least for this, we should have a reddit-like system for peer review, so that comments from all reviewers can be seen by everyone.

This is actually not a bad idea, and some scientific groups are pressuring to move to this model. See Yann LeCun's proposal ( http://www.lecun.com/ex/pamphlets/publishing-models.html ) for an example. ICML almost does this. The review is done privately (due to some well-discussed elsewhere issues with double anonymity), but there's a public discussion site for all papers http://mldiscuss.appspot.com/ . And people do use it, and for some papers you will find important information there.

Anyone got a link to any proper discussion?

Here is a good link. http://rjlipton.wordpress.com/2010/08/08/a-proof-that-p-is-n...

The suspense is killing me. As an observer, I'm worried that there is some small error in the core of his argument that will cause the whole thing to unravel, as is often the case for proofs at this level. But I am hopeful that this may at least be a breakthrough that points to approaches that finally nail this great beast.

Yeah, I'd be curious to see what people who are still working in the field have to say. My red flags went up as soon as I read "statistical" in the abstract, since that could easily imply the common problem of assuming the existence of a secure PRNG. However, I haven't read the entire paper in depth (and likely won't have time to anytime soon), so I don't really know if that common trap was fallen into.

"is the application of probability theory"

My comment still stands.

I wonder, since you said your problem was that it "could easily imply the common problem of assuming the existence of a secure PRNG". Statistical mechanics involves real, true randomness, and the statistical comes in e.g. where you use statistical models of things at the micro scale to explain macro scale behavior. I don't get the impression that it involves statistics in the way you are using the word.

I think that whether P is NP or not could turn out to be undecidable, and in fact one could add either equality or inequality as an independent axiom. The theory of computational complexity is about asymptotic results as the size of the instances goes to infinity, and involves 'there exists' statements whose meaning when applied to infinite sets is typically ambiguous.

For instance, it turned out to be a matter of choice whether you want the set of all subsets of real numbers between zero and one to equal, or to be strictly larger than, the set of so-called measurable sets. And whether or not a set is measurable can be characterized in a very concrete way in terms of the possibility of approximating it by unions of intervals. So the question of whether or not there exist non-measurable sets turned out to depend on 'what you mean by set' in a way which people hadn't thought about before. I suspect the question of whether or not P is NP will depend on what we mean by P and NP in ways which so far no one has thought about.

I think this is getting unwanted publicity. The author never released the proof in public or made any announcement. He only sent it to his expert friends so that they could point out potential flaws. Maybe this needs more research, and putting him in the spotlight is not helping anything.

Actually I think a leak is more helpful than either the author himself releasing the proof publicly or keeping everything under wraps.

Especially since the cold fusion thing[1], researchers have been loath to publicly announce revolutionary findings lest the findings fail to pan out. Researchers don't want to be known as publicity-seeking cranks, they want to be known as earnest and honest academics, which I suspect entails playing to the in-crowd and letting the system work[2]--the system being to talk to your colleagues before hand, release everything through peer-reviewed journals, and if everything passes muster, you've eliminated any risks to your reputation while achieving renown as the guy who proved P != NP.

On the other hand, having the proof publicly available to anyone who can understand it can massively parallelize the process of having it verified, or having errors found and potentially corrected. Meanwhile, the researcher's reputation is maintained, because he did the right thing, and if any errors are found he'll be judged to have acted prudently and conservatively in advance of having his findings reviewed.

[1] Fleischmann and Pons publicly claimed to have discovered a means to induce nuclear fusion at room temperature in 1989, but their results couldn't be replicated.

[2] And the system does work--this is no criticism.

Why isn't this in a peer-reviewed journal?

Because it hasn't undergone the proper level of review in order to be published in one. As it stands, there is no reason to take it seriously until it has been. There have been several posts of such papers here before, and I would highly advise the level of skepticism expressed by a fellow HNer with submissions like this c.f. : http://news.ycombinator.com/item?id=893877

Granted, but this author does not set off any crackpot flags: he's had plenty of prior publications in areas relevant to the proof and works in an industry research lab. Another poster points this out: http://news.ycombinator.com/item?id=1585999

Also, the paper was written using LaTeX. I have a friend who spent a semester helping to edit a small mathematics journal, and, with virtually 100% accuracy, you could tell the crackpot papers from the serious ones based simply on whether they used LaTeX or not.

Submissions in .doc format can also be immediately judged with near 100% accuracy.

Warranted. I just despise hype and sensationalism, particularly those concerning scientific breakthroughs. The title of the submission echoed similar hollow, dramatic claims and triggered my skeptical response.

So: What publicly traded firms benefit from a confirmation that P!=NP and are there any that lose (e.g., that were betting that P might == NP).

There's gotta be money in this news :-)

Assuming P provably != NP, banks and online retailers win (certain forms of encryption even theoretically can't be broken in P time), NSA supercomputer contractors might lose (certain forms of encryption even theoretically can't be broken in P time). UPS and FedEx likely lose (traveling salesman problem is NP-complete). Any other business that hinges on solving hard problems might lose, though counterintuitively, some might win--firms that do a really good job at approximate solutions to NP problems (is ITA an example?) are extremely talented at doing something really hard, whereas if P = NP, it would be easier for any old firm to develop a perfect solution.

But considering the fact that we've all been operating under the assumption that P != NP, the losers don't lose much (if anything) and the winners don't win much (if anything). If P = NP was proven, it's possible but not guaranteed that bank robbers and whatever software firms can pivot fast enough to exploit P = NP get huge windfalls, internet retailers would be ruined, banks who didn't shut off their data links fast enough would be ruined, etc. This worst case scenario hinges on a proof that took the form of a proof by counterexample, which happened to neatly solve an NP-complete problem in efficient P time. In better case scenarios, where other proof forms were used or the P-time solution was a horrific factor like O(n^100,000), online retailers would still lose a little if only based on media hype about the discovery scaring grandmothers away.

Ah, yes. No, I meant for the outside investor with early news of the purported proof who is willing to bet that it holds up.

For example, if there is some crypto company whose business is premised on hedging that a "P == NP" proof is just around the corner - short them. Alternatively, maybe buy the firm we think of as RSA. That kind of thing.

This paper hasn't yet got a lot of press attention and I'm only about 1/4 joking when I say I'm curious as to what effect it will have on various stocks if it isn't quickly debunked.

HP will get some prestige out of this, and not much else will happen short term. Almost everyone was already assuming P != NP.

And that doesn't even scratch the security implications. Cryptography based on complexity theory would become an instant relic. The only cryptography I know of that could stand up to a P vs NP solution is quantum cryptography. Of course, there's the distinct possibility the NSA and other government sponsored institutions are already aware of a solution.

I think there might be a way to solve all n-SAT problems (if reformulated in conjunctive normal form) in O(C) best-case up to O(C * N * N) worst-case time and O(C * N) space (C = number of AND-clauses, N = number of distinct literals across all clauses). Would that still be polynomial and a valid proof, or even possible (not a mathematician I am)? On the other hand, proving P = NP would not be wise, morally seen, would it?

Looking at the comments at https://rjlipton.wordpress.com/2010/08/08/a-proof-that-p-is-... there are many points to clarify. In particular, the intuition from statistical physics does not hold in other examples, and the difference is not clear. So ...

This is sad: This result, if correct, is the mathematical result of the century; however, it still hasn't appeared in Google News or Google trends.

It's a leaked PDF of an unverified proof which has yet to appear in any mainstream news source and is relevant, probably, to no one but computer scientists. So it's not even remotely surprising that it has not yet appeared on Google News. I have no idea what part of this you think is sad.

This result is much more relevant than either Perelman's proof of the Poincaré Conjecture or Wiles' proof of Fermat's Last Theorem, both of which got huge coverage in the mainstream media. If you think this is relevant to "no one but computer scientists" you are naive about the ramifications of the result.

The proof is unverified but most qualified sources say it is one of the best efforts in many years, it could even be the proof. This was the #1 news in most technical blogs yesterday.

The thing is, we can't talk about any result yet. So far, we only have a paper which hasn't even undergone peer review. I'm sure it will get enormous media coverage, but not sooner than it gets published.

* RSA and all our other current crypto algorithms breathe a collective sigh of relief

Wow. 100% of zac’s 724 karma points come from this post & his comments on it.



Everyone has been suspecting or assuming that P is a strict subset of NP for decades. Still, every time someone proposes a serious attempt at proving it, it's front page news and smart people will have to comb over the argument for months to have confidence that it's right.

The opposite proposition -- that P is equal to NP -- is widely doubted. But if there were a proof that P and NP were equal, anyone with a good data set could verify the proof in a few minutes. Just run the proof on a hard 3-SAT or whatever and observe the answer returning significantly before the heat death of the universe.

So what we think is true is insanely hard to verify but what we think is false is blazingly obvious to check. Chalk another one up for irony.
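That check can be sketched as code. Here `claimed_solver` is a hypothetical stand-in for whatever algorithm a P=NP proof might exhibit; we audit it empirically because verifying its answers only takes polynomial time (note the one gap: a solver that always answers "unsatisfiable" can't be caught this way, since checking UNSAT claims is a different, coNP-flavored problem):

```python
import random

def random_3sat(num_vars, num_clauses, rng):
    # Random 3-CNF: each clause picks 3 distinct variables, each negated
    # with probability 1/2 (literals are signed ints, as is conventional).
    return [[rng.choice([v, -v])
             for v in rng.sample(range(1, num_vars + 1), 3)]
            for _ in range(num_clauses)]

def check_answer(clauses, assignment):
    # Polynomial-time verification of a claimed satisfying assignment.
    return all(any(assignment[abs(l)] == (l > 0) for l in c)
               for c in clauses)

def audit(claimed_solver, trials=100):
    rng = random.Random(42)
    for _ in range(trials):
        cnf = random_3sat(20, 85, rng)  # near the hard ratio ~4.26
        answer = claimed_solver(cnf)
        if answer is not None and not check_answer(cnf, answer):
            return False  # caught the solver returning a bad assignment
    return True
```

A dishonest solver that just guesses an assignment gets caught almost immediately, which is the sense in which a constructive P=NP claim would be "blazingly obvious to check".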

Actually that's only possible in one case - a constructive proof that exhibits the algorithm. If, hypothetically, one were to prove that P=NP by showing only that an algorithm exists to solve NP-complete problems in P time, there would be nothing to run. And even given an explicit algorithm, what if its complexity turned out to be O(n^100,000)? That would be P-time, but not easily run on large NP-complete problems.

P is a strict subset of NP. If you can solve the problem in polynomial time, then you can verify a solution simply by generating it.

That means it's a subset and is trivial. To claim it's a strict subset means there's something in NP that's not in P.

See, all these monstrous old companies such as IBM and HP hire pure scientists such as Chaitin and this guy, and every once in a while it pays off. I have a feeling younger companies, with their own research departments, are still very down to earth and look for immediate profit from R&D.

To quote the announcement email which zacs posted earlier:

  This work was pursued independently of my duties as 
  a HP Labs researcher, and without the knowledge of 
  others. I made several unsuccessful attempts these 
  past two years trying other combinations of ideas 
  before I began this work.
So based off this solitary data point, it seems that while they might hire the brains it doesn't necessarily follow that they have free rein to work on esoteric "non-profitable" projects.

I don't know if "paid off" is the right word. If P were equal to NP, HP would probably benefit from finding out first. Assuming this is correct, though, all they get is the satisfaction of having hired the guy who happened to discover the proof in his spare time.

The two main consequences that would follow are (from Wikipedia):

"A proof that showed that P ≠ NP, while lacking the practical computational benefits of a proof that P = NP, would also represent a very significant advance in computational complexity theory and provide guidance for future research. It would allow one to show in a formal way that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place."

"Cryptography, for example, relies on certain problems being difficult. A constructive and efficient solution to the NP-complete problem 3-SAT would break many existing cryptosystems such as Public-key cryptography, used for economic transactions over the internet, and Triple DES, used for transactions between banks. These would need to be modified or replaced."

So basically we can focus on finding good approximations to NP problems and feel safe that this proof won't immediately jeopardize all of our bank accounts.

The above is false (par for Wikipedia). While RSA is based on the difficulty of factoring, DES (and 3DES) are not. The DES cipher is based on substitution/permutation and not number theory.

Also, factoring is known to be subexponential (e.g., GNFS). While there is no known polynomial time factoring algorithm, this may give some evidence that the integer factorization problem might be in P. Factoring is believed not to be NP-complete, and we hope it is not in P. While a proof that P=NP would be disastrous for RSA, a proof that P!=NP does not mean factorization is guaranteed to be safe against future advances in algorithms.

In other words, proof that P!=NP would be an amazing result but someone could still improve factoring algorithms.
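The asymmetry behind RSA can be seen in miniature: verifying a claimed factorization is one multiplication, while finding the factors is the hard direction. This toy sketch uses naive trial division (far slower than GNFS, but enough to show the shape of the problem); numbers this small are of course trivially factorable and nothing like real cryptography:

```python
def verify_factors(n, p, q):
    """Polynomial in the bit-length of n: a single multiplication."""
    return p * q == n and p > 1 and q > 1

def trial_division(n):
    """Exponential in the bit-length of n: try divisors up to sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

modulus = 3233  # = 61 * 53, the classic textbook RSA example
factors = trial_division(modulus)
```

Better factoring algorithms shrink the gap between these two functions without closing it, which is exactly the parent's point: P != NP alone would not pin down where factoring sits.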

What to do? Read paper or hit next months deadline...

P = NP if and only if N = 1 or P = 0

Beat you to it! http://twitter.com/peterc/status/20667523417

I didn't post it here because I realized it's a joke most people won't get and those who do get it won't find it funny ;-) This joke could even be a Reddit vs HN shibboleth as I just saw it made there too and it's being voted up.

P != NP if and only if N != 1 and P != 0.

That paper is written just a smidgen above my reading level.

I always felt that it was logical that P != NP. However, I certainly couldn't prove it.


Can someone call Knuth? If he gives it the thumbs up then I'll believe it.

I'm not sure whether Knuth has the ergodic-theory background. Perhaps call Terry Tao?

Thanks for the info. I originally posted my request (half serious, half jokingly) when not many explanations in the thread were there to respond to the validity of the paper. Thanks for your insight.

Proving P ≠ NP is like finding the Higgs Boson. It doesn't change anything and a lot of people spent a lot of time showing that they have spent a lot of time changing nothing.
