Date: Fri, 6 Aug 2010 21:28:39 +0000
Subject: Proof announcement: P is not equal to NP
Dear Fellow Researchers,
I am pleased to announce a proof that P is not equal to NP, which is attached in 10pt and 12pt fonts.
The proof required the piecing together of principles from multiple areas within mathematics. The major effort in constructing this proof was uncovering a chain of conceptual links between various fields and viewing them through a common lens. Second to this were the technical hurdles faced at each stage in the proof.
This work builds upon fundamental contributions many esteemed researchers have made to their fields. In the presentation of this paper, it was my intention to provide the reader with an understanding of the global framework for this proof. Technical and computational details within chapters were minimized as much as possible.
This work was pursued independently of my duties as an HP Labs researcher, and without the knowledge of others. I made several unsuccessful attempts these past two years, trying other combinations of ideas, before I began this work.
Comments and suggestions for improvements to the paper are highly welcomed.
Principal Research Scientist
I assume he included the paper in two font sizes to suit the reader's preference; no deeper meaning.
In fact, one may argue that by FOCS we'll have had so many graduate students, reading groups, and reviewers pore over this paper that we'll either have a consensus or have found an error.
I mean, often people discover problems in proofs --- but if the result was beautiful enough, they are usually able to repair the proofs. It's like debugging. (And I mean it, thanks to the Curry-Howard isomorphism.)
Also, from the blog post: "I see someone else has uploaded the paper. I should point out that in the email thread I got, Stephen Cook said 'This appears to be a relatively serious claim to have solved P vs NP.'" I would love to see that email thread.
There are good, free, PDF readers for every major operating system and for every major mobile device. I wish people would just link directly to PDFs.
Agreed, and the benefit of doing that is you get a Scribd link come up on HN too :-)
A big toolbar at the top, a big toolbar at the bottom, a big sidebar, none of which have any information I want to see; ads all over, and I have to sign up for their website to download the PDF.
That's not even as good as Acrobat (last I used it), never mind a lightweight PDF reader, never mind Chrome's integrated PDF rendering.
- The HTML5 transformation is not instant. Everyone gets the Flash reader until the HTML5 version is ready, which can take anywhere from minutes to hours (I don't know the average time).
- The HTML5 and Flash versions are both worse than a plain old PDF viewer. Have you used the integrated PDF viewer in Chrome lately? PDF under Chrome makes reading PDFs fun again; it's the fastest, smoothest PDF viewer I have ever used.
Also, in order to download the PDF version I have to either register or sign in with my Facebook account? What nonsense is that?
*NYT being the exception.
I suppose he could have safely claimed this with Wiles's original proof as well, since it did have at least one non-trivial flaw that required amending. It's quite possible that the proof contains flaws, but will still hold up in the end, because the flaws can be corrected. But I feel that's a bit of a lame gamble.
But I cannot make a qualified guess if this might be the case here.
> If P≠NP has indeed been proved, my life will change so dramatically that having to pay $200,000 will be the least of it. […] If P≠NP is proved, then to whatever extent theoretical computer science continues to exist at all, it will have a very different character.
> I can afford $200k, but not in the same way Bill Gates can afford $200k.
As randomwalker says, it's not entirely obvious that Aaronson is betting against the proof.
* As far as I know this paper wasn't circulated for informal peer review before being made public; I heard no talk on the grapevine. (Edit: apparently it was circulated and someone other than the author made it public.)
* Therefore a proper assessment is going to take a while. Until then we can only speculate :-)
* While the crank attempts at P =? NP are statistically much more common (http://news.ycombinator.com/item?id=347295), this isn't one of them. The author is a legit computer scientist: http://www.hpl.hp.com/personal/Vinay_Deolalikar/
* On the other hand he hasn't published much on complexity theory and isn't known in that community. Which is weird but not necessarily a red flag.
* Looking at his papers, it's possible he's been working on this for about 5+ years -- he has two threads of research, basic and industrial, and the former line of publications dried up around 2004.
* On the other hand I don't think anyone knew he was working on this. The only known serious effort was by Ketan Mulmuley at U Chicago.
* It has been known that the straightforward combinatorial approaches to P =? NP aren't going to work, and therefore something out of left field was required (http://web.cs.wpi.edu/~gsarkozy/3133/p78-fortnow.pdf). Mulmuley's plan of attack involved algebraic geometry.
* This paper uses statistical physics. This approach doesn't seem to have been talked about much in the community; I found only one blog comment http://rjlipton.wordpress.com/2009/04/27/how-to-solve-pnp/#c... which mentions the survey propagation algorithm. (Deolalikar's paper also talks about it tangentially.)
* If the statistical physics method used here is powerful enough to resolve P != NP, then there's a good chance it is powerful enough to have led to many smaller results before the author was able to nail the big one. It's a little weird we haven't heard anything about that earlier.
* Finally, since the author is using physics-based methods, there's the possibility that he is using something that's a "theorem" in physics even though it is technically only a conjecture and hasn't actually been proven. Physicists are notorious for brushing technicalities under the rug. It would be very unlikely that the author didn't realize that, but still worth mentioning.
* If that is indeed what happened here, but the rest of the proof holds up, then we would be left with a reduction from P != NP to a physics conjecture, which could be very interesting but not ground breaking.
Conclusion: overall, it certainly looks superficially legit. But with non-peer-reviewed solutions of open problems there's always a high chance that there's a bug, which might or might not be fixable. Even Andrew Wiles's first attempt at FLT had one. So I wouldn't get too excited yet.
> If the statistical physics method used here is powerful enough to resolve P != NP, then there's a good chance it is powerful enough to have led to many smaller results before the author was able to nail the big one. It's a little weird we haven't heard anything about that earlier.
Which sucks, but intermediate results are important for science as a whole; if there's significant financial reason to withhold them, we're no better than the alchemists.
How history remembers you only matters after you're dead. Can you really tell someone to live a life of solitary confinement? The isolating parts of academia honestly suck. But everyone convinces themselves that academia is so great, in part because so many have gone before, suffering, down the same path.
Look at all the theoretical physicists and mathematicians. Do you think these people are HAPPY? Remember, if you leave math you are "dead." It's about sacrifice, not about your health or happiness as a human being.
The best part of academia is the collaboration, the social aspects. As for the environment, it could definitely be renovated to accommodate humanity. Obviously that will never happen. So there need to be big CAUTION signs posted whenever someone lauds the greatness of academia, saying it is not all fun and games; lots of people spend decades locked in offices, hunched over pieces of paper.
Whenever I read these HN screeds against academia, I wonder what they are comparing it against. In my experience academia was way more interesting and fun than working as a professional programmer. The only good thing about the latter is that it pays a lot.
However, I think academia is for the most part just that sort of environment; I have commiserated with numerous PhD students who say the same thing. If one is into computers, both academia and computer engineering as career tracks are rather isolating. Some seem to like it, while others get lonely; perhaps it's something psychological, and I'm not really cut out to be such an introvert. Who knows.
I don't think academia is a bad career as long as one can get tenure at a good school, or in a good industry lab, so as to work with good people. I guess whether one can take it psychologically just depends on the individual.
Although I do agree that maybe if one had kids+wife or worked hard to be social, the 8 hours of isolation a day wouldn't be that bad. It might even be relaxing! So maybe professor is a good job for one in old age. But still I would worry about causing my students to suffer through isolation.
I guess I just wanted to caution people that a career in computers and/or academia can be very isolating unless one picks the right path. Moderation in all things.
I fail to understand why you paint such a bleak picture of a maths/physics PhD's life. Professors have a lot of interaction with students, so it is not exactly solitary confinement. Your picture of academia is too dark.
I agree that there are people who get stuck doing one post-doc after another and never get tenure; that surely sucks for them. But there are also plenty of physics and mathematics PhDs who end up as quants on Wall Street and earn huge money.
If you read posts or articles written by professors advising aspiring PhD students, you will rarely find them portraying PhD life as fantasy-filled. So in my opinion the caution signs are already there.
Though I was just reading the synopsis, and the statistical physics part seems to be proven: "The 1RSB ansatz of statistical mechanics says that the space of solutions of random k-SAT shatters into exponentially many clusters of solutions when the clause density is sufficiently high. This phase is called 1dRSB (1-Step Dynamic Replica Symmetry Breaking) and was conjectured by physicists as part of the 1RSB ansatz. It has since been rigorously proved for high values of k. It demonstrates the properties of high correlation between large sets of variables that we will need."
I did not dig into the references yet, though.
Naturalization: This proof is not combinatorial, it is based on a model of formal logic. This is addressed by the author directly.
Relativization: This is proof by contradiction and relies on understanding the solution space of k-SAT instances, not on diagonalization.
Algebrization: This proof looks beyond the size of a small circuit into how the circuit's inputs interact and how this interaction spreads throughout the circuit.
Regardless, from reading the synopsis of the proof, I'm skeptical to the point of disinterest.
(I apologize for not providing a link -- but the only URL I was able to find from Google: http://www.cs.umd.edu/~gasarch/papers/poll.pdf is broken.)
"The implications of [P ?= NP] on the general philosophical question of [..] whether human creativity can be automated, would be profound."
How so? If P!=NP, that is obeyed by the brain as much as by computers, and whatever trick our brains use to be creative despite this will be available to computers as well, regardless of P?=NP. What am I missing?
What it says is that if P == NP then the implications are profound for applications such as cryptography and on the general question of whether human creativity can be automated.
You can see why P == NP would have profound implications for crypto: many of our widely used techniques would turn out to, um, be easy to crack.
I believe the intent regarding human creativity is that humans often end up with a practical need to deal with some real-world instance of a sizeable NP-complete problem. We don't know any way to efficiently find the optimal solution, so we creatively deal with what we can figure out (the traveling salesman's optimization problem, placed into the real world, might count). Are our creative guessworks and heuristic discoveries essential? Or can automation based on P == NP eliminate the need for them - and do better than we do on these problems?
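As a tiny illustration of that kind of heuristic guesswork (a toy sketch; `nearest_neighbour_tour` is a made-up name), here is the classic nearest-neighbour heuristic for the travelling salesman problem: fast and often decent, but with no optimality guarantee.

```python
import math

def nearest_neighbour_tour(points):
    """Greedy TSP heuristic: always visit the closest unvisited city.
    Runs in polynomial time, but can be far from the optimal tour."""
    unvisited = list(range(1, len(points)))
    tour = [0]                            # start at city 0
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (0, 1), (2, 0), (3, 1)]
print(nearest_neighbour_tour(cities))
```

An efficient exact algorithm (which P = NP would imply exists) would make this sort of creative compromise unnecessary.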
Let me put it another way: let's say we finally discover that it is possible to travel to the past. However, the energy needed to travel back in time is the equivalent of a million Suns, because that is the amount of energy needed to warp space enough for time to reverse itself. And we found proof of this when we observed the black holes of two galaxies colliding. So even if it were possible, it would still not matter; practically speaking, it would still not be possible to travel back in time.
My gut feeling is that if it were to turn out that P == NP, that would not necessarily mean we would suddenly be able to find the exact algorithms that make cryptographic algorithms easy to crack.
Also, gut feeling doesn't work in math. I mean, not at all.
But RSA (and probably other) cryptosystems aren't even thought to be NP-complete to solve... so they're probably unaffected. So I guess it depends on the implementation details of the cryptosystem. It probably wouldn't render them wide open, but it might make previously good keys far weaker, and mean that much more work must be spent on encryption for it to be safe (rather than now, where a safe key can be used almost instantly on an embedded platform).
O(n^10000) is polynomial but really not easy.
If I remember correctly, optimization also collapses, so finding the best possible piece of music would be as easy as deciding whether a piece of music is good.
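The usual way to see that collapse (a toy sketch with a made-up `oracle`): if the yes/no question "is there a solution with value at least k?" is easy, then binary search over k pins down the optimal value with only logarithmically many oracle calls.

```python
from itertools import combinations

def best_value(lo, hi, is_achievable):
    """Largest k in [lo, hi] with is_achievable(k) True, assuming
    monotonicity: if value k is achievable, so is every smaller value."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if is_achievable(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Toy stand-in for an NP decision oracle: "is there a subset of `items`
# whose sum is at least k but no more than the budget of 10?"
items = [3, 5, 4]

def oracle(k):
    return any(
        k <= sum(c) <= 10
        for r in range(len(items) + 1)
        for c in combinations(items, r)
    )

print(best_value(0, sum(items), oracle))
```

Here the oracle itself is brute force, but if P = NP gave us a polynomial-time oracle, the whole optimization would become polynomial too.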
What if the polynomial is of order 1000 or higher?
I understand asymptotics and the convention that polynomial algorithms are "efficient". However, it might turn out that P=NP but the polynomial is so big that the found algorithm is impractical for normal sized instances.
In essence, the question P = NP? asks: if 'yes'-answers to a 'yes'-or-'no'-question can be verified "quickly" can the answers themselves also be computed "quickly"?
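As a concrete (toy) sketch of "verify quickly" versus "compute quickly", here is SAT: checking a proposed assignment against a CNF formula takes polynomial time, while the only generally known way to find one is to try exponentially many candidates.

```python
from itertools import product

# A CNF formula: a list of clauses; each clause is a list of signed
# variable indices (positive literal = variable, negative = its negation).
FORMULA = [[1, -2], [-1, 2], [2, 3]]

def verify(formula, assignment):
    """Polynomial-time check: does the assignment satisfy every clause?"""
    return all(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in formula
    )

def brute_force_solve(formula, n_vars):
    """Exponential search: try all 2^n truth assignments."""
    for bits in product([False, True], repeat=n_vars):
        if verify(formula, bits):
            return bits
    return None

print(brute_force_solve(FORMULA, 3))
```

P = NP would mean the gap between `verify` and `brute_force_solve` can always be closed to polynomial.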
If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss...
Let's talk math: In some very formal, reductionist sense, what we're doing when we do math is to start with a set of axioms, operate on them using a fairly small set of logical rules, and arrive at theorems that seem interesting.
If we do this very carefully -- much more carefully than a human would ever normally do, showing 100% of our work -- it's trivial to check such a proof automatically. You just look at each step, and verify that that step follows from the axioms + logical rules you're using + previous steps. This checking is clearly (handwave) polynomial: at worst, at each step N, you have to examine every axiom, every logical rule, and each prior step to make sure that step N+1 is legal.
So proof checking is in P.
Let's say I have a set of axioms, and a logic, and I want to know whether a theorem T is true given that system. One strategy I could use is to decide to only consider proofs of length less than N, and generate every legal theorem that I can reach in fewer than N logical steps. When I'm done, I check my list of proven theorems and see if any of them match T. Since I can always check my answer in polynomial time, I know this is in NP.
The try-everything-possible algorithm has the downside of being a bit exponential (since we expect N to be very (very!) large for any interesting theorem) and so also a bit useless.
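A toy version of that try-everything prover (an entirely made-up miniature logic, not real proof theory): facts are strings, the single inference rule is modus ponens, and we saturate for at most N rounds looking for the target.

```python
def provable(axioms, target, max_steps):
    """Generate everything derivable within max_steps rounds of the single
    rule (from X and X->Y, derive Y), and check if target appears."""
    known = set(axioms)
    for _ in range(max_steps):
        derived = set()
        for fact in known:
            if "->" in fact:
                lhs, rhs = fact.split("->", 1)
                if lhs in known:
                    derived.add(rhs)
        if target in known | derived:
            return True
        if derived <= known:      # saturation: nothing new will ever appear
            return False
        known |= derived
    return target in known

axioms = ["p", "p->q", "q->r"]
print(provable(axioms, "r", max_steps=10))
```

Even in this trivial system the search space grows quickly; with real axioms and rules it explodes exponentially in the proof length, which is exactly the commenter's point.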
But what if a P=NP proof can provide guidance on how we should combine our axioms in order to reach theorem T? We suddenly have not a proof-checker, but a proof-generator! Anything that we can state in a formal language, we can simply ask the prover to prove -- it will either give a proof, or say that a proof of size < N doesn't exist. (That wouldn't be a disproof, and it's a Goedel/halting problem kind of argument that we may never really know how large N needs to be.)
Finding new proofs is a large part of what's considered creative in mathematics. It's hard to over-state how math might change with a tool like that available. The field has seen any number of revolutions before -- e.g., the effort of hundreds who worked on the algebraic roots of polynomials pretty much only survives in homework exercises nowadays -- but automating proofs would be Big.
(Finding interesting conjectures is valuable, too, but I doubt Erdos would be nearly as famous if he'd only posed his questions without his resume of theorems behind them.)
If someone proved P=NP then unless we're mistaken about how hard we have to work to think creatively, it's likely that computers will end up being better than us at it, using this algorithm that I suspect would be difficult to backport to our brains. :)
Then again, it's possible that skilled practitioners of any creative field have stumbled across this polynomial time algorithm and don't know it, but that seems unlikely.
Mathematical statements are generally easy to state. If P=NP were true, then a valid proof could be automatically and efficiently discovered. This would be a kind of creativity automated by technology.
Perhaps it's a legacy from the idea of a life-force or vis viva.
All of which means it's purely a motivating force, not a requirement. We've been motivated to be creative, so we are. Programs can be similarly guided towards ends we desire; perhaps the most direct analogy is genetic programming, where you determine which "breed" by their "fitness" value.
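A minimal genetic-algorithm sketch of that "breed by fitness" idea (a toy, with a fixed target string standing in for the end we desire; all names here are made up).

```python
import random

TARGET = "creative"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    """Count of positions that match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.1):
    """Randomly replace each character with probability `rate`."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in s
    )

def evolve(pop_size=50, generations=500, seed=0):
    random.seed(seed)
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            break
        parents = pop[: pop_size // 5]    # selection: keep the fittest fifth
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

print(evolve())
```

The fitness function is the "motivating force": the program isn't creative in any deep sense, but selection pressure steers it toward the desired end all the same.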
You also have to consider that our creativity has been created by the equivalent of billions of individuals with more computational power than the most powerful computer in the world right now, running for thousands of years of recorded history. Absolutely nothing we have done with computers approaches this, especially when you consider just how unexplored computer A.I. really is. If we call it ten orders of magnitude more "power", we're making a phenomenally conservative estimate. What could you do with a computer over a billion times more powerful than everything that Google owns?
I'd say our less-than-10-billionth computational efforts have in fact created a couple creative things. Which puts them about on par.
If you mean "can never be simulated in machines in full" I would disagree.
The opposite proposition -- that P is equal to NP -- is widely doubted. But if there were a constructive proof that P and NP were equal, anyone with a good data set could verify it in a few minutes: just run the algorithm from the proof on a hard 3-SAT instance or whatever and observe the answer returning significantly before the heat death of the universe.
So what we think is true is insanely hard to verify but what we think is false is blazingly obvious to check. Chalk another one up for irony.
Edit: As someone else pointed out a few minutes ago http://www.hpl.hp.com/personal/Vinay_Deolalikar/ confirmations began arriving today. How soon before Vinay has a Wikipedia entry? For those who are qualified to evaluate this, I suspect a consensus as to validity will develop within weeks if not days. If thumbs up, he must be worthy of one of the outstanding large-cash-value math prizes. Perhaps even the Nobel?
Edit: Also the Goedel prize
There was a paper a while ago about how recognizing bad securities is an NP-hard problem. So this is applicable. (Tongue-in-cheek.)
"Manuscript sent on 6th August to several leading researchers in various areas. Confirmations began arriving 8th August early morning. Final version of the paper to be posted here shortly. Stay tuned. "
And besides, the question is over the entire proof strategy and not the specific techniques involved. It seems plausible that one could give a relativizing proof using some method of calculation from statistical mechanics, for example.
Taking methods from a different domain in no way shows that the mechanics of those methods don't reduce to the same mechanics as a "natural proof". Further, it makes the actual process much more obscure.
Indeed, statistics in general seems like a hard approach for overcoming the Razborov-Rudich limit, since the limit is on showing that a "typical" function has certain properties, and anything statistical seems like it would be "typical".
But who knows really; maybe Razborov-Rudich isn't correct, since in full generality it relies on things that "mathematicians generally believe" (the existence of certain pseudorandom functions) rather than things that are definitely proven - as far as I know.
Such proofs are said to relativize -- i.e., they are still valid relative to any oracle.
"Model theory" might be closer to "a" technique or at least a somewhat distinct set of techniques.
And as others have noted, there's reason to think a proof from statistical mechanics would relativize.
Assuming this isn't a hoax, and the proof holds up, this is front-page news kind of big deal.
As I recall, this has way more real-world practical usage than the proof of Fermat's Last Theorem.
Read the "Consequences of Proof" section in the Wikipedia article here:
 The responses below are correct. Proving P=NP means the world gets turned upside down. P != NP is already sort of assumed.
Looking at his page at HP Research, he has several other publications in legitimate areas, and seems to be a well qualified researcher.
He may be wrong in this paper, but there's no reason to suspect it's a hoax or that he's some kind of kook (as many other comments have sort of implied).
b) The practical consequences of P=NP are immense. The practical consequences of P≠NP are that we can stop looking for computational unicorns and fairies.
We have for some time thought that P≠NP, but boy do people like those unicorns and fairies.
Similarly, most people believe the world is round, and we go around the Sun. This hasn't prevented serious flat-Earthers nor geocentrists from existing.
In 'a' above, didn't you mean to write 'inequality of P and NP'?
Could you explain what some of those consequences be? Does P = NP really need to be proved to achieve those practical consequences? Can't it just be assumed to be true and then see what the result is?
NP questions take a long time to answer.
P questions take much less time.
The statement "P = NP" means that any NP question can be transformed into a P question. Answering the P question gives you an answer to the NP question.
So, if P = NP, then we can answer questions that we thought were slow, much more quickly than we would have thought possible.
Proving P = NP would (presumably) give you that method for transforming NP problems into P problems. So you would suddenly be able to figure things out easily that currently are hard and take a lot of computing power.
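The "transforming" machinery is a polynomial-time reduction. As a hedged sketch (the standard textbook encoding; `coloring_to_sat` is a made-up name), here is graph k-coloring, NP-complete for k=3, rewritten as SAT clauses; a fast SAT algorithm would then answer the coloring question too.

```python
from itertools import combinations

def coloring_to_sat(n_nodes, edges, k=3):
    """Encode "is this graph k-colorable?" as CNF clauses.
    SAT variable v*k + c + 1 (1-based) means "node v gets color c"."""
    def var(v, c):
        return v * k + c + 1
    clauses = []
    for v in range(n_nodes):
        clauses.append([var(v, c) for c in range(k)])     # each node gets a color
        for c1, c2 in combinations(range(k), 2):          # ...and at most one
            clauses.append([-var(v, c1), -var(v, c2)])
    for u, w in edges:                                    # adjacent nodes differ
        for c in range(k):
            clauses.append([-var(u, c), -var(w, c)])
    return clauses

cnf = coloring_to_sat(3, [(0, 1), (1, 2), (0, 2)])        # a triangle
print(len(cnf))
```

The encoding itself runs in polynomial time; that's why one fast algorithm for SAT would drag every NP problem into P with it.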
It's often assumed that P = NP would make cryptographic problems much easier to break, but apparently that's not really the case: http://world.std.com/~reinhold/p=np.txt
(Though, as you say, there could still be an O(n) algorithm - this is not known).
What I said is true even for linear programming (a P-complete problem): it has both polynomial algorithms and efficient ones, though the efficient ones (simplex) are actually not polynomial in the worst case. :)
The first historical example of an important non-constructive proof was Hilbert's Basis Theorem in 1888, which solved a famous problem posed by Paul Gordan. It is now viewed as, in some sense, the first salvo in the debate over constructivism in the 20th century. See http://en.wikipedia.org/wiki/David_Hilbert#The_finiteness_th... for more.
Wikipedia should answer most of your questions regarding the consequences:
NP problems can be solved, it's just slow (think brute force method, running through all the possibilities).
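For instance (a toy sketch), subset sum is NP-complete, and the brute-force method below always finds the answer by running through all 2^n subsets; it just becomes hopeless as n grows.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute force: check every subset of nums for one summing to target.
    Always correct, but the number of subsets doubles with each element."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return subset
    return None

print(subset_sum([3, 9, 8, 4, 5, 7], 15))
```

Six elements means 64 subsets; sixty elements would mean over 10^18, which is the whole "slow" in NP.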
Computer Scientists have been assuming for years that P does not equal NP, so they've been doing research with that assumption already in place. Proving that the assumption is correct won't hugely change anything.
I'm not trying to knock down the greatness of this proof, but the repercussions aren't going to be that major.
The suspense is killing me. As an observer I'm worried that there is some small error in the core of his argument that will cause the whole thing to unravel, as is often the case for proofs of this level. But I am hopeful that at least this may be a breakthrough that points to approaches to finally nail this great beast.
My comment still stands.
Especially since the cold fusion thing, researchers have been loath to publicly announce revolutionary findings lest the findings fail to pan out. Researchers don't want to be known as publicity-seeking cranks; they want to be known as earnest and honest academics, which I suspect entails playing to the in-crowd and letting the system work: talk to your colleagues beforehand, release everything through peer-reviewed journals, and, if everything passes muster, you've eliminated any risks to your reputation while achieving renown as the guy who proved P != NP.
On the other hand, having the proof publicly available to anyone who can understand it can massively parallelize the process of having it verified, or having errors found and potentially corrected. Meanwhile, the researcher's reputation is maintained, because he did the right thing, and if any errors are found he'll be judged to have acted prudently and conservatively in advance of having his findings reviewed.
 Fleischmann and Pons publicly claimed to have discovered a means to induce nuclear fusion at room temperature in 1989, but their results couldn't be replicated.
 And the system does work--this is no criticism.
There's gotta be money in this news :-)
But considering the fact that we've all been operating under the assumption that P != NP, the losers don't lose much (if anything) and the winners don't win much (if anything). If P = NP were proven, it's possible but not guaranteed that bank robbers, and whatever software firms could pivot fast enough to exploit P = NP, would get huge windfalls; internet retailers would be ruined, banks that didn't shut off their data links fast enough would be ruined, etc. This worst-case scenario hinges on a constructive proof, one that exhibits an algorithm neatly solving an NP-complete problem in efficient polynomial time. In better-case scenarios, where the proof was non-constructive or the polynomial-time solution had a horrific exponent like O(n^100,000), online retailers would still lose a little, if only because media hype about the discovery scared grandmothers away.
For example, if there is some crypto company whose business is premised on hedging that a "P == NP" proof is just around the corner - short them. Alternatively, maybe buy the firm we think of as RSA. That kind of thing.
This paper hasn't yet got a lot of press attention and I'm only about 1/4 joking when I say I'm curious as to what effect it will have on various stocks if it isn't quickly debunked.
The proof is unverified but most qualified sources say it is one of the best efforts in many years, it could even be the proof. This was the #1 news in most technical blogs yesterday.
> This work was pursued independently of my duties as a HP Labs researcher, and without the knowledge of others. I made several unsuccessful attempts these past two years trying other combinations of ideas before I began this work.
"A proof that showed that P ≠ NP, while lacking the practical computational benefits of a proof that P = NP, would also represent a very significant advance in computational complexity theory and provide guidance for future research. It would allow one to show in a formal way that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place."
"Cryptography, for example, relies on certain problems being difficult. A constructive and efficient solution to the NP-complete problem 3-SAT would break many existing cryptosystems such as Public-key cryptography, used for economic transactions over the internet, and Triple DES, used for transactions between banks. These would need to be modified or replaced."
So basically we can focus on finding good approximations to NP problems and feel safe that this proof won't immediately jeopardize all of our bank accounts.
Also, factoring is known to be subexponential (e.g., GNFS). While there is no known polynomial-time factoring algorithm, this may give some evidence that the integer factorization problem might be in P. Factoring is believed not to be NP-complete, and we hope it is not in P. While a proof that P=NP would be disastrous for RSA, a proof that P!=NP does not mean factorization is guaranteed to be safe against future advances in algorithms.
In other words, proof that P!=NP would be an amazing result but someone could still improve factoring algorithms.
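As a sketch of how factoring algorithms improve without any P vs NP breakthrough, here is Pollard's rho (far simpler than GNFS, but already much better than trial division for suitable composites).

```python
import math

def pollard_rho(n, c=1):
    """Pollard's rho: find a nontrivial factor of composite n in roughly
    O(n^(1/4)) steps, versus O(n^(1/2)) for trial division."""
    if n % 2 == 0:
        return 2
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n            # tortoise: one step
        y = (y * y + c) % n            # hare: two steps
        y = (y * y + c) % n
        d = math.gcd(abs(x - y), n)
    if d == n:                         # unlucky cycle: retry with a new constant
        return pollard_rho(n, c + 1)
    return d

n = 8051                               # = 83 * 97
f = pollard_rho(n)
print(f, n // f)
```

Algorithmic advances like this, and like GNFS after it, are exactly why "P != NP" alone would not certify RSA as safe.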
I didn't post it here because I realized it's a joke most people won't get and those who do get it won't find it funny ;-) This joke could even be a Reddit vs HN shibboleth as I just saw it made there too and it's being voted up.