There is also another interesting article by Arkady Bolotin at Ben-Gurion University in Israel, who argues that the lack of macroscopic quantum behavior implies P!=NP. Basically, macroscopic quantum behavior would imply that the universe is computing the solution to an NP-hard problem, in violation of currently accepted laws of physics. Also, see this debate between Bolotin and an educated commenter.
: http://www.scottaaronson.com/papers/philos.pdf [pdf]
I'm pretty sure this is BS.
The linked medium post does nothing to defend the implicit premise that the universe can be computed in a reasonable amount of time by classical computers.
Guess it wasn't clear enough that the two points aren't related.
"Similarly, Stephen Cook says:
...it would transform mathematics by allowing a computer to find a formal proof of any theorem which has a proof of a reasonable length, since formal proofs can easily be recognized in polynomial time. Example problems may well include all of the CMI prize problems."
If P = NP, the polynomial may well be something like x^100 or x^(# of atoms in the Universe).
"It is also possible that a proof would not lead directly to efficient methods, perhaps if the proof is non-constructive, or the size of the bounding polynomial is too big to be efficient in practice."
Given the long-standing nature of the problem, my guess is that some new techniques or insights will be required to solve it one way or the other. Those may well give clues to solving NP/NP-complete problems efficiently.
Until someone actually does it though it's still just speculation.
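To put rough numbers on the "huge exponent" worry (my own back-of-the-envelope figures, not from the survey), here is a small Python calculation. The budget figures are assumptions chosen to be absurdly generous:

    # Assumed budget (hypothetical): 10^9 machines * 10^18 ops/sec * ~4*10^17
    # seconds (roughly the age of the universe) ~= 4 * 10^44 operations total.
    budget = 1e9 * 1e18 * 4e17
    # An x^100 "polynomial-time" proof search on a 1,000-symbol statement:
    cost = 1000.0 ** 100              # = 10^300 steps
    print(f"budget ~ {budget:.0e} ops, cost ~ {cost:.0e} ops")
    # budget ~ 4e+44 ops, cost ~ 1e+300 ops: polynomial, yet hopelessly out of reach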
Of course, why this might be the case remains to be investigated.
The best approximation algorithm (it's NP-hard) runs in time O(n^(10^100)):
For more examples see here http://cstheory.stackexchange.com/questions/6660/polynomial-...
Do you have examples of large-constant algorithms in P that are not approximations of NP-Hard problems?
One example is the set of consequences of the Robertson-Seymour theorem, which (non-constructively) asserts that a large number of graph problems "are in P". For instance, we know that testing whether a graph can be embedded in the torus is in P, but the algorithm in question is completely infeasible.
I would expect a positive answer to P = NP to be completely useless in practice. A negative answer would similarly be rather unexciting by itself. What would be exciting are the tools for getting this answer...
One of those is that, according to classical mathematics, there are concrete problems with polynomial-time algorithms that there is no way of producing and, even if produced, no way of verifying. (Literally no way of verifying: pick an axiom system, and there are problems for which the question of whether there is one more forbidden minor is undecidable in said axiom system.)
If you doubt that such algorithms have any real existence, you might be a closet constructivist...
That's basically Donald Knuth's position. http://www.informit.com/articles/article.aspx?p=2213858
This can lead to large constants, but doesn't have to. It depends on the value of the fixed parameter.
That finite list can be extremely long indeed. And in some cases, there is no way of finding or verifying the full list.
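To make the Robertson-Seymour point concrete, here is a minimal Python sketch of what membership testing via forbidden minors looks like. The helper `contains_minor` is hypothetical: the theorem guarantees an O(n^3) minor test exists for each fixed H, but no practical general implementation does, and for many families the list itself is unknown.

    def in_minor_closed_family(G, forbidden_minors, contains_minor):
        """Decide membership in a minor-closed graph family via its finite
        forbidden-minor list (guaranteed to exist by Robertson-Seymour).

        contains_minor(G, H) -- hypothetical routine testing whether H is a
        minor of G; for fixed H this is O(n^3), but with astronomical constants.
        """
        return not any(contains_minor(G, H) for H in forbidden_minors)

    # Example: planarity is minor-closed with forbidden minors K5 and K3,3
    # (Wagner's theorem); torus embeddability is also minor-closed, but its
    # forbidden-minor list, while finite, is still unknown.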
As an example, an algorithm to find the optimal solution to a Rubik's cube would actually be very difficult to produce. However, finding some solution in a short enough time frame is quite easy. This is more true the larger the cube you try to solve.
Contrast this with problems such as encryption, which we have specifically constructed so that there is no "best answer"; rather, there is only a single answer that matters.
So complexity theory is confirming your intuition, which is that 'optimization type problems' are hard.
What about sequencing life? Supposedly we share a lot at the genomic level with animals we are vastly different from. Could a large degree of the polynomial that is life explain that divergence?
I am assuming you are pushing the same angle as the other poster. That is, most optimization problems do not actually have "large constants" in the exponents. So, since most of what developers think of as "hard" problems are almost all classical NP problems, maybe it makes sense to look at other things we don't typically think of as computational.
Could just be nonsensical, though. I fully accept that.
Edit: I meant "small" coefficients above. Apologies.
NP: probably O(k^n), k > 1, where k close to 1 makes it 'fast', but still exponential
The objection is that k can be big for some polynomial algorithms but small (near 1) for other exponential ones, so we can have 'slow' polynomial algorithms and 'fast' exponential ones.
But in practice, k is usually small for polynomial algorithms and not close to 1 for exponential ones so simply knowing whether an algorithm is exponential or polynomial tells you something.
The 'hard' problems you gave me all (probably) fall into the exponential runtime class, whereas the easier ones (like finding any solution to a Rubik's cube) fall into the polynomial runtime class. So your intuition of 'hard' vs. 'easy' matches complexity theory's evaluation, even though theoretically there could be really slow 'easy' problems and really fast 'hard' ones.
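A quick numeric illustration of that point (my own numbers, not from the thread): a 'slow' polynomial like n^10 dwarfs a 'fast' exponential like 1.001^n until n gets quite large, while 2^n overtakes n^2 almost immediately.

    import math

    # Print step counts as powers of ten for a few input sizes.
    for n in (10, 100, 10_000, 1_000_000):
        print(f"n={n:>9}:"
              f"  n^2 ~ 10^{2 * math.log10(n):.0f}"
              f"  2^n ~ 10^{n * math.log10(2):.0f}"
              f"  n^10 ~ 10^{10 * math.log10(n):.0f}"
              f"  1.001^n ~ 10^{n * math.log10(1.001):.0f}")
    # 1.001^n only overtakes n^10 somewhere past n ~ 100,000.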
Internet lore and popular media then assume that, for example, all encryption will immediately be trivially broken. Mathematically it may be true, since then all NP problems would be known to be polynomial, but there's still the issue of the practical steps involved in factorising a huge number, which may have enormous constants or very large exponents.
Or if it turns out we can reduce it to some other known polynomial problem, there's still the actual transformation which itself may be polynomial with large constants or exponents.
Maybe you meant that factorization hasn't been proven to be NP-complete, which is true. In fact it would be very surprising if it was NP-complete, because that would imply other surprising things (e.g. NP = co-NP).
Also, as others have pointed out, the constants involved in any "generic" algorithm for solving any NP problem in polynomial time may be astronomical, so even if it turns out that P=NP, it may never be feasible to actually use such an algorithm for anything practical.
If a proof has 10,000 statements, we already couldn't generate it with an x^8 algorithm.
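Checking that arithmetic (assuming, say, 10^9 steps per second, an assumption of mine):

    steps = 10_000 ** 8                       # x^8 with x = 10,000 statements
    years = steps / 1e9 / (3600 * 24 * 365)   # at an assumed 10^9 steps/second
    print(f"{steps:.1e} steps ~ {years:.1e} years")   # ~1.0e+32 steps ~ 3.2e+15 years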
This seems to presuppose what it tries to prove.
"We see a similar “invisible fence” phenomenon in Leslie Valiant’s program of “accidental algorithms” . The latter are polynomial-time algorithms, often for planar graph problems, that exist for certain parameter values but not for others, for reasons that are utterly opaque if one doesn’t understand the strange cancellations that the algorithms exploit. A prototypical result is the following:
[Abstruse theorem elided, but it boils down to: there's this weird problem with a parameter k that can be proven to have polynomial-time solutions for some values of k and proven to be NP-hard for other values of k.]
Needless to say (because otherwise you would’ve heard!), in not one of these examples have the “P region” and the “NP-complete region” of parameter space been discovered to overlap."
Well, this is semi-popular technical writing directed at people who are not theory-of-computation researchers. If all scientific writing were written like this, then all papers would be at least hundreds of pages long and would still skip important details.
It's absolutely wonderful -- and I think vital -- that a researcher takes the time to do popular writing, but most scientific writing can't be popular. This kind of writing is usually reserved for surveys, like this one. Popular renderings of groundbreaking research also exist, in publications like Scientific American and in blog posts by researchers like Aaronson who enjoy (and are good at) communicating their findings this way, but the important details are often very much research-level and quite impenetrable to all but very specialized researchers.
There's nothing wrong with directing technical writing at people other than theory-of-computation researchers. Most people are not theory-of-computation researchers: not just laymen, but also statistics researchers, topology researchers, geometers, logicians, etc.
>If all scientific writing were written like this, then all papers would be at least hundreds of pages long and would still skip important details.
GP obviously means all scientific writing aimed at the same kind of audience.
Theory-of-computation researchers would obviously not need such a volume at all in the first place: they already know this stuff, and they can just directly read pure research on P vs NP, not some summary of what P vs NP means.
That said, all scientific writing (including writing aimed at, or produced by, experts in a field) could take a cue from the quality of the wording and the attention to clarity here. That doesn't mean it also has to include such a lengthy introduction to its subject.
In other words, it seems to me like we're splitting hairs, while the intended meaning was pretty clear from the GP's comment context.
In that case I agree. :)
Scott himself said that he had to do a lot of legwork to understand GCT. I can bet that this is the case for most theoreticians.
Complexity theory is a vast field; it's difficult to become acquainted with the disparate portions of the field. Surveys such as this are valuable for the field.
Edit: This reminds me of an assignment my Discrete Math professor, Dr. Fishel, gave me, which shows how to construct (in polynomial time) a Hamiltonian circuit in any undirected graph in which every vertex has degree at least ceiling(n/2), where n is the number of vertices in the graph. A C++ implementation is provided.
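The linked C++ implementation isn't reproduced in the thread, but a minimal Python sketch of the standard "rotation" construction behind Dirac's theorem (degree at least ceil(n/2) everywhere implies a Hamiltonian circuit) might look like the following. The function name and interface are my own, and it assumes the degree condition actually holds and n >= 3.

    def dirac_hamiltonian_cycle(n, edges):
        """Build a Hamiltonian circuit of a graph satisfying Dirac's condition
        (every vertex has degree >= ceil(n/2)), in O(n^3) time.

        n     -- number of vertices, labelled 0..n-1 (n >= 3)
        edges -- iterable of undirected edges (u, v)
        """
        adj = [set() for _ in range(n)]
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)

        cycle = list(range(n))                 # start from an arbitrary cyclic order
        while True:
            # find a "gap": consecutive vertices in the order that are not adjacent
            gap = next((i for i in range(n)
                        if cycle[(i + 1) % n] not in adj[cycle[i]]), None)
            if gap is None:
                return cycle                   # every consecutive pair is an edge
            # rotate so the gap sits between the last and the first position
            d = cycle[gap + 1:] + cycle[:gap + 1]
            # Dirac's condition guarantees a j with edges (d[n-1], d[j]) and (d[0], d[j+1])
            j = next(j for j in range(n - 1)
                     if d[j] in adj[d[n - 1]] and d[j + 1] in adj[d[0]])
            # reversing d[0..j] closes the gap and introduces only edges, so the
            # number of non-adjacent consecutive pairs strictly decreases
            cycle = d[j::-1] + d[j + 1:]

    # K3,3 has 6 vertices, each of degree 3 = ceil(6/2), so a circuit must exist:
    k33_edges = [(u, v) for u in (0, 1, 2) for v in (3, 4, 5)]
    print(dirac_hamiltonian_cycle(6, k33_edges))    # e.g. [4, 1, 5, 0, 3, 2]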
I like to describe GCT as “the string theory of computer science.” Like string theory, GCT has the aura of an intricate theoretical superstructure from the far future, impatiently being worked on today. Both have attracted interest partly because of “miraculous coincidences” (for string theory, these include anomaly cancellations and the prediction of gravitons; for GCT, exceptional properties of the permanent and determinant, and surprising algorithms to compute the multiplicities of irreps). Both have been described as deep, compelling, and even “the only game in town” (not surprisingly, a claim disputed by the fans of rival ideas!). And like with string theory, there are few parts of modern mathematics not known or believed to be relevant to GCT.
This is way over my head, but does anyone have any insight on that? Sounds intriguing.
For Mulmuley et al. (it's been a while since I was in the thick of this), it's finding an algorithm for computing a particular decomposition of a representation having to do with determinants and permanents. This would essentially show that the matrix permanent formula can't be written as the determinant of a small matrix (of smaller formulas in the variables), which is the "algebraic" analogue of P vs NP, and which Mulmuley et al. believe will pave a road toward P vs NP. Last I heard there was a serious blow to this program. (edit)
For Ryan Williams, it's a more direct approach to separating complexity classes such as ACC (a small circuit class) and NEXP (a large Turing machine class), where this is the most amazing explanatory technical document I have seen for almost anything. He also has this program whereby if you can find faster algorithms for boolean satisfiability (SAT) -- that is, faster exponential-time algorithms -- you get complexity class separations "for free."
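For a concrete sense of the permanent-versus-determinant contrast behind the GCT program, here is a small illustrative snippet (mine, not from the thread): the two quantities are the same sum over permutations except for a sign, yet the determinant has fast algorithms (e.g. Gaussian elimination) while computing the permanent is #P-hard (Valiant).

    from itertools import permutations

    def perm_sign(sigma):
        """Sign of a permutation: (-1)^(number of inversions)."""
        inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
                  if sigma[i] > sigma[j])
        return -1 if inv % 2 else 1

    def permanent_and_determinant(A):
        """Both are sums over permutations of prod_i A[i][sigma(i)]; they differ
        only in the sign attached to each term (exponential time by brute force)."""
        n = len(A)
        perm = det = 0
        for sigma in permutations(range(n)):
            term = 1
            for i in range(n):
                term *= A[i][sigma[i]]
            perm += term
            det += perm_sign(sigma) * term
        return perm, det

    print(permanent_and_determinant([[1, 2], [3, 4]]))   # (10, -2)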
"The one prediction I feel confident in making is that the idea of “ironic complexity theory”—i.e.,
of a profound duality between upper and lower bounds, where the way to prove that there’s no
fast algorithm for problem A is to discover fast algorithms for problems B, C, and D—is here to
With the help of a CTC you can solve any problem in class NP in polynomial time; in fact you can do better and solve anything in PSPACE in polynomial time; and the capabilities of quantum and classical computers are the same.
(The last of those is a result due to Scott Aaronson -- author of the survey article in the OP -- and John Watrous.)