A surprising amount of this post's verbiage is devoted to responding to/shaming a particular person who used to comment frequently on Scott's blog, and who thinks it is not safe to assume that P!=NP. I find this off-putting. But sometimes people engage in mini-crusades on the internet... the interested reader can read the prequel and subsequent exchanges between Scott and Lubos throughout the comment sections of a number of blogs.
AFAIK Lubos has never apologized (not that I ever really gave a lot of weight to his opinion).
Could you give me a time stamp for that? When I originally watched that video I didn't catch that, and I can't find it now going back through. I thought you were just describing the "rabbithole" and what going down and coming up taught you. So I thought you clearly understand much more about QM than most software people, but also said blatantly wrong and unsubstantiated things.
And I think you are also missing Motl's final point. He gets the core idea of what you were trying to do, but thought that it made a poor presentation.
> At that moment, I became totally unable to say which of his statements he made are meant seriously and which of them are ironic or statements he wants to disprove. I suppose he essentially understands those things, he just encapsulates them in a completely confused story.
5:20 "I will show you how that story can't possibly be true."
25:10 "So that is the end of step 1, I am now going to go on to step 2 and show you why the story that I have just told you can't possibly be true."
28:13 "The story I have told you up until now is in fact wrong." (And you should listen for another minute to the answer to the subsequent question.)
I honestly can't imagine how I could possibly have made it any clearer. If you have a constructive suggestion I'm all ears.
I would like to give constructive criticism, but I feel I still don't understand exactly what you are trying to say in the presentation.
As it stands the message you seem to be trying to convey in the presentation is that the Copenhagen interpretation leads to a contradiction. This comes from having the "clearly false result" followed by the "Copenhagen is untenable" later. But it seems like from "The thing I am about to tell you is clearly false, but it seems to follow logically from the things they tell you in quantum mechanics 101. The trick is to figure out where the flaw in the reasoning lies" that your true belief is that there is a flaw in the logic instead which changes the meaning of the presentation dramatically.
So is your intent that everything after 28:00ish is completely accurate? Because that is the sense I get from it, and I think there are issues after that point.
At 43:19, you say that the Copenhagen interpretation is scientifically untenable. My understanding is that all the different accepted interpretations are mathematically equivalent, so if you are trying to say something true, "scientifically untenable" is a very poor choice of words.
Actually, what I said in the presentation is more accurate than my off-the-cuff paraphrase in the comment above. There is no flaw in the reasoning. The reasoning is correct, and I say that in the presentation too. It is the premises that are false.
> your true belief is that there is a flaw in the logic
No. See above.
> My understanding is that all the different accepted interpretations are mathematically equivalent
Yes, that's true. The part of the Copenhagen interpretation that is scientifically untenable is the non-mathematical part, the part that says that there is this physical phenomenon called the collapse of the wave function which is non-unitary and outside the scope of the theory, but which causes real measurable effects (i.e. the destruction of interference). That part is demonstrably false, notwithstanding that it is a fair approximation to the truth in many interesting cases, which is the reason that it has survived as long as it has despite being demonstrably false.
What is also false (and the reason I have such a bee in my bonnet about all this) is all the pedagogical baggage associated with the Copenhagen interpretation, how it's intractably weird, and no one understands it, and you shouldn't even try to understand it because you'll "go down the drain" (Feynman's words). That's all bullshit. QM is not hard to understand. The key is that measurement is just large-scale entanglement, and there are no particles. That's it. That's the big reveal. This has been known for decades. I learned it 17 years ago and it was considered old news even then. There's some evidence that Heisenberg knew it but didn't want to advance the hypothesis because there was no way to settle the question experimentally at the time. But since Bell and Aspect and Zurek (and Cerf and Adami) there is no question that it is true. It is as well established a result as any in science, and has been for decades. But for some unfathomable reason it's still controversial.
In that case, I think your presentation was accurate to what you wanted to convey, but I disagree that the Copenhagen interpretation has a major problem.
It is perhaps harder to get a good grasp of than another interpretation, but the really hard parts of QM are in the math.
>> My understanding is that all the different accepted interpretations are mathematically equivalent
> Yes, that's true. The part of the Copenhagen interpretation that is scientifically untenable is the non-mathematical part, the part that says that there is this physical phenomenon called the collapse of the wave function which is non-unitary and outside the scope of the theory, but which causes real measurable effects (i.e. the destruction of interference).
I was not quite clear. I meant that all of the predictions are identical (thus leading to the math calculating them being the same). If that is not true, either I have been terribly misled or you have a groundbreaking result in QM.
> That's all bullshit. QM is not hard to understand.
The really hard part about understanding quantum mechanics is dealing with the underlying math. It is not intuitive. Complex numbers tend to make an intuitive understanding hard. Properly understanding the Quantum Fourier Transform is not easy, and that doesn't even use measurement. I never touched the Copenhagen interpretation in my study of the QFT and it was still hard.
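To make the point concrete, here's a minimal sketch (my own illustration, not from the talk) of the QFT as nothing more than a unitary matrix with entries w^(jk)/sqrt(N). The matrix is tiny to write down; the difficulty is in internalizing what it does to amplitudes, and none of it involves measurement or any interpretation:

```python
# Sketch: the n-qubit QFT is the unitary matrix F[j][k] = w^(jk) / sqrt(N),
# with w = exp(2*pi*i/N) and N = 2^n. Easy to state, hard to intuit.
import cmath
import math

def qft_matrix(n_qubits):
    N = 2 ** n_qubits
    w = cmath.exp(2j * math.pi / N)
    return [[w ** (j * k) / math.sqrt(N) for k in range(N)] for j in range(N)]

F = qft_matrix(2)
N = len(F)

# Unitarity check: rows are orthonormal (F * F^dagger = I, up to float error)
for i in range(N):
    for j in range(N):
        s = sum(F[i][k] * F[j][k].conjugate() for k in range(N))
        assert abs(s - (1 if i == j else 0)) < 1e-9

print("QFT matrix is unitary")
```

Note that the interpretation question never enters: the whole object is just linear algebra over complex numbers, which is exactly where the difficulty lives.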
> I meant that all of the predictions are identical
That depends on what you mean. Formally, yes, all the predictions are identical. But informally, the Copenhagen interpretation predicts that the EPRG experiment should yield FTL communications.
> If that is not true, either I have been terribly mislead or you have a groundbreaking result in QM.
I don't know how groundbreaking they are, but QIT has led me to profound insights that Copenhagen could not, e.g.  .
It also leads to interesting questions, for example: on QIT it is immediately obvious that it is not possible to undo an entanglement without bringing the entangled particles physically back together again (because if this were possible that would lead to FTL). So it is a deep mystery why the two-slit experiment works at all, because every photon must be entangled at a minimum with the particle that emitted it. I don't know, but I suspect that if you pursued this line of thought it would lead to some insight about how lasers work.
Richard Feynman would be the poster child for this point of view were he alive today. Freeman Dyson defended it to me the last time I spoke to him, but that was 20 years ago so maybe he's come around.
According to a poll taken in 2013  Copenhagen was the most popular interpretation among physicists.
Sean Carroll apparently thinks Copenhagen is tenable  along with many of his commenters.
Here's the corresponding paper if you don't want to sit through the video. The puzzle is presented in section 3:
"Anyway, my goal here was never to convince Luboš. I was writing, not for him, but for my other readers: especially for those genuinely unfamiliar with these interesting issues, or intimidated by Luboš’s air of certainty. I felt like I owed it to them to set out, clearly and forcefully, certain facts that all complexity theorists have encountered in their research, but that we hardly ever bother to articulate."
Perhaps they needed to be shamed? Who knows. But it is transparent...
At one point in his post, Luboš actually compares computer scientists who find P≠NP a plausible working hypothesis to his even greater nemesis: the “climate cataclysmic crackpots.” (Strangely, he forgot to compare us to feminists, Communists, Muslim terrorists, or loop quantum gravity theorists.) Even though the P versus NP and global warming issues might not seem closely linked, part of me is thrilled that Luboš has connected them as he has. If, after seeing this ex-physicist’s “thought process” laid bare on the P versus NP problem—how his arrogance and incuriosity lead him to stake out a laughably-absurd position; how his vanity then causes him to double down after his errors are exposed—if, after seeing this, a single person is led to question Lubošian epistemology more generally, then my efforts will not have been in vain.
Anyway, now that I’ve finally unmasked Luboš
Shaming is to make one "embarrassed or guilty because of one's actions, characteristics, or associations."
He's literally holding up this guy's character, "vanity, arrogance, incuriosity, laughably-absurd..."
I'm not sure what those characteristics have to do with the P=NP problem.
I'm not saying, perhaps, this guy ought not to feel shame for his prior comments. Possibly he should. But this article is a shaming article. It isn't absolved of that because someone else started it.
Perhaps using other words might improve the light:heat ratio here. And even then the fact merits consideration that, in some cases, ridicule is earned.
> ridicule is earned
Your comment only emphasises part of what I've already said.
Of course, he may not. A person who deploys ridicule of another's character (over, say, ridicule of the idea that P=NP) should be aware that this itself has moral downsides and is open to fair criticism.
And I think you may too casually gloss over the point around connotations; a cursory review of the history of privilege rhetoric would suggest there's more substance there than might initially seem to be, and I think this case is parallel.
But discussing whether this is shaming has no more to do with P=NP than the phrases you quote. On that, I think Aaronson has a point, but I don't think there is much to be gained through speculating about the possibilities. If you work in security, you should be aware that at some undetermined point in the future, anything dependent on P≠NP may no longer be dependable.
Yes, and? I'm writing comments about the article, not about P=NP. Since the article contains ridicule, comments about that seem well justified.
I'm not an expert on complexity theory, nor have I claimed to be. The context of discussion in a comment section on an article is the article, not its purported title.
I haven't criticized the article, and yet people seem to be acting as-if I have. I've only described what it contains. I have made no judgement about it.
> As there is clearly no possibility of inducing shame, then it seems unlikely that this was intended to do so, and so is not shaming
I think you know how bad that argument is. The article contains what it contains.
> I think you know how bad that argument is.
Intent is an essential element of shaming. If you have an argument to the contrary, you would be more persuasive if you were to state it, and not simply insinuate that there is one.
See, for example, how every major comedian speaks about Trump. That is ridicule and shaming of him, of his personality and character. Yet, he's mostly shameless.
I would guess that actually the people who display shame the least (or seem least self-aware about when it is appropriate to display shame, etc.) are those most likely to incite shaming in others.
Don't you get angry at someone laughing (in ridicule) at you? Doesn't this provoke, sometimes, a brooding preoccupation with ridiculing them back? Is it likely that would be effective?
When people are preoccupied in this way, they are dealing with their own internal emotional state (anger, resentment, etc.) -- they aren't thinking, "well this will be effective".
By way of example: all of late-night TV.
If you think intent matters, then this is the wrong way round; whether it is a credible attempt to induce shame is a valid consideration. If you don't think intent matters, then it is beside the point. As you know, I do think intent matters, and when you take the exchange as a whole, I don't see it.
> ... (2) written with that effect...
It seems highly unlikely to have had that effect.
> ...and most importantly, (3) written with that technique.
This is where we differ, as I consider this to be the least important concern, well after intent and effect. If shame was neither intended nor produced, where is the shame to justify calling it shaming? Why should one be concerned about it?
The rest of your post does not appear to be applicable to this case, which, in my opinion, is comprised of the whole exchange, and not just TFA.
If you're saying it isn't a mutually ridiculing discussion, rather, it's taking place in a context of affected ridicule for the amusement of its interlocutors -- fine.
I don't know the context, if that is it, then I am wrong.
However from the author's comments here about how obnoxious the other guy is, I believe my view is more likely to be correct.
You can infer intent from style. You can infer intent from anything; we all do it constantly as a requisite part of ordinary human interaction, and it is often very successful.
"Intent" isn't some radically internal part of an inaccessible void; it's the motivator for your actions, which is often trivial to infer in a given context. As with anything, inferences may be wrong -- but this does not make them ordinarily so, impossible, or generally suspicious.
I think intent matters to (1) and (2); it doesn't matter to (3). The claim I was defending is that this article is (somewhat) shaming, and that claim can be interpreted as any combination of 1/2/3. Personally my view is that it is all of them (possibly, justifiably so).
The dispute is long since forgotten, but the bickering is still there, and it's confusing and pointless to people who weren't around when it happened.
1. X < 1
2. 49 < X < 51
3. X > 99
If we assume that each of these statements has a 50% probability of being true, then we trivially get a contradiction: since the statements are mutually exclusive, their probabilities are additive, and the probability of the union of the three comes out to 150%.
So the idea that a 50% chance of truth represents a state of no knowledge about the truth of a statement is just wrong. If we actually knew that any one of the above statements had a 50% probability of being true, we would clearly have more information about X than we would otherwise. There is no sense in which a 50% probability of truth is a useful default.
Similarly, if there are N possibilities for X, and no prior knowledge about X, then each possibility gets probability 1/N.
The key is what prior knowledge you bring to the table.
But your scenario isn't "no prior knowledge". In your scenario, you know X is a real number between 0 and 100, which is prior knowledge. And so the correct thing to do is to use that prior knowledge to assign probabilities, which gives you 1%, 2% and 1% for each of your propositions, not 50%.
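To spell out where the 1%, 2% and 1% come from: under a uniform prior on [0, 100], each proposition's probability is just the length of the interval it picks out, divided by 100. A quick sketch (the helper name is my own):

```python
# Sketch: probabilities of interval propositions for X ~ Uniform(0, 100).
def prob(lo, hi, a=0.0, b=100.0):
    """P(lo < X < hi) for X uniform on (a, b): overlap length / total length."""
    overlap = max(0.0, min(hi, b) - max(lo, a))
    return overlap / (b - a)

p1 = prob(0, 1)      # X < 1     -> 0.01
p2 = prob(49, 51)    # 49 < X < 51 -> 0.02
p3 = prob(99, 100)   # X > 99    -> 0.01
print(p1, p2, p3)    # sums to ~4%, so no contradiction with additivity
```

The three probabilities now sum to about 4% rather than 150%, so additivity over mutually exclusive events is respected.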
I don't agree. Suppose I carry out this process:
Step 1) Generate a random histogram over the bounded interval
Step 2) Pick a random number according to the distribution that histogram represents
If I repeat that process a large number of times (say a few million), what will be the distribution of the result? It will be approximately uniform, and the more times I repeat the process the closer to uniform I get. So the uniform distribution actually is special, as the average case of randomly chosen distributions, and hence why it should act as the default when the only thing we know about the distribution is an upper and lower bound.
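A quick simulation of that two-step process (with one assumption made explicit: I'm taking "a random histogram" to mean independent random bin weights, normalized -- other choices of randomness are possible):

```python
# Sketch: step 1 draws a random histogram over a bounded interval,
# step 2 samples a bin from it. Repeated many times, the aggregate
# bin frequencies come out approximately uniform.
import random

BINS = 10          # discretize the interval into 10 bins
TRIALS = 200_000

counts = [0] * BINS
for _ in range(TRIALS):
    # Step 1: a random histogram (i.i.d. uniform weights, then normalized)
    weights = [random.random() for _ in range(BINS)]
    total = sum(weights)
    # Step 2: sample a bin according to that histogram
    r = random.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            counts[i] += 1
            break
    else:
        counts[-1] += 1  # guard against float rounding at the boundary

# Each bin's frequency should be close to 1/BINS = 0.1
print([round(c / TRIALS, 3) for c in counts])
```

By symmetry, the marginal probability of each bin is exactly 1/BINS, which is the "averaging over random distributions gives uniform" point.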
Edit: In any case, the process you've described is just begging the question (i.e. it is circular reasoning). In order to prove that the uniform distribution is a natural choice for X, you assume that choosing uniformly from the set of all possible distributions is natural. Even if there was a finite number of possible distributions to choose from such that choosing randomly from that set according to a uniform distribution was a well-defined operation, you're still assuming that it's natural to assign equal probability to all options in order to prove that it's natural to assign equal probability to all options.
So, is this X actually an arbitrary real number? Because real numbers are not known to exist in the physical world. In the physical world, we will take some measurement to a finite number of digits, and also produce an estimate of the error in that measurement. The measurement and the error bounds are all described by rational numbers, of which there are countably many. We don't know whether the set of possible "real values" (as opposed to measurements – if that concept is even meaningful) is actually uncountable – for all we know, all the "real values" could be rational, or could be drawn from some countable subset of the reals. So, if we are dealing with some kind of measured physical quantity, we actually know that the interval [0,100] is countable. We also know that the value could only possibly be measured to some finite degree of accuracy, so we actually know that the interval [0,100] is a finite set.
> In any case, the process you've described is just begging the question (i.e. it is circular reasoning).
Maybe it is just an axiom? At some point we just have to accept some things as axiomatic, so why not accept that? It can't be used to derive a contradiction, and it feels self-evident to me (even if not to you), so I feel justified in accepting it as such.
Maybe we might be able to show that agents adopting the uniform axiom make better (more rewarding) decisions in practice than those who adopt a competing axiom do?
I don't have a proof, but my gut says yes.
Okay... what's your basis for this? Uniform prior is how Bayesians do it, even in unbounded intervals. It's especially natural when you have a bounded interval.
Again, the whole point is that you are not making use of any other prior information except for the interval bounds.
The whole argument though was that you are not using prior information. And so you cannot assign prior probabilities in a more intelligent manner than uniform.
If you do use prior information -- like, say, that lots of people have tried to prove P=NP and have not succeeded -- you can assign a higher probability to P/=NP.
But that's only if you assume that people trying to prove P=NP and not succeeding makes it more likely that P/=NP is true. It is an assumption that you have to make explicit, and this is the assumption that is being argued for in the OP article.
The thing is that without having priors, you can't do better than uniform.
For example, for travelling salesman and vehicle routing there are many instances with known optimum.
Even for those with unknown optimum, tight bounds are easily achieved.
It is certainly possible that P=NP, but it will involve basically starting most of the last 30 years of theoretical computer science again.
You could pretty easily write a parody of this article focused on taking square roots of negative numbers. For generations it looked obvious that no matter which number we tried to square, it would never result in a negative number. You can see the same sort of electric fence that can’t just be a coincidence: we can find the square root of zero, but even the slightest bit less than zero and suddenly there is no square root!
Obviously this sounds silly, because we know that mathematicians invented a new tool: imaginary numbers (a name apparently given mockingly by critical mathematicians), which can be used not only to find square roots of negative numbers, but also to solve problems that don’t obviously involve square roots of negative numbers.
P≠NP is not like that.
If you view mathematics as the exploration of progressively stranger objects, that there's such a gap between discovering two roots should make us cautious of discounting things after a 100 year search.
We don't know what we don't know.
It was something like 200 years between the discovery of sqrt(-1) and more-or-less universal acceptance of -1 as a number in the first place (yes, imaginary numbers predate the universal acceptance of negative numbers). European mathematics in particular favored geometric interpretations of numbers over algebraic interpretations until relatively recently, and while irrational numbers come about very easily in geometry, negative numbers don't have an immediately obvious interpretation, particularly if you exclude directionality as a convention.
And while there was some numerical evidence that the Polya conjecture was true, there really wasn't all that much of it. The first counterexample is around 906 million -- you could find it yourself if you wanted.
Proving P=NP doesn't require finding a polynomial time algorithm to solve NP-complete problems. Finding such an algorithm is a much greater challenge than proving that one can exist.
He has it all: extreme levels of arrogance, narcissism, pedantry, absolute lack of empathy, extreme aggressiveness. He is also a notable science denialist: climate change denier, evolution denier...
Like how there are theories that begin "assuming the reimann hypothesis|goldbach conjecture is correct.."
Disclaimer: it is my intended inference to show that P=NP.
But fair point, that is usually why I say such confidence in those technologies is disconcerting
Is this the crux of the post?
That seems like an extremely crap reason to believe a mathematical theorem. What am I missing?
I also don't find his heuristic convincing -- to the point his description is making me question if P=NP is actually possible after all: a close complex border that seems to yield increasing numbers of "close calls" sounds exactly like the situation where there's a lurking, sparse class of "touches" that we haven't thought of, particularly when under 100 years of research has been put into the topic.
What's the alternative?
As I see it, the possible ways to think about it are as follows:
1) You must think of it as 50/50 possibility of being true until an absolute proof is found.
2) You look at places where it could fail, and if it passes then you update the likelihood based on that.
To me (and as this post argues) it makes sense to follow (2). Not only is it more useful practically (you can use that assumption to solve other things, conditional on it being true) but also it reflects the reality of the world. Even if it turns out not to be true in every case it is clear that in the majority of cases it is true.
Maybe it is untrue but it is possible to define the set of cases where it is untrue. If this is the case, it seems likely that class is very small, so it is mostly true. That is the opposite of a proof of course, but it does seem reasonable to think about your confidence of it being true as being somewhat proportional to the likely size of the class.
close complex border that seems to yield increasing numbers of "close calls" sounds exactly like the situation where there's a lurking, sparse class of "touches" that we haven't thought of
That's not really how it works, though. If it keeps happening and those classes don't touch, then it typically means they aren't as close as they seem (i.e., there is an extra dimension in which they are a long way apart). In this case it seems like the P/NP divide is actually a real thing, which means things that appear close actually aren't; if you analyze them in that space they don't appear close together at all.
Any NP-complete problem reduces to any other, so if P=NP for any of them then it's true in general. Some more efficiently than others, of course.
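As a toy illustration of how a reduction transfers a solver from one problem to another (this particular pair, CLIQUE to INDEPENDENT SET via graph complement, is the textbook example; the function names are my own):

```python
# Sketch: a k-clique in G is exactly a k-independent-set in G's complement.
# The reduction itself runs in polynomial time, so a poly-time solver for
# one problem yields a poly-time solver for the other.
from itertools import combinations

def complement(n, edges):
    """Edge set of the complement of an n-vertex graph."""
    edges = set(map(frozenset, edges))
    return {frozenset(e) for e in combinations(range(n), 2)} - edges

def has_independent_set(n, edges, k):
    """Brute-force stand-in for an IS solver (exponential, of course)."""
    edges = set(map(frozenset, edges))
    return any(
        all(frozenset(p) not in edges for p in combinations(s, 2))
        for s in combinations(range(n), k)
    )

def has_clique(n, edges, k):
    # Polynomial-time transformation, then one call to the IS solver
    return has_independent_set(n, complement(n, edges), k)

# Triangle on {0,1,2} plus an isolated vertex 3: contains a 3-clique.
print(has_clique(4, [(0, 1), (0, 2), (1, 2)], 3))  # True
```

Of course the brute-force inner solver here is exponential; the point is only that the *reduction* is cheap, which is why a single polynomial algorithm for any NP-complete problem would collapse the whole class.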
I guess my only response would be that some of the complexity classes in "some more efficiently than others, of course" might be impractical even if they are theoretically possible.
Not really relevant here, though.
When we find a single bridge, all the complexity classes change, so we can only really divide the problems here into "ones we've found polynomial time solutions for" and "ones we haven't".
There may be something interesting about the ones we haven't -- or we just didn't think of something obvious/a lot of special cases with high-degree polynomials.
It's worth mentioning the near collisions happen in the low-dimensionality region of the border -- which doesn't have the same separating forecast, as it becomes easier to touch in higher dimensions. Think of unit spheres packed into the corners of a cube: in low dimensions, the central sphere touching them lies inside the cube; in higher dimensions, its surface bulges past the sides. You just don't see it until around the 10th dimension. (Similarly, knot theory explodes with complexity in higher dimensions.)
Taking a "scientific approach" to asymptotic problems just makes the CS people advocating it seem vaguely hacky, since they're not even advancing a proper density argument -- it's just such a low-dimensional, poorly bounded argument that it's clear they didn't make a real calculation, they tried to dress up personal feelings in science.
Consider this. You have to bet on either P=NP or P≠NP. Where do you put your money. That is the question we are trying to answer.
His description of "passing tests" actually sounds like the buildup of stress before breakage, where many near-faults are generated.
I'm less confident of the resolution post reading that -- so it seems extremely weird to present as evidence that P != NP, as it shifted my beliefs the exact opposite way.
It's not even anything close: the gambler's fallacy requires that the events you're trying to relate are uncorrelated.
The logical structure of P vs NP is anything but uncorrelated along its surface, and a rough surface along the low-dimensionality region is a good indication of a rough surface along the higher-dimensionality regions.
So -- the fallacy requires you be trying to correlate uncorrelated things; I am not. I'm correlating things that are going to be correlated because of how the spaces are constructed.
Is it? This strikes me as pretty contentious.
Clearly there's a reason you think that isn't true -- perhaps you could share it instead of hiding behind just calling it "contentious".
It seems the stronger claim to suppose that the lower dimensions are rough and the top are smooth, especially when you think about where that roughness comes from.
Let's imagine a class "UP" which is the useful polynomial algorithms -- things we might actually compute.
The arguments raised here are a heuristic for UP != NP, which they certainly support. My objection is that not only does that argument for UP != NP not support P != NP, it actually is evidence for P = NP, because UP seems to get asymptotically close to NP, and we know there are portions of P outside of UP -- suggesting we can find a "high" P region that sits atop UP and gets nudged over the line as UP approaches NP.
The suggestion that UP approaches NP but there's no P that crosses bears some supporting. Our disagreement seems to be over the obviousness of that relationship, and how much UP resembles P as a whole.
There are (apparently) so many near misses for which it isn't clear why they should be near misses. A great explanation for this near missing is the underlying fact that all NP problems contain some 'brute force' core that can never be polynomial.
Also note that the blog post does not at all concern your class UP. None of these are algorithms designed to be run. They are all about exploring abstract complexity classes.
You're correct that stormtroopers will never hit, wrong that stormtroopers are representative of the empire, and wrong about the correlation between stormtrooper hits and empire hits, because you're ignoring higher order effects (e.g., while stormtroopers always miss, the empire must hit something to be the "bad guy" -- and they do: Luke's parents, Alderaan, Obi-Wan, Hoth, Luke's arm, etc.)
But purely from the existence of stormtroopers consistently missing, you can infer there's a Darth Vader that must score some hits, or the plot wouldn't work. I think we might be in a similar case for P ?= NP, and you're all way too blasé predicting from stormtroopers.
Very little research has been done on P - UP, where UP is all polynomials with degree less than 10,000 and coefficients less than 10^100.
In fact, I don't know a single P-UP algorithm that wasn't explicitly constructed to be -- do you?
I just think most of us are subtly talking about UP, not P. (Including the blog post.) And that you've incorrectly inferred a "plot" when all you've observed is "redshirt fire".
You keep assuming we only look at UP, whereas actually our theoretical work explicitly looks at P. Often, we don't even care about actual efficiency of the algorithm. The constants might be astronomical, but we still like the algorithm for its asymptotic properties.
I'm not versed in recent research, but I imagine the composition argument above often leads to polynomial performance with stupid degrees. I expect people only compute the actual degree when they think it might be low.
Again though, this is not my field of research.
I think it’s advisable to keep an open mind on the issue, and not simply on the actual question itself; it would not be surprising to me if the solution comes from something like adopting a different framework.
For example, time bounds and complexity classes smell a lot like conservation laws: “transforming such and such arrangement into such and such arrangement requires at least this much computation.”
That said, it’s possible (a) we’re currently failing to consider some “term”, (b) generally ignoring this term doesn’t cause problems when proving lower bounds but (c) the complex border and your “sparse touches” correspond to situations where the missing term plays a more significant role.
That’s a case where P probably isn’t equal to NP but keeping an open mind at least leads in more interesting directions, imho.
I also think people don’t take seriously the possibility of P being equal to NP but with intrinsically high degree. I say this not to be cute—“what if p is np but still de-facto intractable?”—but because I don’t think anyone has a great intuition for, say, what kinds of algorithms have polynomial solutions of minimum degree, say, 8...at least not in the same way we have good intuition for which algorithms are linear, nlogn, n^2, n^3, and so on.
It’s hard for me, at least, to feel overly confident in the “we’ve been working on finding a fast algorithm for seventy years and gotten nowhere” when our algorithmic intuition vis-a-vis higher polynomial degree seems so under-developed.
Even if you don’t consider it likely that p equals np, you can still follow this line of thought and consider the possibility that these “sparse touches” may be cases where the slippery problems like graph isomorphism (etc) correspond to problems with polynomial running times of (unexpectedly) high degree...and thus we keep finding these sparse touches along a seemingly-complex border because we don’t yet have a solid intuition for the capabilities of polynomial algorithms of high degree.
And so on and so forth. P probably isn’t NP but being dogmatic about it is neither fun nor interesting.
His argument seems to be "well, the border doesn't touch using under degree 100 polynomials, so it must never touch!"
I'm disinclined to believe that order 10^374738393874 polynomials can be accurately forecast by order under 100 polynomials, and suspect that portion of the border simply hasn't been examined at all or only in the most basic cases.
To summarize the argument in more general terms: the problems in P are those where you can derive some global property from incremental local decisions. For example, building the shortest path between two nodes in a graph can be done by always picking the closest node. By contrast, NP-hard problems have the property that incremental local decisions don't suffice: you might need to recolor an entire optimally-colored subgraph to admit a new node.
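That greedy local-decision structure can be sketched concretely with Dijkstra's algorithm (a standard sketch of mine, not from the thread; the graph encoding is an assumption): each step commits to the closest unsettled node, and with non-negative weights that local choice is globally safe -- which is exactly what NP-hard problems lack.

```python
# Sketch: Dijkstra's shortest path -- a greedy algorithm whose local
# choice ("settle the closest node next") is provably globally optimal,
# assuming non-negative edge weights.
import heapq

def dijkstra(graph, src, dst):
    """graph: {node: [(neighbor, weight), ...]}. Returns distance or None."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d          # first time dst pops, its distance is final
        if d > dist.get(u, float("inf")):
            continue          # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return None               # dst unreachable

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a", "c"))  # 3
```

For an NP-hard problem like graph coloring there is no analogous rule: no locally-checkable choice is guaranteed to extend to a global optimum, which is the contrast the paragraph above draws.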
Furthermore, we know from research that there tends to be a very sharp transition from P to NP-hard in terms of transformation, where relaxing a single condition goes straight from P to NP-hard without any intermediate "we don't know" space. This tends to suggest that it's not really so much a question of problems being P or NP, but rather of instances being intrinsically easy or hard.
The question of P=NP then is really about whether these hard instances are really exponentially hard, or can we embed them into a simpler instance using some combinatorial construct that merely makes it look exponentially hard. So it's still possible for P=NP, but the practical question of "will we get an efficient, guaranteed for all instances, solution to these problems?" is answered in the negative. When you do the combinatorial embeddings to make the polynomial time algorithms work, you end up getting constants that look like 3^2^2^4, and there's no way to shrink those constants to practical ones.
But it's absolutely insane to take the impossibility of low-constant, low-degree embedding as evidence of a general lack, given the rich structure of "large" numbers.
Which is what it's being used as in this post.
(On a side note, I love this kind of scientific approach to believing unproved theorems, and Polya has a lot of thoughts on the topic.)