
The Scientific Case for P≠NP - apsec112
https://www.scottaaronson.com/blog/?p=1720
======
mixedmath
This is from 2014. In this post, Scott describes various lines of reasoning
that lead one to suspect that P!=NP.

A surprising amount of this post's verbiage is devoted to responding
to/shaming a particular person who used to comment frequently on Scott's blog,
and who thinks it is not safe to assume that P!=NP. I find this off-putting.
But sometimes people engage in mini-crusades on the internet... the interested
reader can read the prequel and subsequent exchanges between Scott and Lubos
throughout comment sections of a number of blogs.

~~~
lisper
I can personally attest to the truth of Scott's characterization of Lubos
Motl's personality. Lubos once publicly called me a "category 5 loon" [1] for
something I said in a presentation on quantum mechanics [2]. What he didn't
seem to realize was that the thing he was attacking me for was presented as a
straw man. There's a lengthy introduction where I said, "The thing I am about
to tell you is clearly false, but it seems to follow logically from the things
they tell you in quantum mechanics 101. The trick is to figure out where the
flaw in the reasoning lies." (And BTW, I can still fool card-carrying
physicists with this puzzle even today.)

AFAIK Lubos has never apologized (not that I ever really gave a lot of weight
to his opinion).

[1] [https://motls.blogspot.com/2012/10/evading-quantum-
mechanics...](https://motls.blogspot.com/2012/10/evading-quantum-mechanics-
again.html)

[2]
[https://www.youtube.com/watch?v=dEaecUuEqfc](https://www.youtube.com/watch?v=dEaecUuEqfc)

~~~
thethirdone
> What he didn't seem to realize was that the thing he was attacking me for
> was presented as a straw man. There's a lengthy introduction where I said,
> "The thing I am about to tell you is clearly false, but it seems to follow
> logically from the things they tell you in quantum mechanics 101.

Could you give me a time stamp for that? When I originally watched that video
I didn't catch that, and I can't find it now going back through. I thought you
were just describing the "rabbithole" and what going down and coming up taught
you. So I thought you clearly understood much more about QM than most software
people, but had also said blatantly wrong and unsubstantiated things.

And I think you are also missing Motl's final point. He gets the core idea of
what you were trying to do, but thought that it made a poor presentation.

> At that moment, I became totally unable to say which of his statements he
> made are meant seriously and which of them are ironic or statements he wants
> to disprove. I suppose he essentially understands those things, he just
> encapsulates them in a completely confused story.

~~~
lisper
> Could you give me a time stamp for that?

5:20 "I will show you how that story can't possibly be true."

25:10 "So that is the end of step 1, I am now going to go on to step 2 and
show you why the story that I have just told you can't possibly be true."

28:13 "The story I have told you up until now is in fact wrong." (And you
should listen for another minute to the answer to the subsequent question.)

I honestly can't imagine how I could possibly have made it any clearer. If you
have a constructive suggestion I'm all ears.

~~~
thethirdone
I thought your original quote was a direct quote, which is why I was confused.
If it had been a direct quote I likely would have had a very different
interpretation of the presentation.

I would like to give constructive criticism, but I feel I still don't
understand exactly what you are trying to say in the presentation.

As it stands the message you seem to be trying to convey in the presentation
is that the Copenhagen interpretation leads to a contradiction. This comes
from having the "clearly false result" followed by the "Copenhagen is
untenable" later. But it seems like from "The thing I am about to tell you is
clearly false, but it seems to follow logically from the things they tell you
in quantum mechanics 101. The trick is to figure out where the flaw in the
reasoning lies" that your true belief is that there is a flaw in the logic
instead, which changes the meaning of the presentation dramatically.

So is your intent that everything after 28:00ish is completely accurate?
Because that is the sense I get from it, and I think there are issues after
that point.

At 43:19, you say that the Copenhagen interpretation is scientifically
untenable. My understanding is that all the different accepted interpretations
are mathematically equivalent so if you are trying to say something true
"scientifically untenable" is a very poor choice of words.

~~~
lisper
> I thought your original quote was direct quote

Actually, what I said in the presentation is more accurate than my off-the-cuff
paraphrase in the above comment. There is no flaw in the reasoning. The
reasoning is correct, and I say that in the presentation too. It is the
premises that are false.

> your true belief is that there is a flaw in the logic

No. See above.

> My understanding is that all the different accepted interpretations are
> mathematically equivalent

Yes, that's true. The part of the Copenhagen interpretation that is
scientifically untenable is the non-mathematical part, the part that says that
there is this physical phenomenon called the collapse of the wave function
which is non-unitary and outside the scope of the theory, but which causes
real measurable effects (i.e. the destruction of interference). That part is
demonstrably false, notwithstanding that it is a fair approximation to the
truth in many interesting cases, which is the reason that it has survived as
long as it has despite being demonstrably false.

What is also false (and the reason I have such a bee in my bonnet about all
this) is all the pedagogical baggage associated with the Copenhagen
interpretation, how it's intractably weird, and no one understands it, and you
shouldn't even try to understand it because you'll "go down the drain"
(Feynman's words). That's all bullshit. QM is not hard to understand. The key
is that measurement is just large-scale entanglement, and there are no
particles. That's it. That's the big reveal. This has been known for _decades_.
I learned it 17 years ago and it was considered old news even then. There's
some evidence that Heisenberg knew it but didn't want to advance the
hypothesis because there was no way to settle the question experimentally at
the time. But since Bell and Aspect and Zurek (and Cerf and Adami) there is no
question that it is true. It is as well established a result as any in
science, and has been for decades. But for some unfathomable reason it's still
controversial.

~~~
thethirdone
> Actually, what I actually said is more accurate than my off-the-cuff
> paraphrase in the above comment. There is no flaw in the reasoning. The
> reasoning is correct, and I say that in the presentation too. It is the
> premises that are false.

In that case, I think your presentation was accurate to what you wanted to
convey, but I disagree that the Copenhagen interpretation has a major problem.

It is perhaps harder to get a good grasp of than another interpretation, but
the really hard parts of QM are in the math.

>> My understanding is that all the different accepted interpretations are
mathematically equivalent

> Yes, that's true. The part of the Copenhagen interpretation that is
> scientifically untenable is the non-mathematical part, the part that says
> that there is this physical phenomenon called the collapse of the wave
> function which is non-unitary and outside the scope of the theory, but which
> causes real measurable effects (i.e. the destruction of interference).

I was not quite clear. I meant that all of the predictions are identical (thus
leading to the math calculating them being the same). If that is not true,
either I have been terribly misled or you have a groundbreaking result in QM.

> That's all bullshit. QM is not hard to understand.

The really hard part about understanding quantum mechanics is dealing with the
underlying math. It is not intuitive. Complex numbers tend to make an
intuitive understanding hard. Properly understanding the Quantum Fourier
Transform is not easy, and that doesn't even use measurement. I never touched
the Copenhagen interpretation in my study of the QFT and it was still hard.

~~~
lisper
There's a difference between _understanding_ QM and _doing_ QM. (See the
preface of Griffiths's book [1] for an explanation of what I mean by that
distinction.) _Doing_ QM is indeed hard. But it has been claimed by many (with
again Feynman being the canonical example) that _understanding_ QM is hard
too, so hard in fact that one should not even try. That is false. Anyone who
understands high school algebra and what a complex number is can understand
QM.

> I meant that all of the predictions are identical

That depends on what you mean. Formally, yes, all the predictions are
identical. But informally, the Copenhagen interpretation predicts that the
EPRG experiment should yield FTL communications.

> If that is not true, either I have been terribly mislead or you have a
> groundbreaking result in QM.

I don't know how groundbreaking they are, but QIT has led me to profound
insights that Copenhagen could not, e.g. [2] [3].

It also leads to interesting questions, for example: on QIT it is immediately
obvious that it is not possible to undo an entanglement without bringing the
entangled particles physically back together again (because if this were
possible that would lead to FTL). So it is a deep mystery why the two-slit
experiment works at all, because every photon must be entangled at a minimum
with the particle that emitted it. I don't know, but I suspect that if you
pursued this line of thought it would lead to some insight about how lasers
work.

[1] [http://www.fisica.net/mecanica-
quantica/Griffiths%20-%20Intr...](http://www.fisica.net/mecanica-
quantica/Griffiths%20-%20Introduction%20to%20quantum%20mechanics.pdf)

[2] [http://blog.rongarret.info/2014/12/quantum-teleportation-
dem...](http://blog.rongarret.info/2014/12/quantum-teleportation-
demystified.html)

[3] [http://blog.rongarret.info/2014/10/parallel-universes-and-
ar...](http://blog.rongarret.info/2014/10/parallel-universes-and-arrow-of-
time.html)

------
rcthompson
More generally, we can show that the attitude that regards any unproven
statement as having a 50% probability of being true is trivially self-
contradictory. Suppose X is an unknown number, and all we know is that it's
between 0 and 100. We can make the following statements about X, none of which
is provable with the information we currently have:

1. X < 1

2. 49 < X < 51

3. X > 99

If we assume that each of these statements has a 50% probability of being
true, then we trivially get a contradiction: mutually exclusive probabilities
are additive, so the probability of the union of these 3 options becomes 150%.

So the idea that a 50% chance of truth represents a state of no knowledge
about the truth of a statement is just wrong. If we actually knew that any one
of the above statements had a 50% probability of being true, we would clearly
have more information about X than we would otherwise. There is no sense in
which a 50% probability of truth is a useful default.
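The arithmetic above can be checked mechanically. A minimal Python sketch
(`coherent` is a hypothetical helper written for illustration, not a library
function):

```python
def coherent(probs):
    # probabilities of mutually exclusive statements must sum to at most 1
    return sum(probs) <= 1.0

# 50% assigned to each of the three mutually exclusive statements above
naive = [0.5, 0.5, 0.5]
print(sum(naive))       # 1.5
print(coherent(naive))  # False

# a uniform prior on [0, 100] instead gives 1%, 2% and 1%, which does cohere
print(coherent([0.01, 0.02, 0.01]))  # True
```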

~~~
newen
You are misapplying probabilities here. If you assume no knowledge, and you
have two options for X, true or false, then it's a 50% probability of X being
true.

Similarly, if there are N possibilities for X, and no prior knowledge for X,
then there's a 1/N probability of X being any given one of those possibilities.

The key is what prior knowledge you bring to the table.

~~~
rcthompson
The random variable X has an infinite number of possible values (real number
between 0 and 100). But each of the statements I made has two options: true or
false. Despite each having two options, there's no way that all 3 statements
can have a 50% probability of truth. The whole point of my comment was that
"no prior knowledge" cannot possibly be equivalent to "equal probability of
all options".

~~~
skissane
> The whole point of my comment was that "no prior knowledge" cannot possibly
> be equivalent to "equal probability of all options".

But your scenario isn't "no prior knowledge". In your scenario, you know X is
a real number between 0 and 100, which is prior knowledge. And so the correct
thing to do is to use that prior knowledge to assign probabilities, which gives
you 1%, 2% and 1% for each of your propositions, not 50%.

~~~
rcthompson
You don't have any knowledge of X beyond the fact that it's between 0 and 100.
It could be uniformly distributed, it could be the number of heads in 100 coin
flips, or it could be the average number of yards that a kickoff was returned
for in all NFL games last season. It could even just be 73. Assuming a uniform
distribution is an arbitrary prior assumption that you're making with no
support from any data, and it's no more or less reasonable than any other
prior assumption you could make about the distribution. There's nothing
special about the uniform distribution that makes it some kind of natural or
default prior for a random variable on a bounded interval.
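The prior-dependence is easy to see numerically. A sketch, using two of the
priors mentioned above (the exact figures depend only on the chosen prior):

```python
import random

def estimate(draw, trials=100_000):
    # Monte Carlo estimates of the three statements' probabilities under a prior
    hits = [0, 0, 0]
    for _ in range(trials):
        x = draw()
        hits[0] += x < 1
        hits[1] += 49 < x < 51
        hits[2] += x > 99
    return [h / trials for h in hits]

random.seed(0)

# prior 1: uniform on [0, 100] -- roughly 1%, 2%, 1%
p_uniform = estimate(lambda: random.uniform(0, 100))

# prior 2: number of heads in 100 fair coin flips -- statement 2 jumps to
# about 8%, while statements 1 and 3 become astronomically unlikely
p_flips = estimate(lambda: sum(random.random() < 0.5 for _ in range(100)),
                   trials=20_000)
print(p_uniform, p_flips)
```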

~~~
skissane
> There's nothing special about the uniform distribution that makes it some
> kind of natural or default prior for a random variable on a bounded
> interval.

I don't agree. Suppose I carry out this process:

Step 1) Generate a random histogram over the bounded interval

Step 2) Pick a random number according to the distribution that histogram
represents

If I repeat that process a large number of times (say a few million), what
will be the distribution of the result? It will be approximately uniform, and
the more times I repeat the process the closer to uniform I get. So the
uniform distribution actually is special, as the average case of randomly
chosen distributions, and hence why it should act as the default when the only
thing we know about the distribution is an upper and lower bound.
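As a sketch of this two-step process (with the assumption that "a random
histogram" means iid uniform weights, normalized, which is itself a choice
doing real work here):

```python
import random

def random_histogram(bins):
    # step 1: random non-negative weights over the bins, normalized to sum to 1
    w = [random.random() for _ in range(bins)]
    total = sum(w)
    return [x / total for x in w]

def sample(hist):
    # step 2: draw one bin index according to that histogram
    r, acc = random.random(), 0.0
    for i, p in enumerate(hist):
        acc += p
        if r < acc:
            return i
    return len(hist) - 1

random.seed(1)
bins, trials = 10, 200_000
counts = [0] * bins
for _ in range(trials):
    counts[sample(random_histogram(bins))] += 1
freqs = [c / trials for c in counts]
print(freqs)  # each entry is close to 1/bins = 0.1
```

By symmetry each bin's expected weight is 1/bins, so the pooled samples come
out approximately uniform, at least under this particular notion of "random
histogram".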

~~~
rcthompson
If you're trying to say that the uniform distribution represents an average of
the uncountably infinite set of possible distributions on an interval, I don't
see how you can justify that. "Generate a random distribution on an interval"
is not an obviously well-defined procedure, so I don't think you can say
anything about what the distribution produced by "averaging" every possible
distribution would look like. Which matches the fact that you have no
information on what the distribution of X looks like, other than that it is
zero outside of the interval [0,100].

Edit: In any case, the process you've described is just begging the question
(i.e. it is circular reasoning). In order to prove that the uniform
distribution is a natural choice for X, you assume that choosing uniformly
from the set of all possible distributions is natural. Even if there was a
finite number of possible distributions to choose from such that choosing
randomly from that set according to a uniform distribution was a well-defined
operation, you're still assuming that it's natural to assign equal probability
to all options in order to prove that it's natural to assign equal probability
to all options.

~~~
skissane
> If you're trying to say that the uniform distribution represents an average
> of the uncountably infinite set of possible distributions on an interval, I
> don't see how you can justify that.

So, is this X actually an arbitrary real number? Because real numbers are not
known to exist in the physical world. In the physical world, we will take some
measurement to a finite number of digits, and also produce an estimate of the
error in that measurement. The measurement and the error bounds are all
described by rational numbers, of which there are countably many. We don't
know whether the set of possible "real values" (as opposed to measurements –
if that concept is even meaningful) is actually uncountable – for all we know,
all the "real values" could be rational, or could be drawn from some countable
subset of the reals. So, if we are dealing with some kind of measured physical
quantity, we actually know that the interval [0,100] is countable. We also
know that the value could only possibly be measured to some finite degree of
accuracy, so we actually know that the interval [0,100] is a finite set.

> In any case, the process you've described is just begging the question (i.e.
> it is circular reasoning).

Maybe it is just an axiom? At some point we just have to accept some things as
axiomatic, so why not accept that? It can't be used to derive a contradiction,
and it feels self-evident to me (even if not to you), so I feel justified in
accepting it as such.

~~~
rcthompson
I think it's probably reasonable to say it's an axiom. That's compatible with
my claim that it's an arbitrary choice that is not inherently more correct nor
better supported by any information than any other choice, except perhaps in
the sense that it feels better or more intuitive.

~~~
skissane
Well, what happens if we adopt alternative axioms (e.g. some sort of
assumption of a normal distribution instead of a uniform one)? Are those
alternative axioms consistent?

Maybe we might be able to show that agents adopting the uniform axiom make
better (more rewarding) decisions in practice than those who adopt a competing
axiom do?

I don't have a proof, but my gut says yes.

------
lokerfoi
What made me realize that P?=NP is not that important for practical issues is
the unbelievable effectiveness of heuristics. They seem to get far closer to
the optimum than the best approximation algorithm known for the problem.
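A toy travelling-salesman comparison illustrates this on a small instance
where the optimum can still be brute-forced. This is a generic sketch
(nearest-neighbor construction plus 2-opt local search), not any particular
solver:

```python
import itertools
import math
import random

def tour_length(points, order):
    # total length of the closed tour visiting points in the given order
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force(points):
    # exact optimum: try every tour that starts at city 0
    n = len(points)
    return min(tour_length(points, (0,) + perm)
               for perm in itertools.permutations(range(1, n)))

def nearest_neighbor(points):
    # greedy construction: always move to the closest unvisited city
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[tour[-1]], points[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def two_opt(points, tour):
    # local search: reverse segments while doing so shortens the tour
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(points, cand) < tour_length(points, tour) - 1e-12:
                    tour, improved = cand, True
    return tour

random.seed(42)
pts = [(random.random(), random.random()) for _ in range(8)]
opt = brute_force(pts)
heur = tour_length(pts, two_opt(pts, nearest_neighbor(pts)))
print(round(heur / opt, 3))  # close to 1.0 on instances this small
```

On an instance this size even a simple heuristic typically lands on or very
near the optimum, which is the practical point being made here.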

~~~
lorenzhs
That's not entirely true. I'm a PhD student in theoretical computer science
and some of my colleagues work on (hyper-)graph partitioning, both of which
are NP-complete. For most real-world instances, nobody has any clue what the
optimum cut/imbalance/... is. It would be very useful to evaluate the quality
of the heuristics, though. A 0.5% improvement is suddenly much more impressive
if you can show that you're only half as far from the optimum as your closest
competitor. Or are you not seeing any improvement on instance X because
whatever you're trying isn't working, or because you've already got the
optimum? Etc.

~~~
lokerfoi
One of the best public examples is the LKH solver for the travelling salesman
problem. I'm not familiar with problems for which proving the optimality of a
solution to a specific problem instance has not been accomplished.

For example, for travelling salesman and vehicle routing there are many
instances with known optima.

Even for the instances with unknown optima, tight bounds are easily achieved.

------
Grue3
Ok, but how long did it take to find a polynomial algorithm for primality
checking (published in 2002)? [1] And NP-complete problems are even harder
than that! The history of mathematics is rife with conjectures that would be
considered true if viewed "scientifically", but have an extremely large
counterexample that disproves the whole thing. [2]

[1]
[https://en.wikipedia.org/wiki/AKS_primality_test](https://en.wikipedia.org/wiki/AKS_primality_test)
[2]
[https://en.wikipedia.org/wiki/P%C3%B3lya_conjecture](https://en.wikipedia.org/wiki/P%C3%B3lya_conjecture)

~~~
CJefferson
Nothing comes close to P=NP. Again and again, across thousands of problems,
almost everything ends up being P or NP-hard, and never both.

It is certainly possible that P=NP, but it will involve basically starting
most of the last 30 years of theoretical computer science again.

~~~
baddox
I love the “electric fence” examples, but I’m not as skeptical as Scott that
perhaps humans simply haven’t invented the mathematical tools required to
break this fence. Isn’t that something that has happened plenty of times?

You could pretty easily write a parody of this article focused on taking
square roots of negative numbers. For generations it looked obvious that no
matter which number we tried to square, it would never result in a negative
number. You can see the same sort of electric fence that _can’t_ just be a
coincidence: we can find the square root of zero, but even the _slightest_ bit
less than zero and suddenly there is no square root!

Obviously this sounds silly, because we know that mathematicians invented a new
tool: imaginary numbers (a name apparently given mockingly by critical
mathematicians), which can be used not only to find square roots of negative
numbers, but also to solve problems that don’t obviously involve square roots
of negative numbers.
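The "new tool" of the parody is a one-liner today, e.g. in Python's standard
`cmath` module (a trivial sketch):

```python
import cmath

# the "fence": no real number squares to -4; extending to the complex
# plane makes the "impossible" square root routine
z = cmath.sqrt(-4)
print(z)      # 2j
print(z * z)  # (-4+0j)
```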

~~~
pvg
It sounds silly because the analogy is a bit silly. Mathematicians didn't
spend centuries trying and failing to take square roots of negative numbers,
endlessly studying this as some sort of important question or goal. Complex
numbers popped up fairly quickly after the serious study of the roots of
polynomials began. They're intrinsic to it.

[https://en.wikipedia.org/wiki/Complex_number#History](https://en.wikipedia.org/wiki/Complex_number#History)

P≠NP is not like that.

~~~
SomeStupidPoint
It was something like 1500 years between the discovery of the square root of 2
and the square root of -1.

If you view mathematics as the exploration of progressively stranger objects,
the fact that there's such a gap between discovering two roots should make us
cautious of discounting things after a 100-year search.

We don't know what we don't know.

~~~
pvg
The _irrationality_ of the square root of 2 was known long (longer than 1500
years) before the invention of complex numbers, never mind root 2 itself. But
whatever the duration, there wasn't relatively uninterrupted, continuous study
of mathematics over it - that's a fairly recent thing. Mathematical (and
other) knowledge was routinely lost or not seriously pursued at all. This
supposed 'gap' between two fairly arbitrary discoveries doesn't inform our
understanding of the development of mathematics itself as much as you seem to
think.

------
amelius
Nice post, but he loses sympathy with me for shaming that guy.

~~~
throwaway8537
Normally I would agree, but it should be noted that Lubos is a _major_
asshole, and deserves every bit of shaming he gets. He's easily in the top 3
biggest assholes I've had to deal with in my academic career, and that's
saying something.

He has it all: extreme levels of arrogance, narcissism, pedantry, absolute
lack of empathy, extreme aggressivity. He is also a notable science denialist:
climate change denier, evolution denier...

~~~
amelius
Could be. But how difficult is it to just ignore him, or if there is such an
urge, write in a way that subtly addresses him while other readers have no
clue of what is going on, or use phrases such as "some people say ..."? Plenty
of good options, if you ask me.

~~~
naasking
You're missing the history where Lubos was spamming Aaron's other articles on
these topics with all of the claims that Aaron deconstructs in this post.
Readers of Aaron's blog were rightly confused about what to think. This post
is the proper response to help your readers make sense of this.

~~~
evanb
"Scott", or "Aaronson", but not "Aaron".

------
EGreg
Reminds me of this

[http://www.lbatalha.com/blog/feynman-on-fermats-last-
theorem](http://www.lbatalha.com/blog/feynman-on-fermats-last-theorem)

------
mcguire
"Telling truth to competence."

Brilliant.

------
justifier
Are there other theories that rely on the assumption that p≠np?

Like how there are theories that begin "assuming the Riemann
hypothesis|Goldbach conjecture is correct.."

Disclaimer.. it is my intended inference to show that p=np

~~~
SAI_Peregrinus
The existence of one-way functions is one of those things that requires P≠NP.
So if P=NP, all cryptographic hash functions would be broken.
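The NP structure of hash inversion can be sketched with Python's standard
`hashlib` (the function name here is just illustrative): checking a candidate
preimage is one cheap hash evaluation, and it is only the *search* for a
preimage that is believed hard, which is exactly what P=NP would collapse.

```python
import hashlib

def is_preimage(candidate: bytes, target_digest: str) -> bool:
    # the NP-style "certificate check": one hash evaluation, polynomial time
    return hashlib.sha256(candidate).hexdigest() == target_digest

target = hashlib.sha256(b"some secret input").hexdigest()
print(is_preimage(b"some secret input", target))  # True
print(is_preimage(b"wrong guess", target))        # False
# recovering a preimage from `target` alone is the hard direction;
# if P=NP, that search would also become polynomial-time
```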

~~~
justifier
I was thinking along the lines of a practical and novel solution to another
problem built on top of the assumption p≠np

But fair point; that is why I usually say such confidence in those
technologies is disconcerting

------
grzm
(2014)

------
SomeStupidPoint
> Because, like any other successful scientific hypothesis, the P≠NP
> hypothesis has passed severe tests that it had no good reason to pass were
> it false.

Is this the crux of the post?

That seems like an extremely crap reason to believe a mathematical theorem.
What am I missing?

I also don't find his heuristic convincing -- to the point his description is
making me question if P=NP is actually possible after all: a close complex
border that seems to yield increasing numbers of "close calls" sounds exactly
like the situation where there's a lurking, sparse class of "touches" that we
haven't thought of, particularly when under 100 years of research has been put
into the topic.

~~~
nl
_That seems like an extremely crap reason to believe a mathematical theorem.
What am I missing?_

What's the alternative?

As I see it, the possible ways to think about it are as follows:

1) You must think of it as 50/50 possibility of being true _until_ an absolute
proof is found.

2) You look at places where it could fail, and if it passes then you update
the likelihood based on that.

To me (and as this post argues) it makes sense to follow (2). Not only is it
more _useful_ practically (you can use that assumption to solve other things,
conditional on it being true) but also it reflects the reality of the world.
Even if it turns out _not_ to be true in every case it is clear that in the
majority of cases it is true.

Maybe it is untrue but it is possible to define the set of cases where it is
untrue. If this is the case, it seems likely that class is very small, so it
is _mostly_ true. That is the opposite of a proof of course, but it does seem
reasonable to think about your confidence of it being true as being somewhat
proportional to the likely size of the class.

 _close complex border that seems to yield increasing numbers of "close calls"
sounds exactly like the situation where there's a lurking, sparse class of
"touches" that we haven't thought of_

That's not really how it works though. If it keeps happening and those classes
don't touch, then it typically means they aren't as close as they seem (ie,
there is an extra dimension in which they are a long way apart). In this case
it seems like the P/NP divide is actually a thing which means things which
appear close actually aren't, so if you analyze them in that space they don't
appear close together at all.

~~~
Filligree
> Maybe it is untrue but it is possible to define the set of cases where it is
> untrue. If this is the case, it seems likely that class is very small, so it
> is mostly true. That is the opposite of a proof of course, but it does seem
> reasonable to think about your confidence of it being true as being somewhat
> proportional to the likely size of the class.

Any NP-complete problem reduces to any other, so if P=NP for any of them then
it's true in general. Some more efficiently than others, of course.

~~~
nl
This is a fair point.

I guess my only response would be that some of the complexity classes in "
_some more efficiently than others, of course_ " might be impractical even if
they are theoretically possible.

~~~
Filligree
Which might be the saving grace should the equality turn out to be true, given
the sheer number of scary-powerful algorithms that happen to be NP-complete.
Certainly it'd be convenient if they were solvable, but... probably not safe,
you understand?

Not really relevant here, though.

