There's also a slatestarcodex post about Scott Aaronson: http://slatestarcodex.com/2015/01/01/untitled/
First, even the theoretical computer scientist's nightmare (a world where P = NP) might make little practical difference: the best algorithm for an NP-complete problem could still take, say, O(n^1000) time.
Second, for many problems even O(n^2) is too much (think of big data problems). (Also, given that computation requires physical resources, you cannot scale it arbitrarily.) So even in P the actual exponent matters a lot.
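To make the "even O(n^2) is too much" point concrete, here is a back-of-envelope calculation with illustrative, assumed numbers (a billion items, a billion simple operations per second):

```python
# Back-of-envelope: why O(n^2) is hopeless at "big data" scale.
# Both numbers below are illustrative assumptions, not measurements.
n = 10**9          # a billion records
ops_per_sec = 10**9

quadratic_ops = n**2                      # 10^18 operations
seconds = quadratic_ops / ops_per_sec     # 10^9 seconds
years = seconds / (3600 * 24 * 365)

print(f"O(n^2) on {n:.0e} items: about {years:.0f} years")
# By contrast, an O(n log n) pass over the same data is on the order of minutes.
```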
One interesting physical case is the spontaneous emission of a photon from an excited atom. It looks like an exponential decay, but the theory shows that the long-time decay has to be polynomial (I don't remember its power, whether more like 1/t^3 or 1/t^6, but not too high). To my knowledge, we haven't detected the polynomial tail, because by the time it sets in, the probabilities are already extremely low.
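Why the tail is so hard to see can be made concrete with a toy calculation. In the sketch below the decay rate is set to 1 and the tail prefactor C = 1e-6 is an invented illustrative number (the real value depends on the atom, and the 1/t^3 form is only the long-time asymptote): by the time the power-law tail overtakes the exponential, the survival probability is already astronomically small.

```python
import math

# Illustrative sketch in arbitrary units: survival probability exp(-t)
# (decay rate set to 1) versus an assumed power-law tail C / t^3.
# C = 1e-6 is an invented value, chosen only to show the effect.
C = 1e-6
t = 2.0
while math.exp(-t) > C * t**-3:
    t += 0.01   # step forward until the tail overtakes the exponential

print(f"tail overtakes exp(-t) near t = {t:.1f} lifetimes")
print(f"survival probability there: ~{math.exp(-t):.0e}")
```

With a smaller (more realistic) prefactor, the crossover moves out to even more lifetimes and even smaller probabilities.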
So, is there a physical effect that cares about computational complexity? Or a phenomenon in which it matters that we can simulate it in "only" O(n^100), not O(2^n)?
The No-SuperSearch Postulate (there is no physical means to solve NP-complete problems in polynomial time) is not a purely mathematical conjecture. It's also a claim about the laws of physics. The postulate includes P != NP as a special case, but it is stronger.
If you assume the postulate is true, you can use it to explain things like why quantum mechanics is linear, why adiabatic systems have small spectral gaps, why closed timelike curves don't exist, the black hole information paradox, the impossibility of relativistic computers, and why QM amplitudes are not directly observable. The question of how much physics you could derive from the No-SuperSearch postulate is interesting.
QM is linear for many other reasons (the simplest being related to not being able to communicate faster than light); the smallness of spectral gaps is AFAIK a conjecture.
And in general, playing with causality will bring many other problems.
One relevant example is protein folding, which is NP-complete. Why then have we not found arrangements such that protein conformations are solutions to NP-complete problems? It's reasonable to postulate that nature does not actually solve NP-complete problems and that almost all carriers of problematic instances died off (not completely, since prions exist after all).
If this is the case, then we can make a prediction: there exists some polynomial-time algorithm able to predict the conformations of biologically encountered instances to high accuracy in reasonable time, and a program search such as deep learning will one day find this algorithm given sufficient data. That would be a huge deal.
The following links argue, more or less, that from a Bayesian perspective a prior weighted heavily toward small exponents in the case P = NP is reasonable, given how rare genuinely intractable polynomial-time algorithms are in practice (and how quickly they seem to fall to reasonable running times under focused attention).
Paragraph 3 in http://www.scottaaronson.com/democritus/lec6.html
Argument 1 in https://windowsontheory.org/2013/05/06/reasons-to-care-in-ho...
People have been pursuing this path for a long time. See, for example, http://www.ef.uns.ac.rs/mis/archive-pdf/2011%20-%20No4/MIS20... for an algorithm for the traveling salesman problem based on how ants work.
Sure, it may. But there are precious few examples in the wild where we're dealing with crazy powers like that. I think the bigger (but still quite small) failure of P as a measure of the "ease" of computation is that it only considers the worst case. A good example is graph isomorphism. A quasipolynomial algorithm for graph isomorphism was recently developed, the best bound we've found for the problem yet. Yet even before that, it was extremely difficult to find examples where the existing algorithms didn't run in polynomial time, even when actively looking for them. But again, this is the exception, not the rule.
> Second, for many problems even O(n^2) is too much (think of big data problems). (Also, given that computation requires physical resources, you cannot scale it arbitrarily.) So even in P the actual exponent matters a lot.
Very true, but TCS spends a lot of time developing new algorithms to shrink exponents as well. A lot of the methods and skills needed for making better algorithms to get problems into P also apply to moving them around within P. And many of the same tactics for trying to figure out exactly how P and NP fit together apply to trying to figure out the same for P and BQP.
Since you have a background in quantum physics and (I assume) care about that field, are you aware that quite a bit of the TCS community works on quantum problems? There have been plenty of results that stand to be very physically relevant. A good example is the recent (quite rapid) progress in analyzing symmetric cryptography schemes' vulnerability to attacks by a quantum computer. Interestingly, many of these start from the same jumping-off point that the asymmetric attacks did: namely Simon's algorithm (an important stepping stone toward Shor's algorithm).
The only thing I am criticizing here is the _metaphysical_ implications of computational complexity classes (and also jargon that is misleading for non-TCS people - e.g. O(n^100) is not "efficient" or "fast"). Furthermore, as you pointed out, some problems have practical solutions that usually work efficiently, even if not always, or that give an approximate but still useful solution.
> There have been plenty of results that stand to be very physically relevant
What I meant was not "can be applied to physical systems" (I agree!), but "is a foundation for physical laws" (I am highly skeptical and I am yet to see an example).
Let's see if I can provide a very rudimentary intuition about why this might happen. YMMV, and I'm sure this isn't the whole story.
O(n^2) basically implies that we have to look at every pairwise combination of elements. Higher powers might imply needing to look at combinations of multiple elements / paths / whatever. In practice, these algorithms tend to be more or less tractable.
O(2^n), on the other hand, generally implies a brute force search, where we need to consider every possible solution, or every possible combination of elements. This is essentially the same as saying we can't do much better than guesswork.
So O(n^k) versus O(2^n) is basically "looking at small groups of elements of known size" versus "trying every possible solution."
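The contrast can be made concrete with two toy problems (the function names and inputs here are my own illustration, not from any particular library): an O(n^2) "look at every pair" algorithm, and an O(2^n) "try every possible subset" search.

```python
from itertools import combinations

# O(n^2): closest pair of numbers, by checking every pairwise combination.
def closest_pair(xs):
    return min(combinations(xs, 2), key=lambda p: abs(p[0] - p[1]))

# O(2^n): subset sum, by brute-force search over every possible subset --
# essentially "we can't do much better than guesswork".
def subset_sum(xs, target):
    for r in range(len(xs) + 1):
        for subset in combinations(xs, r):
            if sum(subset) == target:
                return subset
    return None

print(closest_pair([3, 10, 7, 2]))    # -> (3, 2)
print(subset_sum([3, 10, 7, 2], 12))  # -> (10, 2)
```

Doubling the input roughly quadruples the work for the first function, but doubles the *number of subsets* for the second, which is what separates the two regimes.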
This isn't the whole story—there are problems in between the two groups, though a couple of big ones have been reduced to polynomial time in recent years. But to understand the "metaphysical" interpretations that some CS people sometimes loosely apply to the two classes, it helps to look at the actual distribution of real world algorithms. It's pretty clumpy. There's an empirical element here.
Such is the case with P and the notion of being "easy" to compute. Yes, some algorithms in P are actually qualitatively "hard" and some outside it are actually qualitatively "easy". But in the vast majority of cases, finding that a problem is in P means we've drastically reduced its difficulty, and proofs that a problem isn't in P establish bounds that make it infeasible in the general case.
In what sense do you mean "metaphysical"?
Personally, I would say computational complexity is both "physical" and "metaphysical". It's "physical" in the sense that it gives fundamental limits to what can/cannot be done in the physical world using some amount of resources (say, anything obeying Blum's axioms). It's "metaphysical" in the sense that those limits are derived purely mathematically and are "imposed on" the physical world; rather than, for example, arrived at by observing the physical world and imposing consistency with those observations on some mathematical model. I think of these computational "laws of nature" in a similar way to the "law of nature" that e^(i*pi) + 1 = 0.
In other words, we can imagine (AKA develop a consistent mathematical model of) a world operating by Newtonian physics; we can do the same for a world operating by special or general relativity; and for the standard model; we can do it for worlds with N dimensions; with Euclidean, hyperbolic or spherical geometry; and so on. All of these "make sense", in a self-consistent way, and it's up to physicists to figure out which of those models best describes our physical world.
On the other hand, there don't seem to be self-consistent models of universal systems which can solve their own halting problem, or models which don't possess the hierarchical complexity classes we've found (although, of course, the machine models may differ, we may introduce hierarchies of halting oracles, etc.). In that sense, these seem to be "laws of nature" which are intrinsic to our models, by virtue of their mathematical definition, rather than being imposed from the outside in order to agree with observations.
So yes, there is a physical cost to non-reversible computation.
"Ultimate physical limits to computation" http://arxiv.org/pdf/quant-ph/9908043v3.pdf
"Limits on fundamental limits to computation" http://arxiv.org/pdf/1408.3821v2
"The physics of information processing superobjects: daily life among the Jupiter brains" http://diyhpl.us/~bryan/papers2/The%20physics%20of%20informa...
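A quick number for that physical cost: by Landauer's principle, erasing one bit of information dissipates at least k_B * T * ln(2) of energy. The calculation below just plugs in room temperature; the bound is a known result, the code is only arithmetic.

```python
import math

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2).
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 300.0            # room temperature, K

limit_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {limit_per_bit:.2e} J per erased bit")
# ~2.9e-21 J: many orders of magnitude below what real chips dissipate
# per operation, but a genuine lower bound for irreversible computation.
```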
I thought this was the other way around: the theory predicts exact exponential decay, but experiments aren't entirely clear that that is always the case. Depending on which textbook or paper you read, some say the deviations from exact exponential decay are just experimental error, while others say they are evidence of additional effects that the exponential decay law does not account for.
From quantum field theory you get polynomial decay, and it's quite fundamental (related to causality and holomorphic functions, i.e. the Kramers–Kronig relations); if it were not the case, a lot of physics would need rewriting.
I remember it from a lecture, many years ago. Some googling brings me to this thread with links to papers:
I think you're oversimplifying, although I agree that I was also oversimplifying when I said that the theory predicts exact exponential decay.
What I get from that Stack Exchange thread and the linked papers is this: in an idealized (and not physically possible for a real system) model in which every individual decay is statistically independent from every other, and in which all of the quantum states that are possible end points of the decay are unoccupied, the theory predicts exact exponential decay.

But in real physical systems, neither of those conditions is exactly true. Individual members of an ensemble of decaying quantum systems (such as radioactive atoms in a sample) interact with each other (either directly or indirectly via states in the environment), so their decays are not completely independent; and any real decaying system has an external environment which includes some occupied "target" quantum states.

When these effects are taken into account, the theory predicts a decay law which is often called "power law" but which I would describe as more of a "truncated exponential": it replaces an infinite series of power law terms (the exact exponential) with a finite number of them. The actual measured decay law in a given experiment will depend on how much the experiment probes the portion of the state space where the above effects are significant (for example, a decay law measured over a very long time can deviate measurably from exponential where a measurement over a shorter time would not).
Yes, in these subtle effects backaction may be important. (The whole non-exponential decay is because the "emitted" EM field acts back on the particle.)
I'm not seeing that in the linked papers. I'm seeing descriptions of exponential decay under certain conditions, and deviations from that under other conditions.
Please don't generalize one or a few computer scientists to the whole group.
By the way, a similar thing can be said for the halting problem. We may not be able to prove mechanically whether an arbitrary program halts, but if we can do it for the programs appearing in practice, that may be good enough. Still, the proof is of immense theoretical value.
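The practical compromise can be sketched as follows (the names and the generator-based "one yield = one step" model are my own illustration): halting is undecidable in general, but a bounded simulation gives a sound partial answer that often settles the question for real programs.

```python
# Bounded halting check: True if the program halted within the budget,
# None if we simply can't tell yet. Sound, but necessarily incomplete.
def bounded_halts(program, arg, step_budget):
    gen = program(arg)   # program modeled as a generator: one yield = one step
    for _ in range(step_budget):
        try:
            next(gen)
        except StopIteration:
            return True      # program halted within the budget
    return None              # budget exhausted: halting status unknown

def countdown(n):            # halts for any n >= 0
    while n > 0:
        n -= 1
        yield

def loop_forever(_):         # never halts
    while True:
        yield

print(bounded_halts(countdown, 10, 100))    # -> True
print(bounded_halts(loop_forever, 0, 100))  # -> None
```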
s/computer scientist/some theoretical computer scientists/
(You are right that I missed a quantifier; this regex would also fix a typo.)
> Still the proof is of immense theoretical value.
I enjoy mathematics, also in its pure form. Sure, there is much in it that is nice, valuable and beautiful. Some of it is of practical use.
Yet, misusing it is a problem. Some otherwise great minds (like Roger Penrose) make some questionable claims about consciousness based on unjustified implications for the real world.
Alas, we can't even do that. E.g. I have a simple program that systematically looks for a counterexample to Fermat's Last Theorem, and halts if it finds one.
We now know that this simple and comparatively natural program will never halt. But the proof is crazy.
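A minimal version of such a program is easy to write. The `bound` parameter below is my addition, so the sketch actually terminates when run; the "real" program would pass `bound=None` and, as we now know from Wiles's proof, run forever.

```python
from itertools import count

# Systematically search for a counterexample to Fermat's Last Theorem:
# a^n + b^n = c^n with n >= 3 and 1 <= a <= b <= c. Halt if one is found.
# `bound` (an addition for illustration) caps the search so it terminates.
def fermat_counterexample(bound=None):
    totals = count(6) if bound is None else range(6, bound)
    for total in totals:                  # enumerate by a + b + c + n
        for n in range(3, total - 2):
            for a in range(1, total - n - 1):
                for b in range(a, total - n - a):
                    c = total - n - a - b
                    if c >= b and a**n + b**n == c**n:
                        return (a, b, c, n)
    return None    # only reachable in the bounded version

print(fermat_counterexample(bound=40))    # -> None
```

Nothing in the code itself hints that it can never return a tuple; that fact lives entirely in the (crazy) proof.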
Most cryptography rests on open problems so I don't think the question is settled yet. In either case, the answer is of immense importance.
Firewalls[1,2], for starters.
Metaphysics (and physics) is concerned with uncovering the fundamental truths of the universe. Computational complexity indicates this may be unachievable.
The waterfall conjecture was stated as:
'Given any chess-playing algorithm A that accesses a "waterfall oracle" W, there is an equally good chess-playing algorithm A', with similar time and space requirements, that does not access W.'
From this we can assume that the waterfall in the article is a metaphor for us as external observers giving semantics to the computation. Thus, by providing the reduction, we show that the computation in and of itself does not have semantics. However, why would we assume that A' exists? Even if A' exists, can't we reduce A -> A', and if we can reduce W to A, then by proxy W -> A'?
Could somebody help me?
Could you elaborate on the oracle approach? Because I interpreted it as being the external observer in this case. From computational complexity I know that an oracle can let a nondeterministic Turing machine solve NP problems in polytime, but I am not sure how this oracle stuff loops back to the philosophical argument.
The proof provided by Aaronson is really opaque as to what it proves and how it shows that the argument is invalid.
To say philosophers should focus on unfalsifiable ideas is to say philosophers should be antipragmatic and inconsequential. One could easily argue that many are, but to encourage it? That's not necessary.
We need to focus on everything, and think about everything. We shouldn't ask "our philosophers" to "think" about things. We should all learn how to think and become better philosophers for ourselves, and find a way to openly contribute the progress of our thoughts.
Philosophy is the origin of all the sciences. Starting with Mathematics, as various topics became tractable they became self-standing disciplines, leaving the non-tractable ones behind. Thus inherently philosophers are the ones who work on the really hard problems.
(True, epistemology remains in philosophy but I would argue that the tractable parts moved into mathematics).
AI suffers from the same problem -- once something becomes well accepted people say, "well THAT isn't AI; AI is <some other intractable domain>".
Ultimately any thought can be advanced with further thinking because concepts and ideas are progressive. Our mind is an incredible machine, and for anyone to feel thinking is only for philosophers is truly doing themselves a disservice. We all need to learn to think harder. And our "philosophers" need to find ways to teach how to think better and harder and newer thoughts. What anyone thinks or thought of in the past is more history than philosophy.
What does that even mean? The entire question only makes sense from a human point of view.
From the FAQ
> If a story has had significant attention in the last year or so, we kill reposts as duplicates.
So, with more than a year having passed, I guess this is OK?
The pointers to past discussions are helpful if there was a real discussion then.
I strongly dislike all those "but look, it has been submitted before (but nobody ever commented)" replies, because they add absolutely nothing, but in this case it's different.
Alas, they are sort-of a standard.