Hacker News
Why Philosophers Should Care About Computational Complexity (scottaaronson.com)
140 points by sweis on Aug 8, 2011 | 36 comments

A given Turing machine M either accepts, rejects, or runs forever (when started on a blank tape)... [W]hich one it does is an objective fact, independent of our formal axiomatic theories, the laws of physics, the biology of the human brain, cultural conventions, etc. (p. 43)

Nice way to put it. As Franzen points out in "Inexhaustibility", although non-logicians sometimes find it puzzling, the above is related to what mathematicians mean when they say that a statement is true without further qualification. For instance, take Gödel's first incompleteness theorem. It states that if F is any consistent formal system capable of proving statements about whether or not arbitrary Turing machines halt, then there are true statements which cannot be proved within F. (Here, consistent means that it's not possible within F to prove both a statement "S" and its negation "not S".)
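
The connection between incompleteness and halting can be sketched as proof search. The following is Python-flavored pseudocode (my own illustration, not from the thread; `proofs_of_F` and `proves` are hypothetical helpers): if a consistent F settled every statement of the form "M halts" / "M runs forever", this procedure would decide the halting problem, contradicting Turing's theorem — so some true statement of that form must be unprovable in F.

```
# Pseudocode sketch; proofs_of_F and proves are hypothetical.
def would_decide_halting(M):
    for p in proofs_of_F():              # enumerate all proofs in F
        if proves(p, f"{M} halts"):
            return True
        if proves(p, f"{M} runs forever"):
            return False
    # If F settled every halting statement, this loop would always
    # terminate -- giving an impossible halting decider.
```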

In the same sense, it's true (as Aaronson wrote in "Logicians on Safari") that "there’s a finite (and not unimaginably-large) set of boxes, such that if we knew how to pack those boxes into the trunk of your car, then we’d also know a proof of the Riemann Hypothesis."

They already do. The Incompleteness Theorem ("some logical statements are true but unprovable", e.g., "this program will terminate" for some arbitrary programs, a.k.a. the Halting Problem) is one of the most significant achievements in logic.

Here's the philosophers' take on Kurt Gödel:


I am, however, curious as to how other problems in computational complexity figure in philosophy, e.g., P vs. NP and NP-completeness. Wish I had the time to read the essay.

I haven't read the paper, but here's one example I've seen. Assume for the moment that the universe is closed, complete (there is nothing but our universe), and deterministic under the hood. Obviously, nobody in the universe has free will, right? Because it's all closed and all outcomes are predetermined.

But maybe that's not a useful definition of free will. The conventional, fuzzy definition of free will has an omniscient narrator in it; "I do not have free will if the omniscient narrator knows in advance everything I will do" is a reasonable expansion of the conventional idea. But in my hypothesized universe, there is no omniscient narrator. There are only various entities with various degrees of computational power, with a varying but rather quantifiable amount of information that any entity can obtain about any other.

Suppose it can be demonstrated that it is computationally infeasible for any entity to predict the actions of any other entity of human scale or greater, because the target is simply too complex, even if the predictor is granted all the remaining resources in the universe for the computation. Or suppose it is demonstrated to be quite easy. Either way, that would be an interesting philosophical contribution, no?

(I'm just sketching the idea here. I'm not trying to defend it or attack it. Oh, and one of the foundations of philosophy is that there is never One True Definition; all loaded terms in this post are ultimately ill-defined, and I've avoided the distraction of even beginning to nail them down on purpose.)

Your suggestion confuses an epistemic limitation with a metaphysical one. Just because no one knows what we're going to do next doesn't mean there isn't a fact of the matter about it.

While the idea of God looking down on us from on high is a compelling story, if you take God away but leave determinism it doesn't do anything to resolve the problem: our actions are still pre-determined and cannot be changed.

Of course, whether or not that makes us free is a further question. A compatibilist [1] would claim that we are.

[1] http://plato.stanford.edu/entries/compatibilism/

"Confuses" is one way to read it. "Shines a spotlight on the intersection of" would be a somewhat friendlier way to read it. Is a fact that can't be known, even in theory, even sensibly a "fact"? I deny your implicit claim that this isn't an interesting question because the answer is transparently obvious.

Epistemology becomes much richer when fed with mathematical proofs of statements like this, and some older snap answers at least have to be reconsidered.

I don't think it's by any means uninteresting, and I agree that complexity theory has a role to play in enriching epistemology. That's why I said elsewhere in this discussion that I think this kind of work is important.

However, it's not in any way obvious that there are metaphysical implications to the limits this sort of result would place on knowledge. Facts are (for present purposes) simply to be identified with physical states of affairs. If some physical state of affairs is unknowable (and "System S will be in state R at time T" certainly looks like a physical state of affairs), does that mean it does not (or will not) actually obtain? I have a hard time seeing how one could make that argument, although of course that shouldn't stop you, if you think you have a good angle!

To address the particular example under discussion: if you take free will to consist of the impossibility of predicting an agent's actions and conclude that because this prediction is impossible, we are free, you still need to provide a further argument that your definition of free will is the correct one.

I'm not prejudging whether or not you could come up with a convincing argument to that effect—it might well be possible. But to my mind, the point of the omniscient narrator example is to add determinism to the system: because our actions are known in advance, they can be known in advance; if they can be known in advance, they must be determined. I take this to be the implicit argument when people suggest that we're not free because God knows what we will do.

Obviously there are different ways in which we can understand the modalities of time, and I won't try to pretend that this isn't a contentious area.

Well, whether what we can't know in advance can still be considered a fact depends on your core worldview (e.g., compatibilist, determinist, fatalist, etc.).

I subscribe to the belief that there are four types of knowledge and beliefs: ontological subjectivity, ontological objectivity, epistemological subjectivity, and epistemological objectivity. Various thoughts can be categorized into each one of these depending on what is being expressed.

Compatibilism is just one of those "answers" that defines away the problem. "Oh, if we just say that free will is determined then problem solved! I'm a genius!"

Also, FWIW omniscience is not logically incompatible with free will.

I think that's a bit unfair to the compatibilist position. Ultimately an account of free will should include a definition of the concept, and an explanation of how it figures into our intuitions about particular cases. In particular it should explain situations in which the given definition conflicts with our intuitions, and obviously determinism is a big part of that.

The incompatibilist argument is at heart very straightforward: free will is incompatible with determinism; determinism is true; therefore free will does not exist. When pressed on the second premise, the incompatibilist can further assert that indeterminism is incompatible with free will, and that determinism and indeterminism exhaust the possibilities (this is just an instance of the law of the excluded middle). Therefore regardless of which one of them holds, free will does not exist.
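
The dilemma form of that argument can be written out formally. Here is a minimal sketch in Lean (my own formalization, not from the thread), with `D` standing for "determinism holds" and `F` for "we have free will"; the substantive premises `h₁` and `h₂` are simply assumed, and excluded middle supplies the case split:

```lean
-- Sketch: if free will is incompatible with determinism (h₁) and with
-- indeterminism (h₂), then by excluded middle free will fails either way.
theorem incompatibilist (D F : Prop)
    (h₁ : D → ¬F) (h₂ : ¬D → ¬F) (em : D ∨ ¬D) : ¬F := by
  cases em with
  | inl hd  => exact h₁ hd
  | inr hnd => exact h₂ hnd
```

As the comment notes, the formal skeleton is trivially valid; all the philosophical work lives in justifying the premises `h₁` and `h₂`, which require a definition of free will.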

However, even this argument requires a definition of free will, in order to demonstrate its incompatibility with (in)determinism. It thus suffers from the same 'problem' that you allege compatibilism does: the compatibilist can simply say that the incompatibilist is defining the problem into existence.

I'm curious to know what you think an answer which didn't suffer from the problem you outline would look like.

> They already do. Incompleteness Theorem ("some statements can neither be proved nor disproved", e.g., the Halting Problem) is one of the most significant achievements in logic.

Aaronson covers this, pointing out that those results are both in computability theory, not complexity.

Ah, thanks for the correction. I always thought of complexity as being a subset of computability theory (much like NP-Complete problems are a subset of Complete problems), but you are right-- they're separate disciplines.

Will need to read this, of course. My fault for replying before at least attempting to scan the full essay linked from the post.

> (much like NP-Complete problems are a subset of Complete problems)

That sentence is messed up in a bunch of ways. You probably meant: "Turing-undecidable problems are a subset of NP-hard problems."

Or perhaps he meant that NP-complete problems are a subset of NP-hard problems? But that wouldn't be analogous...

(Actually, are uncomputable problems necessarily NP-hard? This sounds obvious at first, but then you realize, wait, why would the reduction be polynomial time? If we assume our undecidable problem is at least as hard as the halting problem there's an obvious constant-time reduction, but what if it's intermediate? Is this something that's known?)

Uncomputable problems are not necessarily NP-hard. You can, for example, encode a version of the halting problem so that the instances are simply very long. Consider the language L defined by: k is in L iff k = 2^2^2^n for some n and the nth Turing machine halts on the blank tape (under some reasonable enumeration of Turing machines). To feed an NP-hard problem to an L-oracle, one would need to expand the length of the problem far beyond polynomial size, so this certainly can't be done in polynomial time.
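
The length blow-up is easy to make concrete (my own illustration, not from the thread): the instance encoding the nth machine is the number 2^(2^(2^n)), whose binary representation has 2^(2^n) + 1 bits — doubly exponential in n — so a polynomial-time reduction could not even write such an instance down.

```python
def instance_length_bits(n: int) -> int:
    """Bit-length of the padded instance k = 2^(2^(2^n)) encoding machine n."""
    k = 2 ** (2 ** (2 ** n))
    return k.bit_length()  # equals 2^(2^n) + 1

# The instance length explodes doubly exponentially in n:
for n in range(3):
    print(n, instance_length_bits(n))  # 0 -> 3, 1 -> 5, 2 -> 17
```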

Ah, good one. Hooray for padding tricks!

I hate it when a pedantic correction backfires. I should have been more careful. :)

You're right to raise that question. It's a classical result of computability theory that there exists an undecidable problem that is strictly easier than the halting problem: http://en.wikipedia.org/wiki/Turing_degree#Post.27s_problem_...

Also I just realized that just because something is at least as hard as the halting problem, doesn't necessarily mean it stays so when you restrict to polynomial-time reductions. So maybe even what I stated above isn't true. :-/ Well, the halting problem is NP-hard, at any rate. :P

How could undecidable problems possibly be a subset of NP-hard problems? All the complexity classes fall within the realm of computable problems.

No — NP-hardness says nothing about solvability. A problem is NP-hard if every problem in NP reduces to it in polynomial time, and nothing in that definition requires the problem itself to be decidable; the halting problem, for instance, is NP-hard and undecidable.

> I always thought of complexity as being a subset of computability theory

But while that's certainly true in a sense, that doesn't imply what you stated -- just because they care about computability theory doesn't mean they care about this aspect of it.

No, incompleteness and its generalizations fall within computability theory, not complexity theory.

In regard to your second point, you say "other problems in computational complexity". Again, you mentioned computability theory, not CCT. They are different endeavors. Aaronson was careful to say that this is not another essay on philosophy and computability.

The incompleteness theorem has an interesting "workaround": http://stirling-westrup-tt.blogspot.com/2011/07/tt-ns-2823-u...

This is basically aimed at analytic philosophers. Continental philosophers generally aren't interested in this sort of thing.

What evidence do you have for this?

It's interesting that one of the references Aaronson cites [63] is the pg essay "How to do philosophy" http://www.paulgraham.com/philosophy.html

This looks fascinating, and in yours truly it is guaranteed at least one reader, once I have some time to devote to it. I think there's a lot of room for interesting work in this area.

The problem with Gödel and friends is that they point to the limits of rationality. Without a transcendent rationality (and morality) upon which to discourse, philosophers are out of a job, so to speak. If you use logic to understand everything, the last thing you want to deal with is somebody telling you the limits of logic. So they blow him off.

At least I think the above is true for "analytic" philosophers. The "continentals", on the other hand, have been engaged in an extended attack on transcendent rationality since Nietzsche, who said, to paraphrase, that every transcendent metaphysics was an attempt to justify a morality, and every morality facilitated a (usually inarticulate) "will to power." (This line sort of starts with Hegel, who posited that rationality emerged from history and the development of a collective human spirit, but wasn't "out there" before that.) Gödel seems like ammunition for these guys, but I don't think they have picked him up much. I guess if you are anti-rationalist, the last thing you read up on is advanced logic, even if it is so advanced that it comes to its own frontier.

Um, OK. There's a lot wrong here.

For starters, philosophers are plenty comfortable with the limits of rationality, and were poking at that long before Gödel came along. Similarly, "using logic to understand everything" is a pretty sweeping generalization, and one which quite likely commits an equivocation the way you're trying to use it.

As for analytic philosophy, well, you seem to have a lot of misconceptions about what exactly that means. Analytic philosophy's certainly done its fair share of dumb things, but blowing off Gödel for showing "the limits to logic"? Not sure where you get that one from; for the most part, analytic philosophers have been spending the last century or so on things like the problem of demarcation; Gödel's work with respect to formal systems is basically irrelevant to that and to plenty of other things philosophers spend their time on.

I believe Alain Badiou might be an exception here.


The book "The Mathematics of Novelty" seems like a pretty good introduction to Badiou. As with most non-fiction published by re.press, it is Creative Commons: http://re-press.org/books/the-mathematics-of-novelty-badiou%....

I've always considered philosophy a sort of struggle of the mind for its own life. The introspective mind somehow can't validate itself without somehow having the absolute last say about something.

Given this track, a philosophical mind will produce a lot of thoughts, models, and theories of how anything that already exists could exist in the first place. That is itself an interesting mind-game that in some cases might even produce an insight that is applicable to something real, but generally that's where it should stay.

The mind can't think itself into existence, even if it wants to believe it can.

If it tries, the result resembles someone attempting to describe a forest while only being allowed to draw straight lines in a single color. At best it could be exemplary line art; at worst it's just ridiculous and childish, and most people will turn the other way.

It seems nobody has mentioned Gregory Chaitin[1] yet, and in my opinion that's kind of mandatory when discussing this very topic. So now it's done :-). I like his work a lot. He's not really working on this subject anymore (now he's developing a new field called metabiology, which is also fascinating but not very relevant here), but he used to work on Algorithmic Information Theory, and he wrote a lot of books and essays on the subject we're discussing here (you can find almost everything on his webpage). _The Limits of Mathematics_[2] might be the most relevant to Hacker News users.

[1] http://www.cs.umaine.edu/~chaitin/
[2] http://news.ycombinator.com/item?id=1725936

> It seems nobody has mentioned Gregory Chaitin[1] yet

With good reason. The logician Torkel Franzen said it well in "Gödel's Theorem: An Incomplete Guide" [Chapter 8, p. 148]: "Chaitin's claim to have discovered 'the amazing fact that some mathematical statements are true for no reason' has no apparent content, but seems rather to be based on the general associations surrounding the word 'random'."

Unabridged work posted on the Electronic Colloquium on Computational Complexity: http://eccc.hpi-web.de/report/2011/108/

One interesting perspective on complexity is that of Ashby and his law of requisite variety: http://en.wikipedia.org/wiki/Variety_(cybernetics)#The_Law_o...

This work later manifested in Stafford Beer's creation of a control center in Chile during the '70s (http://en.wikipedia.org/wiki/Project_Cybersyn).

I think irreducibility is well received in philosophy. Evolutionary biology, not so much, for political reasons. Sure, there are plenty of models that incorporate emergent phenomena, but it's a heated area.
