
Your question hinges on the definition of "proof" and what it means to "compute" a proof.

The shortest answer I can give is no: there are no proofs that are beyond computation. There is some subtlety here, though, and your final question of whether there is mathematics beyond computation is a much deeper one.

The question becomes far more interesting if you replace "fundamentally" with "efficiently." But that's getting ahead of myself.

Modern mathematics and logic view (formal) proofs as computation. A formal proof is a purely mechanical process that should require no creativity whatsoever to follow, just as a computation is a purely mechanical process that should require no creativity whatsoever to carry out.

If you require a non-mechanical leap (say, of creativity), then it is no longer a "proof" (each such leap corresponds to what is colloquially called a "leap of logic").

Modern mathematical standards usually mean that every informal mathematical proof should in theory be formalizable into a formal proof. If this is not possible, the informal mathematical proof is most likely insufficiently rigorous.

The creativity then lies in the construction of the proof rather than in its verification.

Note, though, that given infinite time, all proofs can also be generated mechanically. Since all proofs can be verified mechanically, you can simply generate every possible string and verify each one until you find a proof that either proves or disproves the statement in question. This is an extremely long and utterly impractical process, but it demonstrates that it is possible in theory.
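As a toy illustration (not part of the argument itself), here is generate-and-verify sketched in Python using Hofstadter's MIU system: "theorems" are strings derived from the axiom "MI" by four rewrite rules, verifying a derivation is a mechanical step-by-step check, and search is blind breadth-first enumeration. The function names are invented for this example:

```python
from collections import deque

def successors(s):
    """All strings derivable from s by one MIU rule application."""
    out = set()
    if s.endswith("I"):              # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):            # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):      # Rule 3: replace III with U
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])
    for i in range(len(s) - 1):      # Rule 4: delete UU
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])
    return out

def check_proof(steps):
    """Mechanical verification: start at the axiom, and each line
    must follow from the previous one by a single rule."""
    if steps[0] != "MI":
        return False
    return all(b in successors(a) for a, b in zip(steps, steps[1:]))

def find_proof(goal, max_depth=8):
    """Brute-force proof search: enumerate derivations breadth-first."""
    queue = deque([["MI"]])
    while queue:
        proof = queue.popleft()
        if proof[-1] == goal:
            return proof
        if len(proof) < max_depth:
            for nxt in successors(proof[-1]):
                queue.append(proof + [nxt])
    return None

proof = find_proof("MUI")
print(proof)               # ['MI', 'MII', 'MIIII', 'MUI']
print(check_proof(proof))  # True
```

Note that `find_proof("MU")` would search forever without the depth cutoff: "MU" is famously underivable in MIU, which is exactly the independence problem discussed below.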

There is a subtlety here. Mathematical statements can be independent of the set of axioms you consider when deriving a proof; that is, the axioms may not be strong enough to either prove or disprove the statement. This mechanical enumeration process cannot discover that a statement is independent of the starting axioms, since it cannot know when to stop looking for a proof or disproof if it has not yet found one. This is analogous to the halting problem: if a program has not yet halted, a computer cannot tell whether it will halt in the future (e.g. after waiting just a few seconds more) or never halt at all.
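A concrete sketch of this asymmetry, using the (still open) Collatz problem as a stand-in: a "yes, it halts" answer can be confirmed by running the process, but exhausting a step budget tells us nothing either way. (The function name is invented for the example.)

```python
def collatz_halts(n, max_steps):
    """Semi-decide 'the Collatz iteration starting at n reaches 1'.
    A 'halts' answer is definitive; 'unknown' only means the budget
    ran out -- the iteration might halt later, or never."""
    for step in range(max_steps):
        if n == 1:
            return ("halts", step)
        n = 3 * n + 1 if n % 2 else n // 2
    return ("unknown", max_steps)

print(collatz_halts(27, 1000))  # ('halts', 111)
print(collatz_halts(27, 50))    # ('unknown', 50) -- budget too small
```

Whether *every* starting value eventually reaches 1 is the Collatz conjecture; no amount of bounded simulation can settle it, which is the same wall the proof-enumeration procedure hits with independent statements.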

Hence, in a certain sense, the verification of mathematics is entirely bound by computation. The creation of it, given infinite time, is also bound by computation. However, given that we do actually care about efficiency, that's where things get interesting.

Okay, so what about the things that sibling comments touch on, such as:

1. Gödel's incompleteness theorem(s)

2. Arithmetical hierarchy (and the closely related notions of Turing jumps and Turing degrees)

3. (Not touched upon by other comments, but potentially relevant) Tarski's undefinability of truth

4. (Also not touched upon by other comments, but also an interesting one) P =? NP

1-3 apply to humans as well as machines. 4 is the only one that is, in my eyes, interesting.

1. Gödel's incompleteness theorems are statements about the limitations of formal systems, but these limitations hold for humans as well. Modern mathematics is usually said to be built on the foundations of ZFC set theory. All the constraints that Gödel's incompleteness theorems place on ZFC (namely that there will always be statements independent of ZFC, and that we cannot use ZFC to prove itself logically consistent) also hold for all humans using ZFC. The only advantage humans have is that we can decide, if ZFC is too restrictive, to switch formalisms and use something else (i.e. the creativity in constructing a proof). This is the creative spark that is not yet captured by machines (although who knows with AGI).

2. The arithmetical hierarchy is a generalization of the following phenomenon. Universally quantified statements, i.e. "for all" statements such as "all cows have spots" or "all people are mortal," are easy to disprove, since that requires only a single counterexample, but hard to prove, since you need some way to show that the statement holds for every exemplar. For existentially quantified statements, i.e. "there exists" statements such as "there exists a cow which is a sphere" or "there exists a person who is mortal," the opposite is true: they are easy to prove, since you only need to find one example, but difficult to disprove, because you need to show that every potential exemplar fails.

The arithmetical hierarchy generalizes this by examining the "difficulty" of statements in which "for all"s and "there exists"s interleave. It turns out that "there exists" corresponds to the halting problem (there exists a number of steps after which the machine halts) and "for all" corresponds to the non-halting problem (for all finite numbers of steps, the machine will not yet have halted). The hierarchy is the observation that each time you interleave a "there exists" and a "for all," you make the halting problem even "harder": an oracle that magically gives you the answer to the halting problem at a lower number of interleavings has its own halting problem that it cannot decide at a higher number of interleavings.

An example of an interleaving is the question of whether a function is total. This is equivalent to the statement "for all inputs, the function halts." However, we just said that halting is a "there exists" statement, so we can expand this to "for all inputs, there exists a number of steps such that the function halts after that number of steps." Notice the interleaving of "for all" and "there exists."
However, note again that all the limitations I've described here apply both to computers and humans.
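In the standard notation (using Kleene's T predicate, where T(e, x, s) holds when s encodes a halting computation of machine e on input x), the quantifier shapes just described look like:

```latex
\begin{align*}
\mathrm{Halts}(e,x)      &\equiv \exists s\; T(e,x,s)        && (\Sigma_1)\\
\mathrm{NeverHalts}(e,x) &\equiv \forall s\; \neg T(e,x,s)   && (\Pi_1)\\
\mathrm{Total}(e)        &\equiv \forall x\, \exists s\; T(e,x,s) && (\Pi_2)
\end{align*}
```

The subscripts count quantifier alternations: totality sits one level above halting precisely because of the extra "for all" wrapped around the "there exists."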

3. Tarski's undefinability of truth is the statement that for essentially all interesting logical systems, there is no way to write a logical predicate that takes another logical statement as an argument and is true if and only if that statement is true (essentially a logical version of the halting problem). Again, this is a problem that afflicts both humans and machines, and is rather an artifact of the attempt to formalize anything at all (the heart of logic and mathematics).
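The diagonal trick behind both Tarski's theorem and the halting problem can be caricatured in a few lines of Python (a sketch, not a proof; `make_paradox` and the toy decider are invented names): assume someone hands you a total halting decider, and construct a program that does the opposite of whatever the decider predicts about it.

```python
def make_paradox(claimed_decider):
    """Diagonalization: given any claimed halting decider, build a
    program g that does the opposite of the decider's prediction
    about g itself."""
    def g():
        if claimed_decider(g):
            while True:      # predicted "halts" -> loop forever
                pass
        # predicted "never halts" -> halt immediately
    return g

# Any concrete decider is refuted on its own diagonal program:
says_never_halts = lambda f: False
g = make_paradox(says_never_halts)
g()  # returns immediately, contradicting the prediction
```

The liar-sentence construction in Tarski's proof has exactly this shape, with "is true" playing the role of "halts."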

4. This is the interesting question. So far I've outlined a brute-force way for a computer to enumerate all proofs. This clearly does not help actual human mathematicians do mathematics. Can we do better, especially since verifying a proof requires no brute force at all? Humans certainly do better for specific instances of proofs (in much the same way that humans can prove that specific programs halt but cannot give a general formula for doing so). Is there a globally better way of doing this? Scott Aaronson delves into this far better than I could here: https://www.scottaaronson.com/papers/philos.pdf
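The verify-fast/search-slow gap is exactly the shape of P vs NP. A minimal SAT sketch in Python (function names invented for the example): checking a candidate assignment takes one linear pass, while the only search we know to be fully general tries exponentially many candidates, mirroring the proof-enumeration procedure above.

```python
from itertools import product

def satisfies(clauses, assignment):
    """Verification is cheap: one pass over the CNF formula.
    Literal 3 means x3 is True; -3 means x3 is False."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

def brute_force_sat(clauses, n_vars):
    """Search is (as far as we know) expensive: up to 2^n assignments."""
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if satisfies(clauses, assignment):
            return assignment
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
model = brute_force_sat(clauses, 3)
print(model)  # {1: False, 2: True, 3: False}
```

P = NP would mean, roughly, that whenever verification is this easy, a fundamentally better-than-brute-force search must exist; that is the formal version of "can proof search be automated efficiently."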

Thank you for taking the time to write such an elaborate and still accessible answer to my question!
