This might be a very naive question, but I was wondering: are there mathematical proofs that fundamentally cannot be computed? And I don't mean that in the Ackermann-function sense of the words; I mean, is there mathematics that is inherently beyond computation? (still watching the video)

EDIT: follow-up, since the replies helpfully point out that yes, there is: does this limit this kind of software, or can theorem-proving software get around it with human intervention?

Yes, we have proofs (which probably can be proven formally!) that there are things which cannot be proven. That is essentially the incompleteness theorem.

The smallest "practical" example I know is a finite state machine with a proof that we cannot prove (within a specific set of axioms) whether it will stop or not. It's not even very big: 1919 states.

> Yes, we have proofs (which probably can be proven formally!) that there are things which cannot be proven.

That is a very subtly different question than the one I was trying to ask. Or... well... I think it is. What I was trying to ask was whether there are formally provable theories that cannot be expressed in a way that would let a computer "compute" the proof. I think that this is not the same as proving that there are unprovable theories.

EDIT: I just noticed my phrasing was off in the original question... gah, this reminds me of conversations with my mathematician friends IRL, where I would always trip over my own sloppy use of human language. What I mean by "computing" is verification, not execution.

> What I was trying to ask was if there are formally provable theories that cannot be expressed in a way that would let a computer "compute" the proof.

Do you mean "find the proof" or "verify the proof"? That difference is critical.

Theorem provers that work to find proofs have made many strides, but as of today humans easily run rings around them. That's not to say it can never be done. Years ago good Go players could trounce machines, and that is no longer true.
So: today, most proofs cannot practically be found by a machine, but machines are getting better, and there's no fundamental reason computers can't become excellent at it.

Verifying proofs is something a computer is par excellence at. To verify a proof, the proof checker simply needs to verify that each step is valid. No human can compare to the ability of a computer to be picky :-).

In some systems it may not be possible to express a proof of a particular claim in a reasonably short sequence. But that would be a limitation for both humans and computers. In both cases, the answer would be the same: find a way to switch to a "different" system where it's easier to express the proof. In some sense normal math does this; people routinely define new terms, and then build on those definitions so that everything else is simpler. I really hope machine learning will succeed in producing systems that surpass contemporary mathematicians not only in finding proofs, but also in suggesting definitions and theorems to simplify a database of mathematics.

What you're talking about can be formalized via https://en.wikipedia.org/wiki/Arithmetical_hierarchy#The_ari... where your notion of "computable" likely corresponds to some set between (Sigma_1 intersect Pi_1) and (Sigma_1 union Pi_1), depending on some nuance. In any case, you can show by diagonalization that there are statements strictly outside these sets--that is, first-order propositions that cannot be verified or refuted by computer.

The question was whether there are statements provable by humans but not computers, which is either trivially impossible or guaranteed, depending on whether you allow the computer to use the same axioms as the people.

It depends on how you interpret it; I interpreted it as asking if there are provable statements that can't be "computed" directly.

Uh... well...
can I claim the "sorry, not a mathematician, so my reasoning about these matters is quite sloppy, relatively speaking" defense? (The responses here have been very nice despite that though, thank you everyone!)

I cannot pretend to truly follow the actual mathematics you linked and summarized (I appreciate the attempt!), but the conclusion in the last sentence answers my intended question. Thank you!

It's a mistake, though. Humans can't prove any of those propositions either.

Oh. That's kinda typical, hahaha. I guess this topic can get quite subtle and tricky, even for trained mathematicians?

> That is a very subtly different question than the one I was trying to ask. Or.. well.. I think it is. What I was trying to ask was if there are formally provable theories that cannot be expressed in a way that would let a computer "compute" the proof.

It depends on the logical system + axioms that your formally provable theory is with respect to, and "how soon" you design the computer that verifies the proof.

Recall that a proof is a finite sequence of applications of rules of inference on zero or more axioms to derive a new statement, which may be regarded as an axiom for applications later in the sequence, so that the last step derives the theorem.

If there are a countable number of rules of inference and a countable number of axioms, and each rule of inference has finite valence (i.e. it infers from finitely many formula-patterns as hypotheses), then yes, there is a way of encoding the rules of inference and possible axioms of the logical system such that a Turing machine (TM) can find a proof of any provable theorem (i.e. verify it), simply by enumerating axioms and rules of inference in the positions of a sequence that constitutes the proof, while checking whether the axioms legally fit the rules of inference.
(It is always possible to check in finite time whether a (finite-length) formula satisfies the pattern it needs to satisfy for a rule of inference.)

On the other hand, if there are an uncountable number of rules of inference or an uncountable number of axioms, there may be no computer that can verify all the formally provable theories. For example, trivially, if there are an uncountable number of axioms, then the axioms admit no encoding; that is, there is no set of finite strings over a finite alphabet that can uniquely label all the axioms. But an axiom is itself a formally provable theorem; its proof is just introducing itself, yet that proof cannot be represented for all the axioms. (This depends on the definition of proof; the example may fail if we accept a zero-length sequence as a proof, but in general it illustrates that there are simple theorems in some formal systems with uncountably many axioms that cannot be computed.)

Yet, if we know beforehand the theorem and the rules of inference and axioms used in some proof of it, and the rules of inference all have finite valence, then that proof only invokes finitely many rules of inference on finitely many axioms, so we can encode a fragment of the logical system + axioms + expressible formulae such that the TM can verify all formally provable theorems in that fragment, including the desired theorem.

It's not an FSM but a Turing machine. It is quite easy to prove whether an FSM with 1919 states reaches a certain state or not: you just run it, and within 1919 steps it either reaches the target state or returns to a previous state that it already visited, creating a cycle.

I think Gödel's has been proven in Coq, but I also heard that it relies on library functions that don't make sense to use when proving incompleteness... and that if you weaken what's in the library to account for it, I'm not sure if the proof still works. Then again, I think that thought came from a notable crank, so I'm not sure it's true.
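The run-until-a-state-repeats argument for finite state machines above can be simulated directly. The transition table below is a made-up example; with n states, the loop takes at most n steps before a state repeats, so it always terminates.

```python
def reaches(transitions, start, target):
    """Deterministic FSM reachability: simulate until the target appears
    or a state repeats (with n states, at most n steps)."""
    state, seen = start, set()
    while state not in seen:
        if state == target:
            return True
        seen.add(state)
        state = transitions[state]
    return False  # entered a cycle without ever hitting the target

# Hypothetical 4-state machine: 0 -> 1 -> 2 -> 1 is a cycle; 3 is unreachable.
t = {0: 1, 1: 2, 2: 1, 3: 3}
assert reaches(t, 0, 2)
assert not reaches(t, 0, 3)
```

This is exactly why reachability is decidable for FSMs but not for Turing machines: a TM's tape gives it unboundedly many configurations, so "wait for a repeat" never has to terminate.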
In what sense is a proof valid yet can't be written down?

A proof of a statement A may be valid in one formal system X but not capable of being written down in another formal system Y (and, more strongly, there may be no proof at all that can be written down in Y that proves A or not-A, even if the statement A itself can be written down in Y).

It helps to be precise. Stating the incompleteness theorem imprecisely generally just makes people confused about what it could even mean. Strictly speaking, an incompleteness theorem has to be proven within a particular formal system X (the "meta-system"). It expresses--within X--that "for every consistent formal system Y expressible in X, there is a statement A expressible in Y such that there is no proof in Y of A or not-A". Here Y is the "object system".

The statement A is only unprovable within Y in particular. If it is expressible in X as well, it may be provable (or disprovable) within X. Indeed, one can trivially construct a formal system in which A is provable and which can express Y, by adding A as an axiom to X.

The real, philosophically significant, non-particular-formal-system-specific import of the incompleteness theorem is not just that there are unprovable statements (which isn't really true, and is in fact kind of mathematically meaningless, outside the context of a specific formal system, given that any statement or its negation can be taken as an axiom, and all we have to judge such choices is a non-mathematical notion of "self-evidency") but that (as the name suggests!) no formal system is complete--every formal system can express statements that it can't prove. It's all about that universal quantification.
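The "just add A as an axiom" move can be made concrete with a toy proof checker. Everything here is an illustrative assumption, not any standard system: formulas are atoms or `("->", A, B)` tuples, and the only rule of inference is modus ponens. The point is that provability is always relative to an axiom set, so extending the axioms changes what is provable.

```python
# Toy proof checker: a proof is valid if every step is an axiom or follows
# from two earlier steps by modus ponens. Encoding and axioms are made up.

def check_proof(axioms, proof):
    derived = []
    for step in proof:
        ok = step in axioms or any(
            q == ("->", p, step) for p in derived for q in derived
        )
        if not ok:
            return False
        derived.append(step)
    return True

Y = {"P", ("->", "P", "Q")}                           # "system Y"
assert check_proof(Y, ["P", ("->", "P", "Q"), "Q"])   # modus ponens works
assert not check_proof(Y, ["A"])                      # "A" unprovable in Y
assert check_proof(Y | {"A"}, ["A"])                  # provable once A is an axiom
```

Note how mechanical the checker is: it never needs creativity, only pattern matching against earlier lines, which is why verification is the easy half of the problem.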
Yes, lots of them, more than most people might guess. It's hardly naive either, considering so much has been learned about the limitations of mathematics only in the last century.

Highly recommend this short video: https://youtu.be/O4ndIDcDSGc

The title mentions Gödel, but it's a broad treatment that really discusses a lot of the context of the subject. Keep in mind there are multiple reasons why proofs can become problematic. Gödel deals with some of them; other problems go unproven for centuries until our techniques catch up. Some proofs are impractical, being nearly infinitely complex, etc.

There's an entire theory of non-computable functions. Probably the most vivid introduction is the "busy beaver" function.

Your question hinges on the definition of "proof" and what it means to "compute" a proof. The shortest answer I can give is no: there are no proofs that are beyond computation. There's some subtlety here, and your final question of whether there is mathematics that is beyond computation is a much deeper question. The question becomes far more interesting if you replace "fundamentally" with "efficiently." But that's getting ahead of myself.

Modern mathematics and logic view (formal) proofs as computation. A formal proof is a purely mechanical process that should require no creativity whatsoever to follow, just as a computation is a purely mechanical process that should require no creativity whatsoever to carry out. If you require a non-mechanical leap (say of creativity) then it is no longer a "proof" (each such leap corresponds to a "leap of logic" colloquially).

Modern mathematical standards usually mean that every informal mathematical proof should in theory be formalizable into a formal proof. If this is not possible, the informal mathematical proof is most likely insufficiently rigorous. The creativity then lies in the construction of the proof rather than in its verification.

Note, though, that given infinite time, all proofs can also be generated mechanically.
Given that all proofs can be verified mechanically, you just keep generating every possible string and verifying it until you find a proof that either proves or disproves your statement in question. This is an extremely long and utterly impractical process, but it demonstrates that in theory this is possible.

There is a subtlety here. Mathematical statements can be independent of the set of axioms that you consider when deriving a proof; that is, the axioms may not be strong enough to either prove or disprove the statement. This mechanical enumeration process cannot discover that a statement is independent of the starting axioms, since it cannot know when to stop looking for a proof or disproof if it has not yet found one. This is analogous to the halting problem, where a computer cannot tell, if a program has not yet halted, whether it will halt in the future (e.g. by waiting just another few seconds more) or whether it will never halt.

Hence, in a certain sense, the verification of mathematics is entirely bound by computation. The creation of it, given infinite time, is also bound by computation. However, given that we do actually care about efficiency, that's where things get interesting.

Okay, so what about the things that sibling comments touch on? Such as:

1. Gödel's incompleteness theorem(s)
2. The arithmetical hierarchy (and the closely related notions of Turing jumps and Turing degrees)
3. (Not touched upon by other comments, but potentially relevant) Tarski's undefinability of truth
4. (Also not touched upon by other comments, but also an interesting one) P =? NP

1-3 apply to humans as well as machines. 4 is the only one in my eyes that's interesting.

1. Gödel's incompleteness theorems are statements about the limitations of formal systems, but this limitation holds for humans as well. That is, modern mathematics is usually stated to be built on the foundations of ZFC set theory.
All the constraints that Gödel's incompleteness theorems place on ZFC (namely, that there will always be statements independent of ZFC, and that we cannot use ZFC to prove itself logically consistent) also hold for all humans using ZFC. The only advantage humans have is that we can decide, if ZFC is too restrictive, to switch formalisms and use something else (i.e. the creativity in constructing a proof). This is the creative spark that is not yet captured by machines (although who knows with AGI).

2. The arithmetical hierarchy is a generalization of the following phenomenon. Universally quantified statements, i.e. "for all" statements such as "all cows have spots" or "all people are mortal," are easy to disprove, since that requires only a single counterexample, but hard to prove, since you need some way to show that the statement holds for all exemplars. For existentially quantified statements, i.e. "there exists" statements such as "there exists a cow which is a sphere" or "there exists a person who is mortal," the opposite is true: they are easy to prove, since you only need to find one example, but difficult to disprove, because you need to show that every potential exemplar fails.

The arithmetical hierarchy generalizes this by examining the "difficulty" of statements where "for all"s and "there exists"s interleave with each other. It turns out that "there exists" corresponds to the halting problem (there exists a number of steps after which the machine halts) and "for all" corresponds to the "not-halting" problem (for all finite numbers of steps, the machine will not have yet halted).
The hierarchy is the observation that each time you interleave a "there exists" and a "for all," you make the halting problem even "harder" (an oracle that magically gives you the answer to the halting problem at a lower number of interleavings will have its own halting problem that it cannot decide at higher numbers of interleavings). An example of an interleaving is the question of whether a function is total. This is equivalent to the statement "for all inputs, the function halts." However, we just said that halting is a "there exists" statement, so we can rewrite the statement as "for all inputs, there exists a number of steps such that the function halts after that number of steps." Notice the interleaving of "for all" and "there exists." Note again, though, that all the limitations I've described here apply both to computers and to humans.

3. Tarski's undefinability of truth is the statement that for essentially all interesting logical systems, there is no way to write a logical predicate that takes another logical statement as an argument and is true if and only if that logical statement is true (essentially a logical version of the halting problem). Again, this is a problem that afflicts both humans and machines, and is an artifact of the attempt to formalize something (the heart of logic and mathematics).

4. This is the interesting question. So far I've outlined a brute-force way for a computer to enumerate all proofs. This clearly does not help actual human mathematicians do mathematics. Can we do better than that, especially since we don't need to brute-force the validity of a proof? Humans certainly can do better for specific instances of proofs (in much the same way that humans can prove that specific programs halt, but cannot give a general formula for doing so). Is there a globally better way of doing this? Scott Aaronson delves into this far better than I could here: https://www.scottaaronson.com/papers/philos.pdf
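The brute-force "generate every string and verify it" procedure from earlier in this answer can be sketched concretely. The formula encoding (atoms and `("->", A, B)` tuples), the single modus ponens rule, the axioms, and the finite formula pool are all toy illustrative assumptions; real proof enumeration would range over all well-formed strings.

```python
from itertools import count, product

def valid_proof(axioms, proof, goal):
    """Each step must be an axiom or follow from two earlier steps
    by modus ponens; the final step must be the goal."""
    derived = []
    for step in proof:
        if step not in axioms and not any(
            q == ("->", p, step) for p in derived for q in derived
        ):
            return False
        derived.append(step)
    return bool(proof) and proof[-1] == goal

def search(axioms, formulas, goal):
    """Generate-and-verify: enumerate candidate derivations by length.
    If `goal` is independent of the axioms, this loops forever -- the
    halting-problem analogy from the text."""
    for n in count(1):
        for candidate in product(formulas, repeat=n):
            if valid_proof(axioms, list(candidate), goal):
                return list(candidate)

axioms = {"P", ("->", "P", "Q")}
formulas = ["P", "Q", ("->", "P", "Q")]
assert search(axioms, formulas, "Q") == ["P", ("->", "P", "Q"), "Q"]
```

The asymmetry in the answer shows up directly: `valid_proof` always terminates, while `search` has no way to report "independent" rather than running forever.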
Thank you for taking the time to write such an elaborate and still accessible answer to my question!

There is a correspondence between proofs and programs (Curry-Howard), so I guess if there are non-computable functions, there are non-computable proofs?
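The Curry-Howard correspondence mentioned above can be seen in a single line of a proof assistant: the proposition is a type, and the proof is a program inhabiting it. A minimal illustration in Lean 4 syntax (the theorem name is arbitrary):

```lean
-- Curry–Howard in miniature: this term is simultaneously the constant
-- function (the K combinator) and a proof of the proposition A → B → A.
theorem k_combinator (A B : Prop) : A → B → A :=
  fun a _ => a
```

Under this reading, proof checking is type checking, which is why verification is mechanical; proof search, by contrast, is program synthesis, which is where the hard part lives.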