> What precisely are you lot referring to? [...] glancingly familiar with Ω / Chaitin's constant
Huh? I'm not referring to Chaitin's constant; the first few digits of that aren't going to help with much. I'm referring specifically to the halting sequence H as I mentioned in my post. See Vitányi's "Kolmogorov's Structure Functions and Model Selection" or any of Levin's papers on algorithmic complexity. (Vitányi uses a definition for H where a program's input is the empty tape rather than itself, but this is mainly a difference in convention).
You can take a formal logical theory F (e.g., Q, PA, ZFC, ...) and, since its proofs are recursively enumerable, iterate over all proofs of statements of the form "p_i halts" or "p_i never halts" in the language of F, where p_i is the i-th program (indexed lexicographically on a reference universal Turing machine).
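Concretely, the search is a dovetailing loop over candidate proofs. This is only an illustrative sketch: `check_proof` stands in for a real proof verifier for F, which I'm stubbing out here with a toy.

```python
from itertools import count

def halting_verdict(i, check_proof):
    """Search F's proofs for a verdict on program p_i.

    `check_proof(n, stmt)` is a placeholder for a real proof checker:
    it should return True iff the n-th candidate proof (under some
    fixed encoding of proofs as integers) is a valid F-proof of stmt.
    Since proofs are recursively enumerable, this loop terminates
    whenever F proves one of the two statements.
    """
    statements = (f"p_{i} halts", f"p_{i} never halts")
    for n in count():                      # enumerate candidate proofs
        for stmt in statements:
            if check_proof(n, stmt):
                return stmt

# toy stub: pretend candidate proof 7 is a valid proof that p_3 never halts
toy_checker = lambda n, stmt: n == 7 and stmt == "p_3 never halts"
print(halting_verdict(3, toy_checker))  # → p_3 never halts
```

If F proves neither statement about p_i (which, by incompleteness, must happen for some i), the loop runs forever — that's exactly the gap between F and the halting sequence.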
As an example, for any halting program p, an unbounded search through PA's proofs will eventually find a proof that p halts (assuming the soundness of PA). But there are certain non-halting programs for which PA cannot prove non-halting, whereas ZFC can. So in terms of mutual algorithmic information I(a : b) = K(a) + K(b) - K(a, b), PA's inability to prove the non-halting behavior of p implies I(p : PA) = 0, whereas I(p : ZFC) > 0 since ZFC can prove it. More specifically, K(p) - K(p | ZFC) > 0, assuming the description of ZFC used is minimal.
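K itself is uncomputable, but the shape of the identity I(a : b) = K(a) + K(b) - K(a, b) can be illustrated by substituting a real compressor for K, in the spirit of the normalized-compression-distance literature. A minimal sketch (zlib's compressed length is my stand-in here, not part of the original argument):

```python
import zlib

def C(x: bytes) -> int:
    """Compressed length: a crude, computable stand-in for K(x)."""
    return len(zlib.compress(x, 9))

def mutual_info_estimate(a: bytes, b: bytes) -> int:
    """Estimate I(a : b) = K(a) + K(b) - K(a, b), with C in place of K
    and concatenation in place of the pair (a, b)."""
    return C(a) + C(b) - C(a + b)

shared = b"the quick brown fox jumps over the lazy dog " * 20
a = shared + b"A" * 50
b = shared + b"B" * 50
unrelated = bytes(range(256)) * 4  # shares no structure with `shared`

# strings with a long common part score higher estimated mutual information
print(mutual_info_estimate(a, b) > mutual_info_estimate(a, unrelated))
```

The estimate is positive for a and b because the compressor exploits their shared prefix when compressing the concatenation, mirroring how K(a, b) < K(a) + K(b) exactly when a and b share algorithmic information.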
The above implies that I(ZFC : H) > I(PA : H), where H is the halting sequence, so we can say that ZFC is "more powerful" than PA in its ability to prove non-halting behavior of Turing machines. But a minimal description of ZFC or PA is still a finite amount of information, and in fact Chaitin's incompleteness theorem states that for any formal logical system F, there is a constant C_F such that F cannot prove of any particular string that its Kolmogorov complexity exceeds C_F. I suspect C_F is pretty close to K(F), but I haven't seen a proof of that anywhere. This is essentially another way to state Gödel's first incompleteness theorem, with additional detail about the limits of what a formal system F can or cannot do.
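The standard proof of Chaitin's theorem is a Berry-paradox program: scan F's theorems for one asserting high complexity. A sketch, with theorems encoded as toy tuples of my own devising rather than any real proof format:

```python
def first_provably_complex_string(theorems, C):
    """Scan an enumeration of F's theorems, here encoded as toy tuples
    ("K_gt", s, bound) meaning 'F proves K(s) > bound'; return the
    first s whose proven bound exceeds C.

    Chaitin's argument: this scanner together with the number C is a
    description of s of length roughly log2(C) + O(1) bits, so if F is
    sound, the scan can never succeed once C >= C_F -- otherwise s
    would have a description far shorter than its proven complexity.
    """
    for kind, s, bound in theorems:
        if kind == "K_gt" and bound > C:
            return s
    return None  # in the real setting the scan simply never terminates

toy_theorems = [("K_gt", "0011", 4), ("K_gt", "110101", 9)]
print(first_provably_complex_string(toy_theorems, 8))  # → 110101
```

The contradiction only bites for large C; for small C the scan can legitimately succeed, which is why F can still prove some complexity lower bounds below C_F.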
So when I was referring to the few binary digits that correspond to "most of mathematics", I'm talking about the I(ZFC : H) bits of mutual algorithmic information between ZFC and H. We know there is a 745-state binary Turing machine that enumerates the theorems of ZFC, halting iff it finds a contradiction (see Scott Aaronson's posts about this), but many people think the minimal size of a machine that computes all that ZFC can prove about the behavior of Turing machines is far smaller than that.