What precisely are you lot referring to?
I'm familiar with the Halting Problem, and glancingly familiar with Ω / Chaitin's constant(s). But I'm not familiar with any kind of uncomputable sequence whose first few binary digits we know how to obtain, that has profound things to say about math, and that is somehow related to the Halting Problem.
And trying to Google it doesn't help as I just get search result noise about the Halting problem itself with no reference to any special kinds of sequence.
See my other reply. It’s not Chaitin’s constant but the mutual information between ZFC and the halting sequence, which is precisely I(ZFC : H) bits.
If you had H, you could optimally compress any non-random string of data of length n in O(n) time (counting each oracle lookup as one step). The rough sketch of how you would do this is to create brute-force search programs and then, instead of actually running them, just look up whether they halt or not via the halting sequence. By chaining together a few of these μ-operator functions, you can build up the whole compressed string one bit at a time, as in the sketch below.
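To make that concrete, here's a rough Python sketch of the search. Both `halts` (in principle, a lookup into H) and `search_program` (a constructor for the dovetailing brute-force searcher described above) are hypothetical stand-ins for illustration, not real APIs:

    # Sketch: optimal compression given access to a halting oracle.
    # halts(q): hypothetical oracle -- does program q (on a fixed
    #   universal machine U) ever halt? In principle, a lookup into H.
    # search_program(x, k, prefix): hypothetical constructor returning
    #   a program that dovetails all length-k U-programs starting with
    #   `prefix` and halts iff one of them halts with output x.

    def shortest_program(x, halts, search_program):
        # Find the minimal program length k: one halting query per
        # candidate length.
        k = 0
        while not halts(search_program(x, k, prefix="")):
            k += 1
        # Pin down the k bits one at a time: try extending the prefix
        # with 0; if no length-k program with that prefix outputs x,
        # the next bit must be 1.
        prefix = ""
        for _ in range(k):
            if halts(search_program(x, k, prefix=prefix + "0")):
                prefix += "0"
            else:
                prefix += "1"
        return prefix  # a shortest U-program for x, so len(prefix) = K(x)

Each committed bit costs one oracle query, plus about K(x) queries to find the length, which is where the O(n) figure comes from.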
Since we only have a few bits of H, we can't compress at that level of magic, but the bits we do have capture everything ZFC can prove about the halting of programs, which for the most part covers every algorithm we use in daily life.
> What precisely are you lot referring to? [...] glancingly familiar with Ω / Chaitin's constant
Huh? I'm not referring to Chaitin's constant; the first few digits of that aren't going to help with much. I'm referring specifically to the halting sequence H as I mentioned in my post. See Vitányi's "Kolmogorov's Structure Functions and Model Selection" or any of Levin's papers on algorithmic complexity. (Vitányi uses a definition for H where a program's input is the empty tape rather than itself, but this is mainly a difference in convention).
You can take a formal logical theory F (e.g., Q, PA, ZFC, ...) and, since its proofs are recursively enumerable, enumerate all proofs, looking for one whose conclusion is "p_i halts" or "p_i never halts" in the language of F, where p_i is the i-th program (indexed lexicographically on a reference universal Turing machine).
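In Python-flavored pseudocode, with `enumerate_proofs` and `conclusion` as hypothetical stand-ins for a real proof checker over F's language:

    # Sketch: extracting one bit of H from a theory F.
    # enumerate_proofs(F): hypothetical generator yielding every valid
    #   F-proof in order of length (possible because proofs are r.e.).
    # conclusion(proof): the statement that `proof` proves.

    def bit_of_H(i, F):
        # Returns H[i] if F settles p_i's halting behavior; otherwise
        # this loop runs forever, mirroring the semi-decision procedure.
        for proof in enumerate_proofs(F):
            if conclusion(proof) == f"p_{i} halts":
                return 1
            if conclusion(proof) == f"p_{i} never halts":
                return 0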
As an example, for any halting program p, an unbounded search through PA's proofs will eventually find a proof that p halts (assuming soundness of PA). But there are certain non-halting programs for which PA cannot prove the non-halting behavior whereas ZFC can. So in terms of mutual algorithmic information I(a : b) = K(a) + K(b) - K(a, b), PA's inability to prove the non-halting behavior of such a p means I(p : PA) is essentially 0, whereas I(p : ZFC) > 0 since ZFC can prove it. More specifically, K(p) - K(p | ZFC) > 0, given that the description of ZFC is minimal.
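A standard concrete instance of such a p (one example among many, reusing the hypothetical proof-checker stubs from the sketch above) is the program that hunts for an inconsistency in PA:

    # p never halts iff PA is consistent. PA cannot prove p's
    # non-halting (that would amount to proving Con(PA), barred by
    # Goedel's second incompleteness theorem), but ZFC can, since
    # ZFC proves Con(PA).

    def p():
        for proof in enumerate_proofs("PA"):
            if conclusion(proof) == "0 = 1":
                return  # halt: found a contradiction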
The above implies that I(ZFC : H) > I(PA : H), where H is the halting sequence, so we can say that ZFC is "more powerful" than PA in its ability to prove non-halting behavior of Turing machines. But a minimal description of ZFC or PA is still a finite amount of information, and in fact Chaitin's incompleteness theorem states that for any (consistent, effectively axiomatized) formal system F, there is a constant C_F such that F cannot prove, for any particular string, that its Kolmogorov complexity exceeds C_F. I suspect C_F is pretty close to K(F), but I haven't seen a proof of that fact anywhere. This is essentially another way to state Gödel's first incompleteness theorem, with additional detail about the limit of what a formal system F can or cannot do.
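The usual proof of Chaitin's theorem is a Berry-paradox program along these lines (again with the hypothetical stubs above; `parse_complexity_claim` is an assumed parser for F-statements of the form "K(s) > c"):

    # If F could prove K(s) > C for some string s, this short program
    # would find that proof and print s. But then s would be output by
    # a program far shorter than C (this one), contradicting the
    # proved claim -- so F proves no such statement once C exceeds
    # roughly this program's own description length, i.e., C_F.

    C = 10**6  # comfortably larger than this program's description

    def berry(F):
        for proof in enumerate_proofs(F):
            claim = parse_complexity_claim(conclusion(proof))
            if claim is not None:
                s, bound = claim
                if bound > C:
                    print(s)
                    return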
So when I was referring to the few binary digits that correspond to "most of mathematics", I'm talking about the I(ZFC : H) bits of mutual algorithmic information between ZFC and H. We know there is a 748-state binary Turing machine that halts if and only if ZFC is inconsistent, i.e., one that enumerates all of ZFC's theorems searching for a contradiction (see Scott Aaronson's posts about this), but many people think the minimal size of a machine that computes all that ZFC can prove about the behavior of Turing machines is far smaller than that.