
Who Can Name the Bigger Number? - jeremynixon
http://www.scottaaronson.com/writings/bignumbers.html
======
GotAnyMegadeth
When I was around 14 years old I was babysitting a kid who was 3 or 4 or
something. He came up to me and said "I bet I can say a bigger number than
you," clearly proud of what he had just learnt in preschool. I accepted the
challenge, he took a deep breath and said "Five hundred!", so I said "Five
hundred and one". He looked determined and said "Ok then! Five hundred and
fifty hundred and five hundred million hundred five hundred" (Or something
that makes no sense like that). So I said "Five hundred and fifty hundred and
five hundred million hundred five hundred, and one". He looked horrified and
said "Can you just say 'and one' for any number?" I told him yes, and he said
"Cool!".

~~~
poizan42
And ever since, that poor kid has believed that "Five hundred and one and one"
was a valid number.

~~~
Xophmeister
It kind of is; it's just not particularly well-formed, and linguistically
ambiguous.

~~~
GotAnyMegadeth
A bit like 99 in French: quatre-vingt-dix-neuf

four twenties and ten and nine

~~~
eitland
I knew the Danes were counting in a crazy way but I didn't know the French
did this as well.

One more reason I'm happy English is the most common international language I
guess.

~~~
Xophmeister
Number systems by complexity:
[http://www.sf.airnet.ne.jp/ts/language/number.html](http://www.sf.airnet.ne.jp/ts/language/number.html)

------
dmichulke
Given the other comments I wonder whether the Meta-Algorithm for big numbers
is basically:

Compose a state space that expands as quickly as possible in terms of the
argument(s) and define some property of the state space as the result.

Looking at it this way, I'd say:

"Hyper-exponentiation" (Ackermann, tetration and so on) is basically counting
the number of states in a well-defined state space.

BB(X) is equivalent to generating automatically the state space (via a search
over all possible state spaces that can be constructed with a TM with X
states) and returning the size of the maximum.

This means that BB(X) is smaller than the sum of the sizes of all possible
state spaces constructible via that TM.

But BB(X+1) is aware of that trick (summing state space sizes) and uses it
already (or more probably something better), so it doesn't really matter and
you might as well just increase X by one to include all the tricks you had in
mind for BB(X).

Any thoughts on that?
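For concreteness about the "hyper-exponentiation" mentioned above: tetration, the rung just past exponentiation, can be sketched as (a toy illustration, not from the thread):

```python
# Tetration: iterated exponentiation, the hyperoperation one rung
# above ^. Even tiny arguments explode, which is why these functions
# show up in big-number contests.
def tetrate(a, n):
    """a tetrated n times: a^(a^(...^a)) with n copies of a."""
    result = 1
    for _ in range(n):
        result = a ** result
    return result

print(tetrate(2, 1), tetrate(2, 2), tetrate(2, 3), tetrate(2, 4))
# 2 4 16 65536; tetrate(2, 5) already has 19729 digits
```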

~~~
cousin_it
BB_2(X) uses a different trick. It asks for the largest number computable by
any program of length X that's allowed to call a BB oracle with arbitrary
arguments. That's more powerful than any finite expression involving BB and
computable functions, like BB(F(X)), BB(F(BB(X))), etc.

More generally, I expect there to not be any "meta-algorithm" for naming big
numbers. Every big step will require a new insight. A similar problem is
inventing notations for large ordinals:
[http://en.wikipedia.org/wiki/Ordinal_notation](http://en.wikipedia.org/wiki/Ordinal_notation)

~~~
Veedrac
I looked it up and this page showed up:
[http://web.mit.edu/nelhage/Public/big-number.txt](http://web.mit.edu/nelhage/Public/big-number.txt).
Pretty interesting.

------
meric
If BB is the highest paradigm published, and it is assumed BB numbers, given
enough time and resources, can be computed, and you can't use BB2, why can't
you simply go:

    
    
        BB(BB(BB(BB(BB(11111)))))
    

Repeat as many BB's as you have space for on your 1000-character card. You
might even use math notation to define a new function where

    
    
        BB-(n) = [BB(BB(.......]
                  <---- n ----->
    

And then you might have:

    
    
        BB-(BB-(BB-(1111)))(BB-(BB-(BB-(1111)))(11111))
    

Or some such monstrosity.

~~~
jerf
So this turns out to be less interesting mathematically than you might think.
The entire point of Busy Beaver numbers is that they use the full power of
computation available for a given number of states, so as soon as the "normal"
Busy Beaver reaches a number of states sufficient to describe whatever trick
you're interested in pulling, the Busy Beaver sequence itself already uses
that trick!

One of the most important sentences in understanding them is one that's easy
to pass by in the original work: "Indeed, already the top five and six-rule
contenders elude us: we can’t explain how they ‘work’ in human terms."

That is, while we humans are thinking we're all clever by defining repeated
iterations of the BB ruleset itself, all of these things that we think we are
being so clever with are actually very _very_ compactly described by Turing
Machine states. Meanwhile, even by BB(6) the TM is metaphorically "using"
incomprehensible tricks to be larger than we'd even dream. If repeated
iterations of BB'ness can be trivially described in, say, 10 states, and let's
say we reserve 4 states for BB(4) to describe the number of times to iterate,
our hand-constructed "clever" re-iteration of the BB procedure will be _blown
away_ by some other 10-state machine that we can't even imagine would ever
terminate.

So on the one hand, yes, BB(BB(20)) is incomprehensibly larger than merely
BB(20), and yet, on the other hand, in a profound way, it also doesn't matter.
Busy Beaver in a sense shows us numbers that have all the practical properties
of infinity, even if they are not technically infinite. In a sense BB(30) <
BB(31) is obviously true, yet, ultimately, a humanly meaningless statement,
since grasping either number in any sense beyond its mere definition is
impossible. We might as well say that "blibble" is less than "bloo". And not
merely impossible in the sense that it is impossible to even really properly
grasp just how far it is to Alpha Centauri, but impossible in the sense that
we are literally incapable of even constructing mathematical tools that will
let us grab on to and manipulate such numbers... deeply, profoundly,
computationally impossible for us to understand, not merely "mind-blowing",
but _impossible_ for us to understand in the absolute strongest sense of the
term.

Similarly for trying to catch up to BB with yet more iterations of
exponentiation... the procedure we humans use is shockingly, shockingly simple
to describe programmatically, which means that BB automatically already
encompasses practically all such tricks very early in its sequence, which
means you can't defeat it that way.

Busy Beaver is metaphorically (again) much smarter than you, and it's not
worth your time to try to outsmart it to make yet again bigger numbers.

This also, probably correctly, implies that attempting to adjudicate a
contest in which some people write BB(BB(BB(x))) against some other BB-based
notation is impossible, and you'd actually fail out of the contest for
writing an ill-defined number: if we can't compare the two numbers to see
which is larger, even in principle, then for the purposes of this contest they
are ill-defined. Busy Beaver in a sense also sets the limit of what a
well-defined number even can be, and is thus in some sense the natural limit
of this contest, by virtue of simply _transcending_, through sheer
incomprehensible size, all our computationally-naive human tricks for making
big numbers, or even just numbers of any size at all.

It's really a profound sequence.

~~~
tempestn
> we are literally incapable of even constructing mathematical tools that will
> let us grab on to and manipulate such numbers

I think this is really the key to your point. Part of me wonders whether this
should even make the example in the essay, BB(111), invalid. If there is no
way in our universe to ever know exactly what BB(111) is, can it really be
considered a well-defined number? That said, I'm no mathematician, so I'm
probably off-base here.

My first thought on reading the challenge was a series of 7^7^7^7... since the
7s would nest together nicely (unlike 9s) :P

~~~
lmm
It is well defined - it's "just" a matter of computation. There are a small
number of possible 111-state Turing machines - well, ok, there's actually a
very large number, since it's one of those things that grows exponentially,
but still, it's very much a finite number. For each of those possible Turing
machines, either it halts or it doesn't, something a programmer should be able
to grind out; if it halts then it necessarily does so after some finite number
of steps. So in principle we could compute all of those and then take the max
of them. In practice it's a lot of steps - but we understand all the steps, so
it would just be a matter of "cranking it out".
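To make the "cranking it out" concrete at toy scale, here's a sketch in Python that does it for 2 states instead of 111. The step cap stands in for a genuine non-halting proof; that's safe here only because the longest-running 2-state halter is known to stop within 6 steps:

```python
# Toy Busy Beaver search: enumerate every 2-state, 2-symbol Turing
# machine, run each from a blank tape, and take the max number of 1s
# left by a halter. The step cap is a stand-in for a real non-halting
# proof, valid only at this tiny scale.
from itertools import product

STEP_CAP = 100
HALT = 2  # states are 0 and 1; 2 marks the halting state

def run(machine):
    """Run one machine from a blank tape; return ones left, or None if capped."""
    tape, pos, state = {}, 0, 0
    for _ in range(STEP_CAP):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == HALT:
            return sum(tape.values())
        state = nxt
    return None  # did not halt within the cap

# One transition rule = (symbol to write, head move, next state)
rules = list(product((0, 1), (-1, 1), (0, 1, HALT)))
best = 0
for r in product(rules, repeat=4):  # one rule per (state, symbol) pair
    machine = dict(zip([(0, 0), (0, 1), (1, 0), (1, 1)], r))
    ones = run(machine)
    if ones is not None:
        best = max(best, ones)

print(best)  # Sigma(2) = 4
```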

Conversely if you want to insist on a "constructible" number then you kind of
have to insist on unary notation. Maybe a computer can give you a numerical
value for "11^11^11^11", but can you really consider it well-defined if you
can't put that many pebbles in a row? There are a few mathematicians who take
this kind of view - see
[http://en.wikipedia.org/wiki/Ultrafinitism](http://en.wikipedia.org/wiki/Ultrafinitism)
\- but it's very much an extreme position.

~~~
tcai
I have some issue with how you're generating the BB numbers, though it's not
necessarily mathematical. Fundamentally, "cranking it out" is exactly what you
cannot do to generate the BBs: following any one specific set of instructions
for generating the BBs is a futile exercise.

The problem arises when you say that "a programmer should be able to grind
out" whether or not a TM halts or not, which you use to get around the fact
that a TM cannot solve the halting problem. However, I'd question if this is a
trivial exercise: while we certainly are capable of recognizing infinite loops
in the code we write, I'm not certain that humans can identify arbitrary
infinite loops. Obviously, whether or not we can isn't a trivially answerable
question: it comes down to whether or not our brain's neural networks can be
modeled by a sufficiently large TM and, if they cannot, what differences
between our brains and a TM exist and why those would effectively allow us to
solve the halting problem.

So I'd question whether finding the BBs is "'just' a matter of computation",
because I'm not convinced that humans can solve the TM halting problem.

~~~
gd1
Very good point. There's no reason to think we can solve the Halting Problem
any more than a TM can, so the 'solved' BBs mentioned in the article are only
solved in the sense that we puny humans had a look at all possible programs
generated by BB(3) for example, focused on the ones that didn't appear to be
halting, and decided by inspection that they wouldn't halt. That inspection
process is not mathematically well-defined, so I'd argue that BB numbers
should be inadmissible for the contest.

~~~
jerf
The inspection can include proofs that the loopers loop, with the full
mathematical power of the word "proof" behind it, so I'm willing to trust the
known numbers. But the proofs cannot be general (or at least not past a
certain point, which probably comes up pretty quickly), and will get ever more
difficult to perform as we dig ever further down the sequence.

------
panic
There was a contest to produce programs generating large numbers in a version
of C with integral types that hold arbitrarily-large integers:
[http://djm.cc/bignum-rules-posted.txt](http://djm.cc/bignum-rules-posted.txt)

You can see the results here:
[http://djm.cc/bignum-results.txt](http://djm.cc/bignum-results.txt) \--
there's some interesting stuff in there!

~~~
cousin_it
Yeah, that was a great contest as well. Though the numbers involved are
necessarily much smaller than even the BB numbers, because they must be
computable.

Also see this old HN discussion where Eliezer proposed a good way to beat the
winner:
[https://news.ycombinator.com/item?id=1542770](https://news.ycombinator.com/item?id=1542770)

------
crayola
Why not mention factorials? 9!! is way bigger than 9^9^9. And you can write
many exclamation marks in 15 seconds.

~~~
vacri
Wow, 9!! is so large that the exponent almost needs its own exponent.

    
    
        9^9^9 =~ 2e77
        9!! =~ 1.6e1859933

~~~
peterjmag
Correct me if I'm wrong, but aren't you supposed to evaluate stacked exponents
from the top down? That is, 9^(9^9) instead of (9^9)^9. If that's the case,
9^(9^9) is much larger than 9!!. Though I'm not sure how _much_ larger, since
I couldn't find any big integer calculators online that would give me an
actual result for 9^(9^9).
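One way to compare these without a big-integer calculator is to work with log10; a quick Python sketch (reading 9!! as (9!)!, to match the figures upthread):

```python
# Compare the competing big numbers in this subthread by their log10,
# so nothing has to be computed in full.
import math

log10_9_pow_81  = 81 * math.log10(9)       # (9^9)^9, left-to-right reading
log10_9_tower   = 9**9 * math.log10(9)     # 9^(9^9), top-down reading
log10_fact_fact = math.lgamma(math.factorial(9) + 1) / math.log(10)  # (9!)!

print(round(log10_9_pow_81))   # ~77: (9^9)^9 is about 2e77
print(round(log10_fact_fact))  # ~1.86 million: (9!)! is about 1.6e1859933
print(round(log10_9_tower))    # ~370 million: 9^(9^9) dwarfs both
```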

~~~
cradle
9^(9^9) is ... wait for it... 4e369693099 bigger than 9!!

[http://www.wolframalpha.com/input/?i=9%5E9%5E9+-+9%21%21](http://www.wolframalpha.com/input/?i=9%5E9%5E9+-+9%21%21)

~~~
BhavdeepSethi
Shouldn't it be (9!)!?
[http://www.wolframalpha.com/input/?i=9%5E9%5E9+-+%289%21%29%...](http://www.wolframalpha.com/input/?i=9%5E9%5E9+-+%289%21%29%21)

------
JoshTriplett
It seems notable that so many different variations of "naming large numbers"
all end up mapping iterated applications of a function into a new function
that takes a number of iterations as a parameter. Knuth's up-arrow notation
defines iterated exponentiation, iterated iterated exponentiation, and so on.
Ackermann's function makes the number of up-arrows just a numeric parameter to
the function.

But you could go further than that: A(A(10)) is much bigger than A(1000), so
you can get larger numbers faster by iterated applications of the Ackermann
function. Turn the iteration into a function, and let B(n) be n applications
of A to n. Iterated application of B would be even faster, so turn that
iteration into a function: let C(n) be n applications of B to n.

But this process itself is iterative. So, define a new function: let iterA(n)
be the nth level of iterated Ackermann functions applied to n, where iterA(1)
is A(1), iterA(2) is B(2), iterA(3) is C(3), and so on.

Now, iterA can be applied iteratively. So, repeat again. The resulting
function can be applied iteratively, so repeat again.

And this whole process of turning iteration into a function and then iterating
that function is itself an iterative process, so it can be turned into a
function...
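The ladder described above can be sketched in code, with the successor function standing in for the Ackermann function so the smallest cases finish (the names iterate/level/iter_a are mine, for illustration):

```python
# A sketch of "turn iteration into a function", repeated. The
# structure is the point: each level applies the previous one n times.
def a(n):
    return n + 1  # tame stand-in base function; anything fast-growing works

def iterate(f, times, n):
    """Apply f to n the given number of times."""
    for _ in range(times):
        n = f(n)
    return n

def level(k):
    """level(0) is a; level(1) is B (n applications of a to n); etc."""
    if k == 0:
        return a
    return lambda n: iterate(level(k - 1), n, n)

def iter_a(n):
    """The diagonal: iter_a(1) = A(1), iter_a(2) = B(2), iter_a(3) = C(3)."""
    return level(n - 1)(n)

print(iter_a(1), iter_a(2), iter_a(3))  # 2 4 24
# Even with this tame base function, iter_a(4) already reaches numbers
# with more digits than could ever be stored.
```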

~~~
meric
So the competition is not really to name the biggest number but rather to
write the deepest iterator in 1000 characters and apply it to the highest
paradigm function. More like a programmer's challenge.

Let A = {fastest published sequence}. # This is a macro for BB, Ackermann's,
etc.

Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^10(A,
10)

~~~
cousin_it
IMO you can't win big number competitions by such a low-level approach. At
least notice that the concept of "deepest iterator in 1000 characters" is
itself mathematically definable, and define it as D(1000) or something. Then
try to find connections between that and BB numbers... Coming up with the
"highest paradigm function" is the important part of the challenge, iteration
is unnoticeable by comparison.

------
chinpokomon
I have been thinking about this problem for a year or so now. I was introduced
to the concept watching a Numberphile video and several hours later read this
article.

I've been taking a somewhat metaphysical approach to thinking about it.
Instead of thinking about a Turing Machine as just a head on an endless tape,
can we not consider the Universe to represent the upper limits for our
calculations? Just as Turing suggested that we can take the two dimensional
representation of a mathematical equation and represent it in one dimension,
can we represent the entire state of the Universe in a similar way on a
hypothetical Turing Machine? The halting problem is then a question of whether
or not the Universe has a halting state.

To extend from that, any machine that can be conceived must fit within the
bounds of the Universe. Of course BB(Universe) is incalculable, but I think
that it defines an upper limit. It is pointless to consider a Turing Machine
that features an endless tape if an endless tape is an impossibility.

I'm not sure if this adds to the discussion, but I haven't had anyone else I
could discuss this with until now.

~~~
bjackman
I suspect a physicist would say your idea rests on a view of the universe
where everything is deterministic, knowable, and encodable.

You also have to remember that some quantities are going to be continuous,
while TMs are discrete. Even an "infinite" TM can't truly encode real numbers
(I think) because its infinity is of the countable variety, while real
numbers are uncountably infinite.

~~~
chinpokomon
True, I am making an assumption that it is deterministic.

If we presume that the Big Bang model of the Universe is true, then in the
earliest moment of time, can the state of the Universe be measured? If it can
be measured, how does it transition from a measurable state to an immeasurable
one?

I'll have to give some more thought to what you've said, but I wanted to give
you a basis for why I believe it could be deterministic. Clearly I'm making
some assumptions, and based on my postulate, this is a state that we could
never measure ourselves. I don't know if our inability to make these
measurements removes the possibility of determinism.

------
swatow
Mathematician Edward Nelson has a very interesting take on "big" numbers. He
claims that the exponential function is not necessarily total, and a number
like 2^1000000000000 might not actually exist. The reason he singles out
exponentiation is that the reasoning that exponential numbers are "real"
numbers, is circular (impredicative). According to Nelson, speaking about such
numbers might lead to contradictions, just like speaking about "the set of all
sets that don't contain themselves" leads to a contradiction.

He also relates these issues to Christian philosophy, which I find very
interesting. In particular, he claims that the _a priori_ belief in the
objects defined by Peano Arithmetic is equivalent to worshipping numbers, as
the Pythagoreans did.

I think this is the best starting point if you're interested in reading about
his ideas:
[https://web.math.princeton.edu/~nelson/papers/warn.pdf](https://web.math.princeton.edu/~nelson/papers/warn.pdf)

~~~
hellbanner
I'll read the paper, it sounds interesting. But isn't all Math "circular" \--
Goedel's Second Incompleteness Theorem, yeah?

[http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_the...](http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems#Implications_for_consistency_proofs)

~~~
swatow
Goedel's incompleteness theorem uses Peano Arithmetic, which Nelson claims is
(possibly) inconsistent. He develops his own "predicative" arithmetic, to
which Goedel's theorem does not apply.

(large pdf)
[https://web.math.princeton.edu/~nelson/books/pa.pdf](https://web.math.princeton.edu/~nelson/books/pa.pdf)

~~~
hellbanner
I'll read it, thanks. I always (layman) understood Goedel's theorem as meaning
you can be complete or consistent but not both. So you can have a language
that can describe everything but you'll have paradoxes, or you'll have no
paradoxes but not be able to describe everything (all possibilities) in your
system

~~~
swatow
Yes, Goedel's Theorem says you can be complete or consistent but not both, but
only about systems at least as strong as Peano Arithmetic, i.e. systems that
contain the axioms of Peano Arithmetic, possibly along with other axioms.

Usually this is left out, because Peano Arithmetic is treated as an MVP for
mathematics. But Nelson claims Peano Arithmetic may be inconsistent, and
proposes a weaker system.

~~~
hellbanner
Wow. I have a lot of reading to do. Thanks!

------
gglon
Now the contest for programmers: write the biggest number in one line of code
(80 characters; keywords count as 1 character, whitespace as 0) using a
computer language of your choice (standard library with an arbitrary-precision
integer type), such that any reasonable programmer can prove it is actually a
finite number.

~~~
evanb
Mathematica has important whitespace for indicating multiplication, and it's
not clear what counts as a keyword, so here are 80 copy-and-pasteable
characters:

    
    
      u[n_][a_][b_]:=If[n==0,a b,Nest[u[n-1]@a,1,b]];Nest[u[#^#^#][#]@#&,9,u[99][9]@9]
    

u[n][a][b] gives a (Knuth's up arrow)^n b. The after-the-semicolon expression
computes

    
    
      f(f(f(f(... u[99][9][9] fs total ... f(9) ... ))))
    

with the function f(n)=u[n^n^n][n][n]. This clearly results in a finite
number, since it is just iterated iterated iterated iterated ... (finitely
many "iterated"s) ... iterated exponentiation of finite numbers.
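For readers who don't speak Mathematica, the same recursion transcribed to Python (my translation of the one-liner above, not from the thread):

```python
# up(n, a, b) is a (Knuth's up arrow)^n b, with up(0, a, b) = a*b,
# mirroring u[n_][a_][b_]:=If[n==0,a b,Nest[u[n-1]@a,1,b]].
def up(n, a, b):
    if n == 0:
        return a * b
    acc = 1
    for _ in range(b):          # Nest[u[n-1]@a, 1, b]
        acc = up(n - 1, a, acc)
    return acc

print(up(1, 2, 3))  # 2^3 = 8
print(up(2, 2, 4))  # 2^^4 = 65536
print(up(3, 2, 3))  # 2^^^3 = 65536
```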

However, even when I try to compute (after $RecursionLimit=Infinity)

    
    
      Nest[u[#^#^#][#]@#&,2,u[2][2]@2]
    

my kernel crashes. This number is BIG.

There is one obvious way to make this number even bigger: make the base case
yield a^b. However, then it's not Knuth's up arrow notation, so it's harder to
debug by looking at the wikipedia page :). I used all my tricks (like using @)
to get rid of extraneous characters, which gave me space to put #^#^# as the
first argument of u. I still had 1 character remaining, so a 9 became 99. If
you can squeeze a few more characters #^#^# and 99 should be substituted for
u[#][#]@# and 9.

[https://en.wikipedia.org/wiki/Knuth's_up-arrow_notation](https://en.wikipedia.org/wiki/Knuth's_up-arrow_notation)

~~~
evanb
I just realized 99 should be replaced with "9!".

~~~
evanb
Using the infix special form ~ we can cram in another ^#:

    
    
      u[n_][a_,b_]:=If[n==0,a b,Nest[a~u[n-1]~#&,1,b]];Nest[#~u[#^#^#^#]~#&,9,9~u@9~9]
    

I should also note that I'm not confident as to which of

    
    
      Nest[#~u[#^#^#^#]~#&,9,9~u@9~9]
      Nest[#~u@#~#&,9,9~u[9^9^9^9]~9]
    

is larger.

~~~
gglon
Very nice. Mathematica can clearly do the job. But I feel like there is still
a lot of room for improvement. Clearly though, the proof would be more and
more difficult.

Here is my modification:

    
    
      M=Nest;
      u[f_][n_][a_]:=If[n<1,f@a,M[u[f][n-1],a,a]];
      u[#][#@9][#@9]&@(u[#!&][#][#]&)
    

82 chars total.

comments:

    
    
      (*start with definition of Knuth up arrow*)
      u1[n_][a_][b_]:=If[n==0,a b,Nest[u[n-1]@a,1,b]]
      (*let treat 1 as symbol and take 1 == b == a *)
      u2[n_][a_]:=If[n==0,a a,Nest[u[n-1],a,a]]
      (*next define for arbitrary function f  instead of multiplication*)
      u[f_][n_][a_]:=If[n==0,f@a,Nest[u[n-1],a,a]]
      (*numerical example when we take n<3 instead of n==0*)
      u[#! &][#][#] &@3 = u[#! &][3][3] = 10^1746 
      (*Next take the function f and parameters a to be: *)
      f = u[#!&][#][#]&
      a = f@9
      (*compute final number*)
      u[f][a][a]
      (*those 3 steps are shortened to: *)
      u[#][#@9][#@9]&@(u[#!&][#][#]&)

~~~
evanb
Smart. It didn't occur to me to have the base case be an arbitrary function.
Yours is much larger than mine. One comment: M=Nest; is a waste of characters.
I tried that in my solution too, but it wound up costing me an extra character
;). So I think you're down to 75 characters. It might make sense to remove the
factorial, and change the base case of u to f@f@f@f@a.

~~~
gglon
Thanks, I forgot to remove M=Nest, which was beneficial in my previous
attempts where I used 3 Nests.

But remember:

Never forget that it is a waste of energy to do the same thing twice, and that
if you know precisely what is to be done, you need not do it at all. \--- E.
E. ``Doc'' Smith (1930)

    
    
          ...the optimal solution avoids all pattern.
                                            --- Douglas Hofstadter (1983)
    

[http://djm.cc/bignum-results.txt](http://djm.cc/bignum-results.txt)

So I would recommend avoiding things like f@f@f@f@a, where there is clearly a
pattern.

------
speakeron
If you're interested in big numbers, David Metzler's "Ridiculously huge
numbers" sequence on youtube
([https://www.youtube.com/watch?v=QXliQvd1vW0](https://www.youtube.com/watch?v=QXliQvd1vW0))
is well worth a watch.

He gives great explanations of things like the Conway chained arrow notation
and the fast-growing hierarchy and just keeps on making bigger and bigger
numbers.

------
baddox
> Be precise enough for any reasonable modern mathematician to determine
> exactly what number you’ve named, by consulting only your card and, if
> necessary, the published literature.

Does this mean that the modern mathematician must be able to assemble all the
digits of the number's representation (say, in decimal)? Given some
formalization of this requirement, there would be a fairly definite upper
bound on the acceptable integers.

~~~
baddox
It occurs to me that you could formalize the requirement by saying that there
must be a provably terminating algorithm that produces precisely the decimal
representation of the integer. That way, numbers like Graham's number can be
submitted despite there being no physical way to represent the number in
decimal (in the observable Universe).

~~~
gizmo686
Does the algorithm need to terminate? It seems like convergence would be
sufficient. For example, we have no terminating algorithm to produce pi, but
we have plenty of ways to precisly express it.

Of course, given the nature of this problem, you can define your converging
algorithm to terminate when it is within a certain error of what the "pure"
answer would be. Of course, it is possible you would underestimate the number,
and someone could beat you by using the same method with a slightly different
termination condition, so you may want to add 1...

~~~
baddox
Good point, although this problem involves only integers, so I'm not sure if
"convergence" makes sense. Pi, of course, is a computable number, implying
that there is a terminating algorithm that, given input n, outputs the nth
digit of pi's decimal representation.

------
trishume
I really enjoyed this article, it touches on so many different areas of math
and computer science and connects them in interesting ways.

------
viraptor
I don't get why the BB(x) numbers should grow faster than A(x). They're harder
to compute - true. They're more difficult to prove right - true. But I cannot
find anything in the article that proves it actually grows faster than
Ackermann numbers.

Unless you can always program A(x) in less than x steps on a Turing machine?

~~~
j2kun
No, BB is _impossible_ to compute (as a sequence, that is). A(x) is not.

~~~
viraptor
It's impossible to compute them - sure. But the claim was that: "The sequence
of Busy Beaver numbers, BB(1), BB(2), and so on, grows faster than any
computable sequence."

The explanation was that: "Because if a Turing machine could compute a
sequence that grows faster than Busy Beaver, then it could use that sequence
to obtain the D‘s—the beaver dams." But these goals seem different to me.

Another sequence you cannot compute is "foo(x)" \- the number of integers
lower than x for which algorithm "bar(x)" doesn't halt. It's impossible to
compute if "bar(x)" doesn't always halt. But the sequence cannot grow faster
than "x" itself.

Also, it doesn't matter that you cannot compute them if every TM of a given
size can be analysed and proven halting. It doesn't have to be computed (as
in, automatically checked).

The fact you can compute A(x) would only be an issue if you could compute A(x)
with a machine of size smaller than x.

~~~
j2kun
I think you are missing some basic concepts here. There is a single Turing
machine (of constant size) which, when given any integer n, outputs A(n). This
is the statement that A(n) is computable. One can prove that there is no such
Turing machine for the function BB(n) or any function f(n) that bounds BB(n).
Such a function would be used to solve the Halting problem as follows: given a
TM M and a string x, compute k = f(|M|) and run the TM on x for k steps. If
the simulation halts within k steps, output "halt" and otherwise output "loops
forever."
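That reduction can be sketched as follows (hypothetical by construction: the bounding function f cannot actually be computable, which is the whole point of the argument):

```python
# If we were handed a computable f with f(n) >= BB(n), halting would
# be decidable: any halting machine of size n must halt within
# BB(n) <= f(n) steps, so "no halt within f(size) steps" would prove
# "never halts".
def would_decide_halting(step, state, f, size):
    """step: one-step transition function, returning None when halted.
    f: the hypothetical BB-bounding function."""
    for _ in range(f(size)):
        state = step(state)
        if state is None:
            return True   # halted within the bound
    return False          # with a real BB bound, this would prove looping

# Toy usage; the generous constant bound is fine only for toy machines.
countdown = lambda s: None if s == 0 else s - 1  # halts after s+1 steps
loop_forever = lambda s: s
print(would_decide_halting(countdown, 5, lambda n: 1000, size=1))     # True
print(would_decide_halting(loop_forever, 5, lambda n: 1000, size=1))  # False
```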

Your example of foo/bar(x) is not quite correct. If you fix a program bar,
then it is not necessarily impossible to characterize (and hence compute) the
set of strings for which bar(x) halts. This is not the statement of the
Halting problem, and as far as I know there is no theorem that says such a
function "bar" exists.

~~~
jkramar
Sure it does, just let "bar" be a UTM.

~~~
j2kun
Good point. My stupid mistake.

------
qrendel
1) My submission: modifying Ackermann's function to describe a sequence of
Busy Beaver numbers of sequential hypercomputation systems, e.g. ABB(n+1) =
BB_n(n). Of course it's the same problem - you could just as easily do
AABB(n+1) = ABB_n(n) - but it still grows faster than a Busy Beaver sequence
for any single level of the hypercomputation hierarchy.

2) Question: I understand the argument for noncomputability of Busy Beavers
given, but couldn't you just argue that BB(n) is finite, therefore BB(n) is a
computable sum of 1+1+...+1, therefore BB(n) is computable given a finite
amount of time, so any given number in the sequence is itself computable,
therefore by mathematical induction all elements in the sequence are
computable? Clearly not, but I don't understand why this doesn't work.

~~~
mstoehr
To say that the sequence is computable means that you must present a Turing
machine, call it T, that will produce BB(n) in finitely many steps after
taking input n. Our specific Turing machine T will have a fixed number of
rules, call it N rules; then by the definition of the Busy Beaver, T(N+1) <=
BB(N) = T(N), but BB(N+1) > BB(N), so T(N+1) does not compute BB(N+1). The flaw
in your inductive proof is that you have shown there is a Turing machine that
can compute BB(N) for each N, but you haven't shown that the same Turing
machine can compute all numbers in the sequence.

~~~
qrendel
And just to be clear here, this isn't just because it can't recognize which
numbers are the Busy Beavers as it arrives at them while summing ones
indefinitely? If T(N) tries to compute BB(N) by summing ones then, since BB(N)
is finite, then it just fails to halt? But then there would be finite numbers
that can't be described as the sum of one from 0 to BB(N)? It's just hard to
reconcile the concept of indefinite-but-finite running time, all numbers being
able to be expressed as a sum of ones, and the BB sequence still not being
computable.

Edit: Ah wait, I get it now. Misunderstanding was in confusing "Turing
machine" with "universal Turing machine." Any given T(N) will have a finite
number of steps it can complete before halting, and since a universal Turing
machine or equivalent (e.g. rule 110) can emulate any arbitrary Turing
machine, I was modeling it as just letting an arbitrary UTM run forever. But
the UTM is just emulating a specific T(N) each time, and you'd have to keep
going up emulating different machines for each N to get to each next BB
without running out of steps. So you can write programs to calculate arbitrary
BBs indefinitely, but no single program to calculate them all. Hence the
infinite sequence is noncomputable despite any finite length of it being
computable. Thanks for the point in the right direction.

------
sixQuarks
Here is the most entertaining article I've ever read about large numbers:
[http://waitbutwhy.com/2014/11/1000000-grahams-number.html](http://waitbutwhy.com/2014/11/1000000-grahams-number.html)

------
tux3
Loved this article, I'm really interested by the BB numbers in particular and
how they relate to the halting problem.

And I thought I could win this silly game with my Knuth's Up Arrow notation
and vague knowledge of Ackermann's sequence!

~~~
iopq
BB(6) could be bigger than Graham's number, but we don't even know; that's how
big it is.

The busy beaver function grows so fast that past some point it's always going
to be larger than any number we can describe in normal math notation.

So if you define Graham's number via a function g(N), where Graham's number is
g(64), busy beaver still grows faster, and for some N it will be bigger.
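
For concreteness, here is a minimal Python sketch of Knuth's up-arrow notation, the ladder Graham's number is built from (g(1) = 3↑↑↑↑3, g(k) = 3 ↑^g(k-1) 3, and Graham's number is g(64)). Only tiny arguments are actually computable, which is rather the point:

```python
def up(a, n, b):
    """Knuth up-arrow: a ↑^n b. n=1 is exponentiation, n=2 tetration, ..."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # a ↑^n b  =  a ↑^(n-1) (a ↑^n (b-1))
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(2, 2, 4))  # 2↑↑4 = 2^(2^(2^2)) = 65536
# Even 3↑↑3 = 3^27 = 7625597484987 is fine, but 3↑↑↑3 already
# has far more digits than could fit in any computer's memory.
```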

------
infogulch
"If you want to use higher-level Busy Beavers in the biggest number contest,
here’s what I suggest. First, publish a paper formalizing the concept in some
obscure, low-prestige journal."

I wonder if anyone has followed through with this?

------
stillsut
Is there some N such that BB(N) can produce any "arbitrarily large" number?
Not infinity because that would mean it doesn't halt, but that there is no
number which can not be exceeded by saying in effect "plus 1".

Or, as N becomes large enough to emulate the English language, use a Berry's-
paradox-style algorithm to show there is no limit to what size you want.

Another approach would be to use a probability-based event for halting, since
it is always possible that a series of coin flips _could_ keep coming up heads
1,000,000 times in a row, or a zillion times; see the St. Petersburg paradox.
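
The coin-flip idea can be sketched as a toy in Python: a probabilistic "machine" that halts as soon as tails appears halts with probability 1, yet for every bound k there is a positive probability of running longer than k. (This is purely illustrative; BB is defined over deterministic machines, so it is not a contest entry.)

```python
import random

def run_until_tails(rng):
    """Flip a fair coin until tails; halts with probability 1, but for
    every bound k the chance of running longer than k is 2**-k > 0."""
    heads = 0
    while rng.random() < 0.5:  # treat < 0.5 as heads
        heads += 1
    return heads

print(run_until_tails(random.Random(42)))
```

The St. Petersburg connection: a payout of 2^heads halts almost surely, yet has infinite expected value.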

~~~
j2kun
>Is there some N such that BB(N) can produce any "arbitrarily large" number?
Not infinity because that would mean it doesn't halt, but that there is no
number which can not be exceeded by saying in effect "plus 1".

This would imply BB(n) is not well defined. To the contrary, BB(n) is finite
for every n.

------
im3w1l
Ok, I think I may have the ultimate answer to this, a sequence that grows
optimally fast. It goes like this:

Imagine 2^(7n) copies of the judge of this competition. For every judge create
a different bit sequence of length 7n. Decode it from ascii and ask if it is a
valid entry of the competition. Take the largest valid entry of those
imaginary competitions. That is my entry. In order for this procedure to be
consistent, this entry must be longer than n characters. Hence it will end
with some padding you may disregard: asdfasdfasdfasdfasdfasdfasdfasdfasdfasdf.

~~~
cousin_it
If you're allowed to submit entries that are mutually recursive with the
decision procedure of the judges, I'm afraid the competition becomes less well
defined. For example, it's easy to create a paradoxical entry by adding 1 to
your entry. Also see Berry's paradox:
[http://en.wikipedia.org/wiki/Berry_paradox](http://en.wikipedia.org/wiki/Berry_paradox)

~~~
im3w1l
Yeah, I agree that it becomes less well defined. The problem with Berry's
paradox is that it refers to itself. I circumvented the paradox by only
referring to shorter descriptions of the number than the description I gave.
Whether that strategy works really depends on what the judge thinks, though.
Assuming he accepts referring to his own potential judgements of _shorter
sequences_, but not longer or equally long ones, the solution would work and
the rules would be paradox-free.

~~~
cousin_it
Yeah, that might avoid paradox at the cost of requiring a specific kind of
judge. But then I think you need to fully specify the judge's reasoning, and
that's a very difficult problem, it might be even AI-complete. I don't know of
any computer program that would be able to judge the descriptions of current
winning entries.

Maybe this quote from the googology wiki will set you on a more productive
track:

 _Googologists generally avoid many of the common responses such as
"infinity," "anything you can come up with plus 1," "the largest number that
can be named in ten words," "the largest number imaginable," "a zillion," "a
hundred billion trillion million googolplex" or other indefinite, infinite,
ill-defined, or inelegant responses. Rather googologists are interested in
defining definite numbers using efficient and far reaching structural schemes,
and don't attempt to forestall the unending quest for larger numbers, but
rather encourage it. So perhaps a more accurate description of the challenge
is: "What is the largest number you can come up with using the simplest
tools?"_

[http://googology.wikia.com/wiki/Googology#History](http://googology.wikia.com/wiki/Googology#History)

------
codeflo
Here's a philosophical question that's been bothering me for a while. For
large enough n, we can construct an n-state Turing machine that attempts to
prove a contradiction in our current most powerful axiomatic system (let's say
ZFC). We must assume that this machine loops forever, but Gödel's
incompleteness theorem implies that we can't prove that.

What does this construction imply about BB(n)? In what sense is the BB
sequence even well-defined if we can prove that it can't be determined?

(Edited for clarity.)

~~~
jbapple
> In what sense is the BB sequence even well-defined if we can prove that it
> can't be determined?

I think it may help you to understand how the BB sequence can be well-defined
if you change your definition of "we can prove".

A proof is a sequence of logical deductions in an axiomatic system, so what
"we can prove" depends a lot on which axiom system you're using. For some
types of questions, it helps to make that explicit.

For instance, in weak axiom systems, we can't prove that every hydra game
terminates. [0] In very strong axiom systems, ones so strong we call them
inconsistent, we can prove everything! People argue over intermediate systems
sometimes by pointing out things they can prove that are counter-intuitive.
[1][2]

For any terminating Turing machine, there is a (possibly very long) proof in a
very weak axiom system of that fact: the full trace of the execution. (For
non-terminating machines, this is not true.) So, if ZFC, a rather strong axiom
system, cannot prove a machine halts, it does not halt.
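
The "full trace is a proof" point can be sketched in Python. The machine encoding and helper name here are my own, a minimal toy rather than any standard formalism: if the machine halts, the recorded trace is a step-by-step proof of that fact, each line following mechanically from the previous one.

```python
def trace_halting(program, tape, state, max_steps):
    """Run a tiny Turing machine, recording every configuration.

    `program` maps (state, symbol) -> (new_symbol, move, new_state);
    the machine halts when no rule applies. A returned trace is itself
    a checkable proof of halting; None means no conclusion was reached
    within max_steps.
    """
    trace, pos = [], 0
    for _ in range(max_steps):
        symbol = tape.get(pos, 0)
        trace.append((state, pos, symbol))
        if (state, symbol) not in program:  # no rule: machine halts
            return trace
        new_symbol, move, state = program[(state, symbol)]
        tape[pos] = new_symbol
        pos += move
    return None

# Toy machine: in state A on a 0, write 1, move right, go to state B
# (which has no rules, so it halts on the next step).
prog = {("A", 0): (1, 1, "B")}
print(trace_halting(prog, {}, "A", 10))  # [('A', 0, 0), ('B', 1, 0)]
```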

For another example of this kind of thing, see Terry Tao's discussion of the
consequences of the independence of Goldbach's conjecture from ZFC on
MathOverflow. [3]

[0]
[https://en.wikipedia.org/wiki/Goodstein%27s_theorem](https://en.wikipedia.org/wiki/Goodstein%27s_theorem)
[1]
[https://en.wikipedia.org/wiki/Freiling%27s_axiom_of_symmetry](https://en.wikipedia.org/wiki/Freiling%27s_axiom_of_symmetry)
[2]
[https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox](https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox)
[3] [http://mathoverflow.net/a/27764](http://mathoverflow.net/a/27764)

~~~
fbastiat
But if the machine that stops when it has proven ZFC is inconsistent does not
halt, then surely it means there is no proof of the inconsistency of ZFC?
Hence ZFC is consistent? Which is contradicted by Godel.

I would think instead that ZFC can't prove _or_ disprove that this machine
halts, which just means it's undecidable whether it halts or not.

This also implies that for sufficiently large n, we won't be able to prove
that BB(n) = N, or even BB(n) < N for any N using the ZFC axioms. Of course,
although they can't compute or bound its value, they can still easily prove
that BB(n) is finite.

~~~
jbapple
> But if the machine that stops when it has proven ZFC is inconsistent does
> not halt, then surely it means there is no proof of the inconsistency of
> ZFC?

I'm not sure what this question means. Are you referencing an earlier
discussion of a machine that demonstrated a proof of "0=1" using the axioms of
ZFC? If so, what is your concern?

When thinking about undecidability/independence from a computational
perspective, it's helpful to be cautious about the phrase "does not halt". How
would you know a machine "does not halt"? You might run it for a long time and
observe that it has not halted _yet_, but that's not the same.

Instead, you can talk about "a proof in $AXIOM_SYSTEM that this machine does
not halt".

So, using that terminology, consider a machine that enumerates all valid
proofs of ZFC and checks if their conclusion is "0 = 1". Certainly, if ZFC is
inconsistent, this machine will find a proof and halt. However, if there is no
such proof, then ZFC is consistent, the machine will not halt, but there will
be no proof in ZFC that the machine will not halt.

This ability to enumerate the proofs is related to semidecidability/recursive-
enumerability.
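
The enumeration step can be sketched in Python: generate every finite string over an alphabet, shortest first, so any particular string (hence any particular encoded proof) is reached at some finite time t. The `is_proof_of_contradiction` checker is a hypothetical placeholder; writing a real ZFC proof checker is the mechanical-but-tedious missing piece.

```python
from itertools import count, islice, product

def all_strings(alphabet):
    """Yield every finite string over `alphabet`, shortest first."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

def search_for_contradiction(is_proof_of_contradiction, alphabet):
    """Halts iff some string encodes a proof of '0 = 1'.

    With a real ZFC proof checker plugged in, this halts iff ZFC is
    inconsistent -- and if it never halts, ZFC itself cannot prove so.
    """
    for candidate in all_strings(alphabet):
        if is_proof_of_contradiction(candidate):
            return candidate

print(list(islice(all_strings("ab"), 5)))  # ['a', 'b', 'aa', 'ab', 'ba']
```

This shortest-first dovetailing is exactly what makes the set of halting machines recursively enumerable even though it is not decidable.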

When you say "for sufficiently large n, we won't be able to prove that BB(n) =
N", I agree. There is some n that is so large that there is a Turing machine
of size n that does loop, but the fact that it loops will not be provable in
ZFC.

~~~
fbastiat
Well you said "if ZFC, a rather strong axiom system, cannot prove a machine
halts, it does not halt.". This particular statement is what I was replying
to. I agree that a non-halting Turing machine is not a very well defined
concept in the general sense (hence my reply), but I would think the onus is
on the one who used it first to define what they meant by it :-)

~~~
jbapple
> I would think the onus is on the one who used it first to define what they
> meant by it :-)

That's a very fair criticism.

My writing is muddy, for sure. Let me try again. You ask:

> But if the machine that stops when it has proven ZFC is inconsistent does
> not halt, then surely it means there is no proof of the inconsistency of
> ZFC? Hence ZFC is consistent? Which is contradicted by Godel.

I do not think this is contradicted by Goedel. Goedel's theorem implies that
ZFC can't prove its own consistency unless it is inconsistent. So, how can
this machine that searches for an inconsistency in ZFC (by enumerating all
possible valid ZFC proofs so that for any proof p, there is some time t such
that the machine checks p by time t) run forever, thereby proving ZFC
consistent?

It is by the unprovability in ZFC (unless ZFC is inconsistent) of the fact
that this machine will run forever.

The fact that this machine will run forever implies the consistency of ZFC,
and the consistency of ZFC implies the machine will run forever.

~~~
fbastiat
After looking more closely at your linked post by Terence Tao, are you by
chance basing your statements such as "it does not halt" in what he calls the
informal platonic reasoning system (which presumably assumes ZFC as well as
its consistency)?

If so, I think I understand what you mean and I agree with you.

My thoughts were a bit confused on these topics so my posts probably contain
some reasoning errors, but in the end I think what matters is we agree on the
answer to OP's question, i.e.

For any given theory T at least as strong as ZFC (and maybe even some weaker
ones):

\- BB as a function is well defined in T

\- However, there is some number n_T after which T can't prove any upper bound
for BB(n), for any n > n_T

I think this is a very interesting result, because the fact that any axiomatic
system is powerless to describe this function's growth after only a small
number of steps expresses pretty well how mind-numbingly fast it grows.

------
lifthrasiir
The xkcd forum has an interesting (and longish) thread playing this exact
game. [1] Starting with "nine thousand", it quickly becomes a mathematical
nightmare or something.

And also, the Big Number Duel. [2]

[1]
[http://forums.xkcd.com/viewtopic.php?t=7469](http://forums.xkcd.com/viewtopic.php?t=7469)
[2]
[http://tech.mit.edu/V126/N64/64largenumber.html](http://tech.mit.edu/V126/N64/64largenumber.html)

~~~
cousin_it
Also see this mathoverflow post by Scott Aaronson as a follow-up to the
article: [http://mathoverflow.net/questions/34710/succinctly-naming-
bi...](http://mathoverflow.net/questions/34710/succinctly-naming-big-numbers-
zfc-versus-busy-beaver)

------
ttctciyf
As an adjunct to this, if large numbers interest you there's a really
excellent and long-standing set of pages about them at
[http://www.mrob.com/pub/math/largenum.html](http://www.mrob.com/pub/math/largenum.html)
which has some fascinating incidental detail and covers infinities as well as
large finite numbers.

------
mcherm
One of my favorite articles... well worth the re-read even if you have seen it
before (as I have). You may notice something you had forgotten; I had
forgotten the existence of the "busy-beaver-like function" for machines more
powerful than Turing machines.

------
Randgalt
24 is the highest number:
[https://www.youtube.com/watch?v=RkP_OGDCLY0](https://www.youtube.com/watch?v=RkP_OGDCLY0)

------
jbb555
I would argue that 9^9, mentioned in the article for example, _is not a
number_. It is a method for calculating a number. So most of the article is
invalid.

~~~
ubercow13
So is 387420489. The method goes, first you take 9, then you multiply 8 by 10
and add that to 9, then you multiply 4 by 100 and add that to what you have...
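
Spelling out that point in Python: a decimal numeral is itself a little program, evaluated digit by digit (Horner's rule). The function name is mine, for illustration:

```python
def decode_decimal(numeral):
    """Evaluate a decimal numeral exactly as positional notation defines it."""
    total = 0
    for ch in numeral:
        total = total * 10 + int(ch)  # shift left one place, add the digit
    return total

print(decode_decimal("387420489") == 9 ** 9)  # True
```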

------
im3w1l
Take a number-theoretic conjecture. Look for counterexamples. Return the first
counterexample. You can't prove I didn't win!

------
chacham15
> Take 9^9^9, for example. This behemoth, equal to 9^387,420,489

...if the author is here, he should correct this.

~~~
primitivesuave
Exponentiation is right associative.
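
Right: exponent towers are evaluated top-down, so 9^9^9 parses as 9^(9^9) = 9^387,420,489, exactly as the article says. A quick check in Python, whose `**` operator follows the same convention:

```python
# ** is right-associative in Python, matching mathematical convention:
assert 9 ** 9 == 387_420_489
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512   # not (2 ** 3) ** 2 == 64
```

So `9 ** 9 ** 9` denotes 9^387,420,489, whose decimal expansion runs to over 369 million digits.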

------
chanandler_bong
Name it? Sure - I will call it "Bob". What Bob's value is, I have no idea.

------
gesman
1/0-1

I won! :-P

(because infinity is not allowed, so I had to subtract one)

------
yyhhsj0521
9!!!!!!....!! factorial

~~~
jhonnycano
9!!!!!!....!! + 1

