
Graham's Number Is Too Big to Tell How Big It Is - sjcsjc
https://blogs.scientificamerican.com/roots-of-unity/grahame28099s-number-is-too-big-for-me-to-tell-you-how-big-it-is/
======
Scarblac
Graham's Number used to be the biggest number I could name, but now I
understand that TREE(3) (
[https://en.wikipedia.org/wiki/Kruskal%27s_tree_theorem](https://en.wikipedia.org/wiki/Kruskal%27s_tree_theorem)
) is bigger still by a truly incredible amount (and TREE(1) = 1, and TREE(2) =
3).

Unfortunately I don't understand that Wiki page.

The page links to
[https://en.wikipedia.org/wiki/Friedman%27s_SSCG_function](https://en.wikipedia.org/wiki/Friedman%27s_SSCG_function)
, which is supposedly even bigger but I can't follow the text of the first
page so won't attempt to read that.

~~~
speakeron
TREE(3) is indistinguishable from zero compared to Loader's number[1], which
basically takes every bit pattern up to some n and expresses this as a program
in the Calculus of Constructions. This system is slightly weaker than Turing
complete, but its programs always terminate (this makes the number computable,
unlike, say, the Busy Beaver numbers, which do a similar thing with
Turing-complete programs).

It also has the geek cred of being represented by an obfuscated C program (the
unobfuscated version is also available[2]).

[1]
[http://googology.wikia.com/wiki/Loader%27s_number](http://googology.wikia.com/wiki/Loader%27s_number)

[2] [https://github.com/rcls/busy](https://github.com/rcls/busy)

~~~
Retric
I can trivially define a function that's vastly larger than that. What's
interesting is when these numbers are useful for something.

k(1) = TREE(1), k(2) = TREE(TREE(2)), k(3) = TREE(TREE(...TREE(3)...)), where
TREE is nested TREE(3) times.

Sequence: 1, 2, an incomprehensibly large number.

Now define m(x) in terms of k and m(x-1).

~~~
speakeron
> k(1) = TREE(1), k(2) = TREE(TREE(2)), k(3) = TREE(TREE(...TREE(3)...)),
> where TREE is nested TREE(3) times.

What you've made here is what's known in googology circles as a 'salad
number'. It doesn't make a conceptual leap above what the TREE function can do
anyway and is kind of similar to saying TREE(3) + 1.

SCG(13) is much bigger than your K(3) and Loader's number is much bigger than
SCG(13). There are (computable) numbers bigger than Loader's number which are
mentioned in the googology link in my previous post (Greedy clique sequences
and finite promise games).

~~~
Retric
Yea, the point was to pick something trivial. K(3) is larger than TREE(f(n))
for any function f obvious to a non-mathematician and any printable number n.

But it's the m(x) that's actually large in that example.

~~~
speakeron
> But it's the m(x) that's actually large in that example.

Sure, it's large (mindblowingly so), although I'm not quite clear exactly how
you define m(x).

It's hard to get a grasp on just how much vastly larger each of these
'rockstar' numbers is than the previous one when you ascend this conceptual
hierarchy. (To a certain extent, that's the point of the article.)

You could make a K(K(K(...(n)...) function where you painted a K on each
particle in the visible universe (about 10^80) and it still wouldn't come
close to Loader's number. The point is that you can't simply recurse TREE and
hope to get a number that's conceptually any bigger than TREE itself.

~~~
Retric
Yea, at some point you really can't tell if function 1 grows faster than
function 2. Anyway, to pick a large m(x):

m(0) = 1; m(x) is the recursion of all finite functions, defined using
k(m(x-1)) symbols, that produce larger numbers than their input, as defined at
stage m(x-1).

So at step 1 you have F1(x) = x + x, F2(x) = x ^ x, ... including any function
that has been included in a mathematical paper to this point that takes one or
more finite numbers as input.

And because these are all finite functions, call them recursively in whatever
order produces the largest output, starting with the initial input (x). Now,
the size of this depends on what initial notations are allowed, but m(1) is a
finite number.

Next level could then include F#(x) where F1(F2(x)) etc.

PS: The above is roughly equivalent to saying infinity +1 and is not
interesting IMO. I can also drop the k from the above, but again the point is
to use the dumbest approach that works.

~~~
speakeron
Sounds a bit like Yudkowsky's number[1] (your main step is the first one,
everything else is salad), which is a computable version of Rayo's number[2].

[1]
[http://googology.wikia.com/wiki/Yudkowsky%27s_Number](http://googology.wikia.com/wiki/Yudkowsky%27s_Number)

[2]
[https://en.wikipedia.org/wiki/Rayo%27s_number](https://en.wikipedia.org/wiki/Rayo%27s_number)

~~~
Retric
Sure, but you need a large seed number at some point for functions like this.
E.g. Rayo's used 10^100.

So they're all going to take a large value and use it to boost some other
function m(x). I just happened to pick a large f(x).

------
vinceguidry
Obligatory:
[https://www.scottaaronson.com/writings/bignumbers.html](https://www.scottaaronson.com/writings/bignumbers.html)

(Yes, I know this gets posted every time one of these topics comes up, but
there's always someone new who hasn't read it yet.)

~~~
AndrewOMartin
I assumed your link was going to be this one about a "graspable" description
of the number 52! (factorial).

[https://czep.net/weblog/52cards.html](https://czep.net/weblog/52cards.html)

Start a timer that will count down the number of seconds from 52! to 0. Then
walk around the world along the equator, at the very leisurely pace of one
step every billion years. The equatorial circumference of the Earth is
40,075,017 meters. After you complete your round the world trip, remove one
drop of water from the Pacific Ocean. Now do the same thing again: walk around
the world at one billion years per step, removing one drop of water from the
Pacific Ocean each time you circle the globe. The Pacific Ocean contains 707.6
million cubic kilometers of water. Continue until the ocean is empty. When it
is, take one sheet of paper and place it flat on the ground. Now, fill the
ocean back up and start the entire process all over again, adding a sheet of
paper to the stack each time you’ve emptied the ocean.

Do this until the stack of paper reaches from the Earth to the Sun. Take a
glance at the timer, you will see that the three left-most digits haven’t even
changed.
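The starting value in that story is easy to check directly; a quick sketch:

```python
import math

# 52! -- the number of orderings of a standard deck of cards.
n = math.factorial(52)
print(len(str(n)))  # 68 digits, i.e. roughly 8 * 10^67
```

So the timer starts at a 68-digit number of seconds, which is why the three
left-most digits barely move over the whole journey.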

~~~
Nursie
Just to snark a little - that model assumes a constant earth. After a billion
years (the first step) there will be no water left in the Pacific anyway. If
there's even a Pacific Ocean by then, due to plate tectonics...

Sorry.

~~~
cokernel
The page being quoted points that out:

> Of course, in reality none of this could ever happen. Sorry to break it to
> you. The truth is, the Pacific Ocean will boil off as the Sun becomes a red
> giant before you could even take your fifth step in your first trek around
> the world. Somewhat more of an obstacle, however, is the fact that all the
> stars in the universe will eventually burn out leaving space a dark, ever-
> expanding void inhabited by a few scattered elementary particles drifting a
> tiny fraction of a degree above absolute zero. The exact details are still a
> bit fuzzy, but according to some reckonings of The Reckoning, all this could
> happen before you would've had a chance to reduce the vast Pacific by the
> amount of a few backyard swimming pools.

~~~
Nursie
Well in which case I apologise :)

------
SAI_Peregrinus
David Metzler's series "Ridiculously Huge Numbers"[1] is a good introduction
to this. It goes rather well beyond puny little integers like Graham's number.

Loader's number has already been mentioned here, but I don't see any mention
of the Busy Beaver function[2]. BB(n) is the maximum number of 1s that can be
written by a 2-color, n-state halting Turing machine starting from a blank
tape and counted after the machine halts. This is of course uncomputable, and
grows very quickly. Rayo's function (Rayo(n) = the smallest natural number
greater than all natural numbers that can be uniquely identified by a first
order set theory expression of at most n symbols) grows even faster, but that
requires a non-recursively enumerable set theory or the use of non-provable
functions.

[1]
[https://www.youtube.com/watch?v=QXliQvd1vW0&list=PL3A50BB9C3...](https://www.youtube.com/watch?v=QXliQvd1vW0&list=PL3A50BB9C34AB36B3)
[2]
[http://googology.wikia.com/wiki/Busy_Beaver](http://googology.wikia.com/wiki/Busy_Beaver)
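For tiny n the values can actually be found by brute force, since every
machine either halts quickly or loops. A minimal sketch, assuming the usual
convention that the halting transition still writes a symbol and moves:

```python
from itertools import product

def sigma(n_states, step_limit=100):
    """Brute-force Busy Beaver Sigma(n): enumerate every n-state,
    2-symbol Turing machine, simulate each for at most step_limit
    steps, and keep the most 1s left by any machine that halts.
    (Machines still running at the cutoff are treated as non-halting,
    which is safe for n <= 2 because S(2) = 6 steps.)"""
    HALT = -1
    states = list(range(n_states))
    # A transition is (symbol to write, head move, next state).
    entries = list(product((0, 1), (-1, 1), states + [HALT]))
    keys = [(s, b) for s in states for b in (0, 1)]
    best = 0
    for table in product(entries, repeat=len(keys)):
        tm = dict(zip(keys, table))
        tape, state, pos = {}, 0, 0
        for _ in range(step_limit):
            write, move, nxt = tm[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += move
            state = nxt
            if state == HALT:
                best = max(best, sum(tape.values()))
                break
    return best

print(sigma(2))  # 4: the known value of Sigma(2)
```

Beyond n = 4 or so this approach is hopeless, which is the whole point: the
function outruns any computation.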

~~~
speakeron
> David Metzler's series "Ridiculously Huge Numbers"[1] is a good introduction
> to this. It goes rather well beyond puny little integers like Graham's
> number.

Yes, David Metzler's series is great and, I would say, essential viewing if
your interest in large numbers is piqued.

While he doesn't go through some of the big numbers talked about here (TREE(3),
SCG(13), etc), he gives an excellent explanation of the fast-growing
hierarchy. This is really the closest thing we have to a 'standardized' way of
comparing the sizes of very large numbers.

I would also recommend Giroux Studios' 'Extremely Large Numbers'[1], which
takes a similar style to Metzler's videos and goes further with the fast-
growing hierarchy. (Does something called 'infinite collapsing functions'
sound interesting...?)

[1]
[https://www.youtube.com/watch?v=vq2BxAJZ4Tc&list=PLUZ0A4xAf7...](https://www.youtube.com/watch?v=vq2BxAJZ4Tc&list=PLUZ0A4xAf7nkaYHtnqVDbHnrXzVAOxYYC)

~~~
SAI_Peregrinus
Metzler's series got expanded a while back, he does have some sections on TREE
(though not on SCG, SSCG, etc). IIRC there's a brief bit about ordinal
collapsing functions as well, but not very detailed.

~~~
asah
Let me propose YOUTUBE(n) which represents the number of YouTube videos about
the mathematics of large numbers.

------
mjd
A while back someone on math stackexchange asked “Graham's Number: why so
big?” and I wrote what I think is a pretty good explanation.

If you're interested, it's at
[https://math.stackexchange.com/q/163423/25554](https://math.stackexchange.com/q/163423/25554)

~~~
iamgopal
Superb.

------
borutz
Somewhat more visually satisfying explanation:

[https://waitbutwhy.com/2014/11/1000000-grahams-number.html](https://waitbutwhy.com/2014/11/1000000-grahams-number.html)

~~~
carapace
Man I really wonder what your brain is doing when you try to comprehend those
numbers.

> thinking about Graham’s number has actually made me feel a little bit calmer
> about death

------
sjcsjc
"But Graham's number is different. I can't tell you how many digits it has. I
also can't tell you how many digits its number of digits has, or how many
digits the number of digits of its number of digits has. They're all too big."

~~~
YetAnotherNick
This is really mild in comparison to the number. You can't even tell how many
times you'd have to repeat 'the number of digits of' before you could tell
the number of digits of the result.

------
RcouF1uZ4gsC
Thinking about Graham's Number reminds me of this quote from CS Lewis's That
Hideous Strength

"Saturn, whose name in the heavens is Lurga, stood in the Blue Room. His
spirit lay upon the house, or even on the whole earth, with a cold pressure
such as might flatten the very orb of Tellus to a wafer. Matched against the
lead-like burden of his antiquity, the other gods themselves perhaps felt
young and ephemeral. It was a mountain of centuries sloping up from the
highest antiquity we can conceive, up and up like a mountain whose summit
never comes into sight, not to eternity where the thought can rest, but into
more and still more time, into freezing wastes and silence of unnameable
numbers. "

------
Bromskloss
This raises the question of what it means to "tell someone how big it is".
Does it mean to write it using some particular notation (such as positional
notation or exponentiation)? Does it mean to instill a satisfactorily good
image in someone's head of how big it is? I mean, mathematically speaking, any
definition of a number is as good as any other, and Graham's number can be
defined perfectly well.

------
Ronsenshi
Don't know about you, but I like this explanation of Graham's number:

[https://youtu.be/1N6cOC2P8fQ](https://youtu.be/1N6cOC2P8fQ)

------
biot
Conway chained arrow notation [0] is really mind blowing. The simple notation
of “3 -> 3 -> 3 -> 3” results in a number that is larger than Graham’s number.

[0]
[https://en.wikipedia.org/wiki/Conway_chained_arrow_notation](https://en.wikipedia.org/wiki/Conway_chained_arrow_notation)
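The defining rules of the notation are short enough to transcribe directly; a
minimal sketch (only tiny chains are actually evaluable, since the values
explode immediately):

```python
def conway(chain):
    """Evaluate a Conway chained-arrow expression, given as a tuple:
    (3, 3, 2) means 3 -> 3 -> 2.  A direct transcription of the
    defining rules; only tiny chains actually finish."""
    chain = list(chain)
    if len(chain) == 1:
        return chain[0]                      # a  =  a
    if len(chain) == 2:
        return chain[0] ** chain[1]          # a -> b  =  a^b
    *x, p, q = chain
    if q == 1:                               # X -> p -> 1  =  X -> p
        return conway(x + [p])
    if p == 1:                               # X -> 1 -> q  =  X
        return conway(x)
    # X -> p -> q  =  X -> (X -> (p-1) -> q) -> (q-1)
    return conway(x + [conway(x + [p - 1, q]), q - 1])

print(conway((3, 3, 2)))  # 3 -> 3 -> 2 = 3^(3^3) = 7625597484987
```

A length-3 chain a -> b -> n is just a with n Knuth up-arrows applied to b, so
3 -> 3 -> 3 is already 3↑↑↑3, and adding a fourth link blows past Graham's
number.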

------
e0m
Numberphile also did a great video on TREE(3) here (a 'useful' number larger
than Graham's):
[https://www.youtube.com/watch?v=3P6DWAwwViU](https://www.youtube.com/watch?v=3P6DWAwwViU)

------
justinpombrio
There was a puzzle in the MIT mystery hunt a couple years back that involved
sorting large numbers. Graham's number was somewhere in the middle.

[http://web.mit.edu/puzzle/www/2016/puzzle/identify_sort_inde...](http://web.mit.edu/puzzle/www/2016/puzzle/identify_sort_index_solve/)

------
fahadkhan
"And in Go, which has a 19-by-19 board and over 10^150 possible positions, even
an amateur human can still rout the world’s top-ranked computer programs."

When was this written? Evidently before 2017.

~~~
tedunangst
> By Evelyn Lamb on April 1, 2014

~~~
mikeash
A bit of a tangent, but it blows my mind how quickly Go went from “computers
are so terrible at it that they can’t even beat a decent amateur” to
“computers utterly crush the world champion.”

------
chaosfox
I really don't like this kind of writing; it spends too much time gawking at
the number and too little explaining it.

I would rather recommend this video of Graham himself explaining Graham's
Number.
[https://www.youtube.com/watch?v=HX8bihEe3nA](https://www.youtube.com/watch?v=HX8bihEe3nA)

------
_greim_
My children loved to invoke googol, like most children it seems. "I want to
eat googol cookies!" But I always sternly warned them it was a big number—most
of their invocations pretty much entail the observable universe immediately
collapsing into a black hole. Then of course they caught wind of googolplex,
which is just nuts, and I keep telling them "that's nuts!" Now they're onto
Graham's number, and I've run out of tools to convey how insane it is. I'm
relegated to saying things like "Remember googolplex? Well that's peanuts
compared to Graham's number." I don't think I'm getting through to them.

------
kibwen
The linked Numberphile video mentions that a black hole the size of a human
head wouldn't have enough entropy to encode Graham's number. So what diameter
of black hole would actually be required?

~~~
speakeron
Well, you could plug Graham's number into the Bekenstein–Hawking formula[1].
The formula gives the entropy in terms of the horizon area, with a factor of
the Planck length in there, but Graham's number is so overwhelming that the
result is pretty much Graham's number in any units you want.

[1] [https://en.wikipedia.org/wiki/Black-hole_thermodynamics](https://en.wikipedia.org/wiki/Black-hole_thermodynamics)
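As a rough order-of-magnitude check (the constants and the radius here are my
own assumptions, not from the thread): the entropy in bits scales as horizon
area over Planck area.

```python
import math

def bh_entropy_bits(r_meters):
    """Rough Bekenstein-Hawking entropy in bits for a black hole of
    radius r: A / (4 * l_p^2 * ln 2), with A = 4*pi*r^2 and l_p the
    Planck length (~1.616e-35 m).  Order-of-magnitude sketch only."""
    l_p = 1.616e-35  # assumed value of the Planck length, in meters
    area = 4 * math.pi * r_meters ** 2
    return area / (4 * l_p ** 2 * math.log(2))

# A head-sized hole (r ~ 0.1 m) holds only about 10^68 bits --
# nowhere near enough to write even the digits of Graham's number.
print(f"{bh_entropy_bits(0.1):.1e}")
```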

------
carapace
Reminds me of Chaitin's constant:

> In the computer science subfield of algorithmic information theory, a
> Chaitin constant (Chaitin omega number)[1] or halting probability is a real
> number that, informally speaking, represents the probability that a randomly
> constructed program will halt.

> Each halting probability is a normal and transcendental real number that is
> not computable, which means that there is no algorithm to compute its
> digits. Indeed, each halting probability is Martin-Löf random, meaning there
> is not even any algorithm which can reliably guess its digits.

~
[https://en.wikipedia.org/wiki/Chaitin%27s_constant](https://en.wikipedia.org/wiki/Chaitin%27s_constant)

------
sixothree
Are there even a Graham's Number's worth of things we could count in the
observable universe?

~~~
Robotbeat
Definitely no. Overwhelmingly and absurdly no.

Think about it this way.

You can count by just adding. |||| is four. When you're counting small things,
that works fine.

Then you have multiplication. Instead of writing |||||||||||||||||||| we just
write 20. We developed a new abstraction to write large numbers easier.

Then you have exponentiation: 10000000000 is just 10^10. This abstraction is
as far as you need to go to talk about the number of atoms in the universe
(which is roughly 10^80) or the speed of computers (an exaflop is 10^18
operations per second, there are 10^7 to 10^8 seconds in a year, and the
universe is 10^10 years old, so if each atom were an exaflop computer, you
could do 10^(80+18+8+10) = 10^116 operations). Easily written in scientific
notation. This is really all the abstraction needed for roughly writing large
numbers in science and technology. But not for math. Certain proofs can
require absurdly larger numbers.

So what if we abstracted exponentiation? Instead of writing 10^10^10 (which is
10^10000000000), we write, say, 10^^3. And we can keep doing that. But what if
we iterate on the number of _abstractions_? So the number of "^"s is
abstracted, i.e. 10 ^{m} 10, and THAT is abstracted n times (i.e. we replace m
with the previous 10 ^{m} 10, n times). Graham's number is n=64.

No doubt I probably made a mistake in here... But:

tl;dr: no, we need to develop totally new abstractions to represent Graham's
number. Numbers that can be directly represented in our universe are easily
written in exponential format.
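The abstraction ladder above is exactly Knuth's up-arrow notation, and the
whole recursion fits in a few lines; a sketch (anything past the toy inputs
below overflows instantly):

```python
def up(a, b, n):
    """Knuth's up-arrow a ^{n} b: n = 1 is plain exponentiation, and
    each extra arrow applies the previous operation b times."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, up(a, b - 1, n), n - 1)

# In Graham's construction, g_1 = up(3, 3, 4) = 3^^^^3, then
# g_{k+1} = up(3, 3, g_k), and Graham's number is g_64.
print(up(3, 3, 1))  # 3^3 = 27
print(up(3, 3, 2))  # 3^(3^3) = 7625597484987
```

The key move is the last line of the recursion: each new arrow iterates the
previous operation, and Graham's construction then iterates the number of
arrows itself.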

------
Ygg2
It's not too big to comprehend. It's a number which, if you could keep all its
digits in your mind, would cause your head to collapse into a black hole
several times larger than the observable universe.

------
lukeasrodgers
It's a bit of a tangent, but if you find it pleasing (stirring might be a
better word) to contemplate large numbers like this, and are curious about why
that may be so, you may be interested in Kant's reflections on the aesthetic
appreciation of the sublime, see
[https://en.wikipedia.org/wiki/Sublime_(philosophy)#Immanuel_...](https://en.wikipedia.org/wiki/Sublime_\(philosophy\)#Immanuel_Kant)
for an intro.

------
psyc
[http://www.polytope.net/hedrondude/scrapers.htm](http://www.polytope.net/hedrondude/scrapers.htm)

------
badosu
Since we're talking about mind-boggling quantities, although this is more
related to cardinals it's one of the most fascinating pieces I've ever read.

[https://johncarlosbaez.wordpress.com/2016/06/29/large-countable-ordinals-part-1/](https://johncarlosbaez.wordpress.com/2016/06/29/large-countable-ordinals-part-1/)

~~~
alanbernstein
I just read all three posts. Very silly, very entertaining. I remember seeing
that "clock diagram" of ordinals years ago, and wondering what happens if you
take it further. Now I know!

------
m3kw9
It's just the elegance in the way it produces the large number. I could easily
do an exponential of an exponential of an exponential, repeated exponentially,
and that would be hard to imagine, but it would look hacky.

------
stcredzero
So here's a question that popped into my mind: Is it possible to define two
computable functions, f() and g(), such that the truth or falsity of f(x) <
g(y) is not computable?

~~~
zardo
No; this could be proven from the closure of computable functions under
composition, provided the comparison function <(,) is computable for the
output types of f and g.

~~~
stcredzero
Interesting, thanks. So it should be possible to define f() and g() such that
neither of those are computable, but the truth of f(x) < g(x) is computable,
but not so much for f(x) < g(y). Could one construct a couple of monotonically
increasing uncomputable functions with that known relationship?

~~~
zardo
>So it should be possible to define f() and g() such that neither of those are
computable, but the truth of f(x) < g(x) is computable, but not so much for
f(x) < g(y).

The difficulty is really just defining an uncomputable function that has a
definite value for some domain, but this can be done. Let's call one h.

f(x) = -abs(h(x))

g(x) = -f(x)

Gives your second inequality.

------
satiespickles
A good example of mathematical representation as compression.

------
utopcell
I guess it's all in the eye of the beholder. For instance, Erdős would say
that, for him, Graham's number is just 1.

------
joeblau
I just watched a video on how this number is derived and now my head hurts.

------
BenjiWiebe
This needs a [video] tag.

------
luord
I love articles like this, they make science so approachable.

------
pmoriarty
Should these be called numbers or rather just processes which (in principle)
arrive at numbers?

~~~
zardo
>Should these be called numbers or rather just processes which arrive at
numbers?

They are one and the same. You could use a Graham's-number place-value system
in which Graham's number is written simply, while a number like 7 might be
quite complex to represent.

~~~
pmoriarty
If a number and the process used to arrive at a number were completely
equivalent, then going through the process to arrive at the number would never
be worthwhile. Yet people are often interested in actually going through these
processes to generate numbers, when it's practical to do so.

~~~
zardo
That's because they want to represent the number in a particular
representation.

They are not "completely" equivalent; they are equivalent up to the
representation, and converting between representations can have very high
computational costs.

But equivalent up to the representation is the usual notion of equality in
mathematics, and if you mean something else you have to define it.

(Obviously in computer science, this difference is quite important and we need
to differentiate between an expression and its evaluation.)

~~~
pmoriarty
There still seems to be a difference in kind between a number represented in
such a way that no further processing is needed, versus represented in a way
in which further processing is needed.

To me, such a fundamental difference seems to deserve a distinction in
nomenclature.

~~~
Retra
That's a distinction in goals though, not in concepts. If we want to know how
many people are on a bus "the number of people you would count if you
bothered" is a perfectly true answer. It's an accurate representation. You
perhaps can't do anything more than talk vaguely about abstract bus
populations with it, but that is a reasonable thing to do.

On the other hand, maybe "17" is the answer you want, because you need to
compare it to some other integer or plug it into a function. Also useful
things to do, but not always. And not even more often than the trivial
alternatives.

Even though we're significantly biased toward wanting the latter form, it
still means the same thing as the former. And the former is a description of a
process, while the latter is a number. Or the former is a number, while the
latter is a particular encoding of a process. Same difference in the end. We
even define numbers by describing the processes we use to generate them.

------
geoffreyhale
FOOT

