
Graham's Number - ColinWright
http://myscienceacademy.org/2012/09/30/grahams-number-numberphile/
======
btilly
It is worth noting that while Graham's number might have held the record for
the largest number used in a serious proof, the up-arrow notation used to
express it does NOT hold the record for the fastest-growing computable
function used in a serious proof.

The record for that probably belongs to Goodstein's function. See
[http://en.wikipedia.org/wiki/Goodstein%27s_function#Sequence...](http://en.wikipedia.org/wiki/Goodstein%27s_function#Sequence_length_as_a_function_of_the_starting_value)
for more. At a starting value of 12 it has already surpassed Graham's number.

The result that this function was used to prove is that there exist computable
functions that cannot be proven to be computable using the Peano axioms. And
the reason that it cannot be proven computable is that it grows faster than
any function that _can_ be proven computable using the Peano axioms.
(Ackermann's function is an example of a fast-growing function that can be
proven computable using the Peano axioms, and therefore which must grow more
slowly than Goodstein's function.)
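To get a feel for how Goodstein sequences behave, here is a small Python
sketch (my own illustration, not part of the proof) that just runs the
sequence directly. It is only safe for tiny starting values; at a starting
value of 4 the sequence already runs for unimaginably many steps.

```python
def rebase(n, b, b2):
    """Rewrite n from hereditary base-b notation, replacing every
    occurrence of b (including inside the exponents) with b2."""
    if n == 0:
        return 0
    total, power = 0, 0
    while n > 0:
        digit = n % b                       # digits are < b, so they stay
        total += digit * b2 ** rebase(power, b, b2)  # rebase the exponent too
        n //= b
        power += 1
    return total

def goodstein_steps(n):
    """Count the steps for the Goodstein sequence starting at n to reach 0.
    Only call this with very small n!"""
    b, steps = 2, 0
    while n > 0:
        n = rebase(n, b, b + 1) - 1   # bump the base, then subtract 1
        b += 1
        steps += 1
    return steps
```

Starting values 1, 2, and 3 terminate after 1, 3, and 5 steps respectively;
a starting value of 4 is already hopeless to run to completion.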

~~~
baddox
When you say "a fast-growing function that can be proven computable using the
Peano axioms," do you mean that the Peano axioms are used to compute the
function, or that the Peano axioms are used to _prove_ that the function is
computable?

~~~
btilly
I mean that they are used to prove it.

Basically anything that you can prove using direct calculation and induction a
finite number of times on the integers can be proven starting from the Peano
axioms. It is not obvious that there are computable functions for which those
methods of proof do _not_ suffice. Goodstein's function grows too fast for
them.

The proof that it is actually computable requires transfinite induction on the
countable ordinals.

~~~
bo1024
Grain of Salt: I'm not an expert by any means.

This is really cool and interesting, but I have to also admit to being deeply
skeptical. I'm going to try to read a lot more on the proof, but it seems
incredible (in a negative sense) that transfinite induction could prove that a
finite number of operations suffices for anything.

Similarly, "computable" and "transfinite" would seem on their face to never
belong in the same proof.

Obviously I don't have the math background to make any claims, but right now I
can't convince myself that this could possibly prove what people claim it
does. (Edit: Or maybe a better way to state my objection is, it seems to me
like the axioms one would need to accept to prove this might not really make
sense when trying to describe computable systems.)

~~~
btilly
The wiki link I gave before outlines the transfinite induction proof. I can
give the idea of the same proof without any reference to infinite ordinals.

First let's take a slightly more verbose notation than the wiki link uses.
Let's say that B(b, b', n) is the operation put n into hereditary base b
notation, then change the base to b'. Let's define G(b, n) as 1 if n = 0, and
otherwise as 1 + G(b+1, B(b, b+1, n) - 1). Then the original G(n) is just G(2,
n).
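These definitions transcribe directly into code. A minimal Python sketch
(runnable only for tiny n, since the values explode almost immediately):

```python
def B(b, b2, n):
    """Put n into hereditary base-b notation, then change the base to b2."""
    if n == 0:
        return 0
    total, power = 0, 0
    while n > 0:
        digit = n % b                      # digits are < b and are unchanged
        total += digit * b2 ** B(b, b2, power)  # exponents are rebased too
        n //= b
        power += 1
    return total

def G(b, n):
    """G(b, n) = 1 if n = 0, and otherwise 1 + G(b+1, B(b, b+1, n) - 1)."""
    return 1 if n == 0 else 1 + G(b + 1, B(b, b + 1, n) - 1)
```

For instance G(2, 1), G(2, 2), and G(2, 3) come out to 2, 4, and 6, while
already G(2, 4) is far too large to compute.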

If n < b, then by induction on n we can easily prove that G(b, n) is
well-defined (here being well-defined is the same as being a finite number);
in fact it is just n + 1.

If n = n_1 b + n_0, with n_1 and n_0 < b, then by induction on n_1 we can
easily prove that G(b, n) is well-defined.

Now by induction we can easily prove that if 2, n_0, n_1, n_2 are < b then
G(b, n_0 + n_1 b + n_2 b^2) is well defined.

We can go farther and by induction we can prove that if m, n_0, n_1, ..., n_m
< b then G(b, n_0 + n_1 b + n_2 b^2 + ... + n_m b^m) is also well-defined.

Now we're in a position to prove that G(b, b^b) is well-defined. After that we
repeat the previous proof and find out that if k < b^b then G(b, k + b^b) is
well-defined.

Now we can do an induction proof that G(b, 2 b^b) is well-defined.

And so on. In fact what we discover if we're careful is this. For each
hereditary base pattern, there exists an inductive proof that G(b, n) with
that hereditary base pattern is well-defined. The challenge is that the
necessary proof _changes_ as the hereditary base patterns get bigger.

What transfinite induction provides is a way to combine the underlying idea
behind all of those different inductive proofs into a single one. In
particular every hereditary base pattern is mapped onto a
(possibly infinite) ordinal. The operation B(b, b', n) does not change which
ordinal you're mapped onto. But the operation -1 always decreases the
ordinal. Thus we get a strictly decreasing sequence of ordinals. And
transfinite induction says that every strictly decreasing sequence of
ordinals must terminate after a finite number of steps. The pattern of the
necessary inductive proofs is
dependent upon the ordinal.
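As a concrete illustration of this mapping (my own worked example, using the
usual trick of replacing the base b by ω everywhere in the hereditary base
notation):

```latex
% n = 10 in hereditary base 2:
n = 2^{2+1} + 2 \;\longmapsto\; \omega^{\omega+1} + \omega
% changing the base leaves the ordinal fixed:
B(2, 3, n) = 3^{3+1} + 3 \;\longmapsto\; \omega^{\omega+1} + \omega
% but subtracting 1 strictly decreases it:
3^{3+1} + 2 \;\longmapsto\; \omega^{\omega+1} + 2 \;<\; \omega^{\omega+1} + \omega
```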

What the Peano axioms lack is any way to unify all of those individual
inductive proofs into a single one. Thus for any given integer n we can come
up with a proof that G(b, n) is well-defined. But we can't prove that fact for
all positive integers n with a single proof.

Therefore this is really an exercise about the limitations of the set of
possible proof techniques that the Peano axioms provide, and not about the
limitations of computability.

(Of course the limitations of computability on real machines are much, much
stricter than the limitations of the Peano axioms. We can't say much that is
useful about G(12) other than, "Unbelievably big.")

~~~
bo1024
OK, that is really cool, thanks!! I may get back to you with questions once I
wrap my head around it a bit more.

------
andrewingram
If you live in London, Matt Parker (one of the guys in the video) regularly
does various comedy gigs in the area. I highly recommend them, they're always
mathematical or scientific and a lot of fun.

He's done his bit on Graham's number in a couple of them before.

~~~
ColinWright
Indeed, Matt is always entertaining. I regularly speak on the same program,
appearing with him in, for example, "Festival of the Spoken Nerd" as a guest
speaker.

Details of the FotSN tour can be found here:

<http://festivalofthespokennerd.com/>

~~~
andrewingram
I think I saw your juggling talk at FOTSN in Shepherd's Bush, very
entertaining! But you were up against the tough competition of a Lego clock.

------
huhtenberg
For those without Flash Players -

[0] <http://en.wikipedia.org/wiki/Graham%27s_number>

[1] <http://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation>

------
defen
A rather obvious mathematical fact, which I never really noticed before
watching this video (perhaps because of notation or the rarity with which it
comes up) is that exponentiation and its higher-order forms (arrow notation)
are not associative. Is this just a mathematical curiosity or is there any
consequence to it? Is there a reason right-associativity was chosen as the
default?

~~~
dbaupp
I think right associativity was "chosen" because left associativity would be
redundant (since (a^b)^c = a^(b*c)), and so it saves on bracketing and
generally makes things slicker.
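A quick Python check of both identities (nothing special about the particular
numbers chosen):

```python
# Left association collapses to multiplication in the exponent,
# so it adds nothing new:
assert (2 ** 3) ** 4 == 2 ** (3 * 4)   # both are 4096

# Right association is the genuinely new operation, and much bigger:
assert 2 ** (3 ** 4) == 2 ** 81
assert 2 ** (3 ** 4) > (2 ** 3) ** 4
```

This is also why Python's own `**` operator associates to the right.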

------
mateo42
I was lucky to get Graham as a professor for an undergrad discrete math course
at UCSD. He's incredibly funny and besides his ability to juggle can also
speak a fair amount of Mandarin. Definitely one of my more memorable classes.

------
madiator
For a moment I thought this was like an Erdős number or Bacon number, but
with PG at the center. Your number is one if you have worked with him, etc.

~~~
chrishepner
_Ron_ Graham was a close friend of Erdős, so if you were to co-author a paper
with him, you would have an Erdős number of 2. ;)

~~~
ColinWright
I have, and I do.

------
marquis
After experiencing a significant moment of inability to conceptualize while
watching Graham's number being described, I think the brain might have some
kind of safeguard to stop your cognitive functions from failing. Either that
or I need more practice in imagining large numbers? Do mathematicians learn
special tricks for holding large-number concepts in their minds?

~~~
IvoDankolov
Yes, I can quite easily think of Graham's number - I call it G.

You might think that I'm saying that in jest, but it is actually the way of
things. There are quite literally an infinite number of numbers. Why should
Graham's be important enough for us to consider? For that matter, why should
we consider 1?

The answer lies in part in the question - why - it must be important in some
way. That is exactly the case - we only consider numbers that, for some
reason, relate to problems we are facing, whether it be in mathematics,
programming, or grocery shopping, and even then not on their own merits. Past
a certain point, you don't hold numbers in your head as little balls, but
rather as their decimal encoding. That's a clever hack, but not without its
downsides. Ask some people at random how much budget they would allocate, if
they were in a position to decide, to save 10000 birds, and ask others the
same with 100000 birds. The mean of the second answer would not be ten times
that of the first (unless your subjects know about and actively try to work
around that particular bias).

But back to Graham's number - if you think the only way to imagine it is to
hold its decimal notation in your head - the obvious question is why? Why not
hexadecimal? Octal? Why any base-p encoding at all? The usual reason to use
base-10 is to quickly get an idea of the magnitude of a number relative to
things we are familiar with and do some simple arithmetical manipulations on
it. Both of these are pointless for Graham's number. Relative to familiar
things like billions and trillions it's quite a way off the chart, and
adding, subtracting, or even raising to powers of such numbers makes
essentially no difference.

Besides which, in this case it isn't really pointless, it's impossible. Jokes
about collapsing into black holes aside, there aren't enough bits of
information in the universe to encode the decimal representation of Graham's
number. There aren't enough bits of information to encode the number of digits
of that. Not enough even to encode the number of digits of the number of
digits. Interestingly, there also aren't enough bits to encode the number of
times you'll have to repeat taking the base-10 logarithm to get back to a
universal scale.

Which goes to show that no entity in the universe, your mind included, can
accomplish the feat of representing an arbitrary number in decimal notation,
mathematician or otherwise. And what is the point, really? G is a perfectly
good symbol, and so is g_64. Usefulness is important, not imagining a string
of balls. For that matter, I bet you've been perfectly content to use Pi on at
least a few occasions, even though in terms of representability in decimal, it
is infinitely worse than Graham's number.

~~~
csense
> The mean of the second answer would not be ten times that of the first

I'm not sure how good an example this is to argue your point, because the
second answer _should_ only be ten times the first if the cost structure is a
linear function with a zero constant term. Which seems like an awfully big
assumption -- you might get economies of scale (nonlinear, decreasing
derivative), or you might pick the low-hanging fruit first and then have to go
after more difficult birds (nonlinear, increasing derivative), or you might
have fixed costs that give you a constant term.

That being said, many people do have a lot of difficulty conceptualizing the
difference between millions, billions and trillions [1].

[1] <http://xkcd.com/558/>

------
prezjordan
Blog spam? Please link to the YouTube video instead - the article gives no
extra information.

Nevertheless - very interesting topic! Also terrifying.

