Who Can Name the Bigger Number? (scottaaronson.com)
223 points by jeremynixon on Feb 16, 2015 | 145 comments



When I was around 14 years old I was babysitting a kid who was 3 or 4 or something. He came up to me and said, "I bet I can say a bigger number than you," clearly proud of what he had just learnt in preschool. I accepted the challenge; he took a deep breath and said "Five hundred!", so I said "Five hundred and one". He looked determined and said "Ok then! Five hundred and fifty hundred and five hundred million hundred five hundred" (or something that makes no sense like that). So I said "Five hundred and fifty hundred and five hundred million hundred five hundred, and one". He looked horrified and said "Can you just say 'and one' for any number?" I told him yes, and he said "Cool!".


And ever since, that poor kid has believed that "Five hundred and one and one" was a valid number.


It kind of is; it's just not particularly well formed, and linguistically ambiguous.


A bit like 99 in French: quatre-vingt-dix-neuf

four twenties and ten and nine


I knew the Danes were counting in a crazy way, but I didn't know the French did this as well.

One more reason I'm happy English is the most common international language I guess.



According to Wiktionary [1], "four score and seven" was a commonplace way of saying "eighty-seven" in 19th c. English.

[1] https://en.wiktionary.org/wiki/four_score_and_seven_years_ag...


Peano says it is a valid number.


Given the other comments I wonder whether the Meta-Algorithm for big numbers is basically:

Compose a state space that expands as quickly as possible in terms of the argument(s) and define some property of the state space as the result.

Looking at it this way, I'd say:

"Hyper-exponentiation" (Ackermann, tetration and so on) is basically counting the number of states in a well-defined state space.

BB(X) is equivalent to generating automatically the state space (via a search over all possible state spaces that can be constructed with a TM with X states) and returning the size of the maximum.

This means that BB(X) is smaller than the sum of the sizes of all possible state spaces constructible via that TM.

But BB(X+1) is aware of that trick (summing state space sizes) and uses it already (or more probably something better), so it doesn't really matter and you might as well just increase X by one to include all the tricks you had in mind for BB(X).

Any thoughts on that?


BB_2(X) uses a different trick. It asks for the largest number computable by any program of length X that's allowed to call a BB oracle with arbitrary arguments. That's more powerful than any finite expression involving BB and computable functions, like BB(F(X)), BB(F(BB(X))), etc.

More generally, I expect there to not be any "meta-algorithm" for naming big numbers. Every big step will require a new insight. A similar problem is inventing notations for large ordinals: http://en.wikipedia.org/wiki/Ordinal_notation


I looked it up and this page showed up: http://web.mit.edu/nelhage/Public/big-number.txt. Pretty interesting.


If BB is the highest paradigm published, and it is assumed that BB numbers, given enough time and resources, can be computed, and you can't use BB_2, why can't you simply go:

    BB(BB(BB(BB(BB(11111)))))
Repeat as many BB's as you have space for on your 1000-character card. You might even use math notation to define a new function where

    BB-(n) = [BB(BB(.......]
              <---- n ----->
And then you might have:

    BB-(BB-(BB-(1111)))(BB-(BB-(BB-(1111)))(11111))
Or some such monstrosity.


So this turns out to be less interesting mathematically than you might think. The entire point of Busy Beaver numbers is that they use the full power of computation available for a given number of states, so as soon as the "normal" Busy Beaver reaches a number of states sufficient to describe whatever trick you're interested in pulling, the Busy Beaver sequence itself already uses that trick!

One of the most important sentences in understanding them is one that's easy to pass by in the original work: "Indeed, already the top five and six-rule contenders elude us: we can’t explain how they ‘work’ in human terms."

That is, while we humans are thinking we're all clever by defining repeated iterations of the BB ruleset itself, all of these things that we think we are being so clever with are actually very very compactly described by Turing Machine states. Meanwhile, even by BB(6) the TM is metaphorically "using" incomprehensible tricks to be larger than we'd even dream. If repeated iterations of BB'ness can be trivially described in, say, 10 states, and let's say we reserve 4 states for BB(4) to describe the number of times to iterate, our hand-constructed "clever" re-iteration of the BB procedure will be blown away by some other 10-state machine that we can't even imagine would ever terminate.

So on the one hand, yes, BB(BB(20)) is incomprehensibly larger than merely BB(20), and yet, on the other hand, in a profound way, it also doesn't matter. Busy Beaver in a sense shows us numbers that have all the practical properties of infinity, even if they are not technically infinite. In a sense BB(30) < BB(31) is obviously true, yet, ultimately, a humanly meaningless statement, since grasping either number in any sense beyond its mere definition is impossible. We might as well say that "blibble" is less than "bloo". And not merely impossible in the sense that it is impossible to even really properly grasp just how far it is to Alpha Centauri, but impossible in the sense that we are literally incapable of even constructing mathematical tools that will let us grab on to and manipulate such numbers... deeply, profoundly, computationally impossible for us to understand, not merely "mind-blowing", but impossible for us to understand in the absolute strongest sense of the term.

Similarly for trying to catch up to BB with yet more iterations of exponentiation... the procedure we humans use is shockingly, shockingly simple to describe programmatically, which means that BB automatically already encompasses practically all such tricks very early in its sequence, which means you can't defeat it that way.

Busy Beaver is metaphorically (again) much smarter than you, and it's not worth your time to try to outsmart it to make yet again bigger numbers.

This also, probably correctly, implies that attempting to adjudicate a contest in which some people write BB(BB(BB(x))) vs. some other BB-based notation is also impossible, and you'd actually fail out of the contest for writing an ill-defined number: if we can't compare the two numbers to see which is larger, even in principle, then for the purposes of this contest the number is ill-defined. Busy Beaver in a sense also sets the limit of what a well-defined number even can be, and is thus in some sense the natural limit of this contest, by virtue of simply transcending, through sheer incomprehensible size, all our computationally-naive human tricks for making big numbers, or even just numbers of any size at all.

It's really a profound sequence.


> we are literally incapable of even constructing mathematical tools that will let us grab on to and manipulate such numbers

I think this is really the key to your point. Part of me wonders whether this should even make the example in the essay, BB(111), invalid. If there is no way in our universe to ever know exactly what BB(111) is, can it really be considered a well-defined number? That said, I'm no mathematician, so I'm probably off-base here.

My first thought reading the challenge was a series of 7^7^7^7... since the 7s would nest together nicely (unlike 9s) :P


> If there is no way in our universe to ever know exactly what BB(111) is, can it really be considered a well-defined number?

There's a branch of Maths called constructivism which requires values/proofs/etc. to be "constructable" in principle. For example, the law of the excluded middle ('for all X, either X is true or (not X) is true') is not constructive, since it doesn't give us a value: we don't know whether we're dealing with X or (not X).

http://en.wikipedia.org/wiki/Constructivism_(mathematics)

Constructive mathematics turns out to be very closely related to computation, and is one reason why functional programming gets so much research attention.

Constructivists don't have a problem with infinite objects, like the set of all Natural numbers, if they can be represented in a finite way (eg. as a function). There's another branch of Mathematics known as finitism, which regards infinities as not existing. What you're describing is ultrafinitism, which regards really big things as not existing: http://en.wikipedia.org/wiki/Ultrafinitism


It is well defined - it's "just" a matter of computation. There are a small number of possible 111-state Turing machines - well, ok, there's actually a very large number, since it's one of those things that grows exponentially, but still, it's very much a finite number. For each of those possible Turing machines, either it halts or it doesn't, something a programmer should be able to grind out; if it halts, then it necessarily does so after some finite number of steps. So in principle we could compute all of those and then take the max of them. In practice it's a lot of steps - but we understand all the steps, so it would just be a matter of "cranking it out".
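At toy scale this is actually doable. Here's a brute-force Python sketch (function names are mine) that enumerates every 2-state machine under a step cap. Since every halting 2-state machine is known to stop within 6 steps, the cap is harmless here; in general, though, the cap is exactly where the halting problem hides, so this only yields a lower bound:

```python
from itertools import product

HALT = -1  # sentinel "next state" meaning the machine halts

def run(machine, max_steps):
    # Simulate a Turing machine on a blank two-way tape. Return the
    # number of 1s left on the tape if it halts within max_steps,
    # or None if the step cap is exceeded.
    tape, pos, state = {}, 0, 0
    for _ in range(max_steps):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == HALT:
            return sum(tape.values())
        state = nxt
    return None

def sigma_lower_bound(n_states, max_steps=100):
    # Enumerate every n-state machine; keep the best halter found.
    # The step cap makes this only a *lower* bound on Sigma(n) in
    # general -- proving that the loopers loop is the uncomputable
    # part. (For n <= 2 it happens to be exact, since S(2) = 6.)
    actions = list(product((0, 1), (-1, 1), tuple(range(n_states)) + (HALT,)))
    keys = list(product(range(n_states), (0, 1)))
    best = 0
    for choice in product(actions, repeat=len(keys)):
        ones = run(dict(zip(keys, choice)), max_steps)
        if ones is not None:
            best = max(best, ones)
    return best

print(sigma_lower_bound(2))  # 4, the known Sigma(2)
```

The same loop with n_states=111 is "just computation" only in the sense described above: the enumeration is finite, but no fixed step cap can be justified without halting proofs.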

Conversely if you want to insist on a "constructible" number then you kind of have to insist on unary notation. Maybe a computer can give you a numerical value for "11^11^11^11", but can you really consider it well-defined if you can't put that many pebbles in a row? There are a few mathematicians who take this kind of view - see http://en.wikipedia.org/wiki/Ultrafinitism - but it's very much an extreme position.


I have some issue with how you're generating the BB numbers, though it's not necessarily mathematical. Fundamentally, "cranking it out" is exactly what you cannot do to generate the BBs: following any one specific set of instructions for generating the BBs is a futile exercise.

The problem arises when you say that "a programmer should be able to grind out" whether or not a TM halts, which you use to get around the fact that a TM cannot solve the halting problem. However, I'd question whether this is a trivial exercise: while we are certainly capable of recognizing infinite loops in the code we write, I'm not certain that humans can identify arbitrary infinite loops. Whether or not we can isn't a trivially answerable question: it comes down to whether our brain's neural networks can be modeled by a sufficiently large TM, and, if they cannot, what differences between our brains and a TM exist and why those would effectively allow us to solve the halting problem.

So I'd question whether finding the BBs is "'just' a matter of computation", because I'm not convinced that humans can solve the TM halting problem.


I'd say that it is a trivially answerable question - "no, we cannot recognize arbitrary infinite loops" is the answer. It is easy to encode difficult-to-prove statements about number theory into loops. Consider a program that searched for a counterexample to the Collatz conjecture, or the Riemann Hypothesis.


Very good point. There's no reason to think we can solve the Halting Problem any more than a TM can, so the 'solved' BBs mentioned in the article are only solved in the sense that we puny humans had a look at all possible programs (for BB(3), for example), focused on the ones that didn't appear to be halting, and decided by inspection that they wouldn't halt. That inspection process is not mathematically well-defined, so I'd argue that BB numbers should be inadmissible for the contest.


The inspection can include proofs that the loopers loop, with the full mathematical power of the word "proof" behind it, so I'm willing to trust the known numbers. But the proofs can not be general (or at least not past a certain point which probably comes up pretty quickly), and will get ever more difficult to perform as we dig ever further down the sequence.


Thank you; this is a better explanation of what I was getting at in the GP comment. While it is theoretically possible that we could conclusively find BB(X) for any X, we don't know, nor can we prove, that we can (aside from the ones that have already been proven). Therefore I don't know that it makes sense to talk about it being a specific number, but rather the concept of a number (more like his "number of grains of sand in the Sahara" example, which he explicitly mentioned is not valid^.)

^Although his reasoning is that it is not valid because it is always changing, not specifically because it is unknown. Still, I assume that "Number of grains of sand in the Sahara at exactly midnight, Jan. 1 2050, on this atomic clock" would also be disallowed, even though it's possible we would somehow be able to know exactly what that number is in the future.


BB(111) is not computable by "just computation". The issue is you have to prove that the programs that don't halt, don't halt. There is no general way to do this (Halting Problem). In fact once you get to a large enough n, BB(n) will be undecidable in whatever axioms you're using. We already have a simple proof of this by taking any known undecidable statement and turning it into a Turing machine halting problem.


Define C(n) = [BB-(BB-(BB-....] And define D(n) = [C(C(C(C(...n)] And define E(n), and so on.

Then make the Greek letters the entire alphabet iteration process (meaning BB-n through Z(n)). And then make the Hebrew letters the entire Greek letter process.

Look at how much fun we're having!


> and it is assumed BB numbers, given enough time and resources, can be computed

They can't be computed - they can only, at best, be proven to be within some bounds.

But it's unimportant since it's not really required in order to do what you said.


The logical conclusion of this is considering all general computations that can be done with a BB calculator in hand, which leads you to the hyper BB numbers that he refers to, or BB_2.


I got the idea his BB_2 is different. He's referring to a super Turing machine that can solve the halting problem for non-super Turing machines.

The BB-2 used in an iterative algorithm is talking about "A busy beaver number of (A busy beaver number of 10)", no need for super machines here. The iterative algorithm is short enough you can define it in your 1000 characters, no need for additional publication.


But the super Turing machine's busy beaver numbers vastly outgrow your iterated busy beaver very quickly. It doesn't take many states to write a program that does iterated function application, and to a super Turing machine, the busy beaver function is easy to write.


OK, but as the article states you'll have to write a paper to a low-level journal to use it, and I'd have to be really into and good at mathematics to consider doing exactly that to win a name-the-biggest-number competition...


There was a contest to produce programs generating large numbers in a version of C with integral types that hold arbitrarily-large integers: http://djm.cc/bignum-rules-posted.txt

You can see the results here: http://djm.cc/bignum-results.txt -- there's some interesting stuff in there!


Yeah, that was a great contest as well. Though the numbers involved are necessarily much smaller than even the BB numbers, because they must be computable.

Also see this old HN discussion where Eliezer proposed a good way to beat the winner: https://news.ycombinator.com/item?id=1542770


Why not mention factorials? 9!! is way bigger than 9^9^9. And you can write many exclamation marks in 15 seconds.


As peterjmag indicated in his comment, 9^9^9 parses as 9^(9^9), which is actually greater than 9!!. A different way of looking at it is n! < n^n. From this we can see that the factorial isn't really the new paradigm that the author is looking for; it's just a part of the exponentiation paradigm.

Furthermore, factorials don't really scale or stack easily. What the author is getting at in the relevant passage is stacking the same concept: 1. Multiplying is just adding the same number several times. 2. Exponentiation is just multiplying the same number several times. 3. Tetration is just exponentiating several times. 4. Etc.

This allows us to generate the infinite hierarchy easily expressed by the Ackermann numbers (basically A(i) = f_i(i,i)), which doesn't arise as easily with the factorial in place of exponentiation.
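That stacking can be written as one recursive ladder, e.g. in this Python sketch (`hyper` is a made-up name, and the bottom of the ladder is shortcut with ** to keep the recursion feasible):

```python
def hyper(k, a, b):
    # Level k applies level k-1 repeatedly, b times:
    # hyper(2) = exponentiation, hyper(3) = tetration,
    # hyper(4) = pentation, and so on.
    if k == 2:
        return a ** b          # shortcut the bottom of the ladder
    if b == 1:
        return a
    return hyper(k - 1, a, hyper(k, a, b - 1))

print(hyper(3, 3, 3))  # 3^3^3 = 7625597484987
print(hyper(3, 2, 4))  # 2^2^2^2 = 65536
print(hyper(4, 2, 3))  # 2 pentated 3 = 65536 (hyper(4, 2, 4) is already hopeless)
```

The factorial never appears as a rung on this ladder, which is the sense in which it isn't a new paradigm.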


When writing factorials you would want to write (9!)! since 9!! is actually a different operation (the double factorial). 9!! = 9 x 7 x 5 x 3 x 1, so 9!! is less than 9!.


Wow, 9!! is so large that the exponent almost needs its own exponent.

    9^9^9 =~ 2e77
    9!! =~ 1.6e1859933


Correct me if I'm wrong, but aren't you supposed to evaluate stacked exponents from the top down? That is, 9^(9^9) instead of (9^9)^9. If that's the case, 9^(9^9) is much larger than 9!!. Though I'm not sure how much larger, since I couldn't find any big integer calculators online that would give me an actual result for 9^(9^9).
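For what it's worth, Python's ** is right-associative too, and you can compare the sizes via logarithms without ever materializing the numbers (reading 9!! as (9!)!, as the grandparent did):

```python
import math

print(2 ** 2 ** 3)  # 256, i.e. 2 ** (2 ** 3), not (2 ** 2) ** 3 == 64

# Decimal digit counts, computed from logarithms:
digits_power = 9**9 * math.log10(9)                              # for 9^(9^9)
digits_fact = math.lgamma(math.factorial(9) + 1) / math.log(10)  # for (9!)! = 362880!

print(round(digits_power))  # roughly 3.7e8 digits
print(round(digits_fact))   # roughly 1.9e6 digits
```

(math.lgamma(n + 1) is ln(n!), so dividing by ln(10) gives the base-10 log of the factorial.) So 9^(9^9) has a couple of hundred times more digits than (9!)!.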


9^(9^9) is ... wait for it... 4e369693099 bigger than 9!!

http://www.wolframalpha.com/input/?i=9%5E9%5E9+-+9%21%21



Ah thanks! How could I forget Wolfram Alpha? =)


It seems you're correct - I was just pounding the values in sequence into a calculator, much like a determined monkey would.


Good one! I was only thinking of e^e^9 which is way smaller than 9!!


It seems notable that so many different variations of "naming large numbers" all end up mapping iterated applications of a function into a new function that takes a number of iterations as a parameter. Knuth's up-arrow notation defines iterated exponentiation, iterated iterated exponentiation, and so on. Ackermann's function makes the number of up-arrows just a numeric parameter to the function.

But you could go further than that: A(A(10)) is much bigger than A(1000), so you can get larger numbers faster by iterated applications of the Ackermann function. Turn the iteration into a function, and let B(n) be n applications of A to n. Iterated application of B would be even faster, so turn that iteration into a function: let C(n) be n applications of B to n.

But this process itself is iterative. So, define a new function: let iterA(n) be the nth level of iterated Ackermann functions applied to n, where iterA(1) is A(1), iterA(2) is B(2), iterA(3) is C(3), and so on.

Now, iterA can be applied iteratively. So, repeat again. The resulting function can be applied iteratively, so repeat again.

And this whole process of turning iteration into a function and then iterating that function is itself an iterative process, so it can be turned into a function...
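With a deliberately tame stand-in for A (the real Ackermann function would explode immediately; all names here are made up), the "turn iteration into a function, then iterate that" ladder can be sketched in Python:

```python
def iterate(f, k):
    # The k-fold composition of f: f(f(...f(n)...)), k times.
    def g(n):
        for _ in range(k):
            n = f(n)
        return n
    return g

def next_level(f):
    # The "B from A" step: n applications of f, starting from n.
    return lambda n: iterate(f, n)(n)

A = lambda n: n + 2   # tame stand-in for the Ackermann function

def ladder(k):
    # Climb k rungs: ladder(1) is B, ladder(2) is C, and so on.
    f = A
    for _ in range(k):
        f = next_level(f)
    return f

print(ladder(1)(5))  # B(5) = 5 + 2*5 = 15
print(ladder(2)(5))  # C(5) = 3^5 * 5 = 1215
```

And ladder itself can then be fed into next_level-style constructions, which is exactly the regress described above.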


In your notation B(n)=A[n](n), where [] denotes the number of iterations. So C(n)=B[n](n)=(A[n])[n](n)=A[n·n](n), so really iterA(n)=A[n^n](n). If you repeat this procedure with iterA, then you get iterA2(n)=A[n^(n^n)](n). While n^n, n^n^n and so on are definitely fast-growing, you are better off putting A into []. So you can write an extremely fast-growing function as f(n)=A[A(n)](n). So one can improve your strategy by using the [] notation, resolving the recursion, and putting large functions inside.

And of course my method can be improved with iterations, which can also be resolved by creating a new notation. It's really never-ending.

A[A[A(n)](n)](n)

And it's all computable, so the Busy Beaver grows faster than any* of these.


So the competition is not really to name the biggest number but rather to write the deepest iterator in 1000 characters and apply it to the highest paradigm function. More like a programmer's challenge.

Let A = {fastest published sequence}. # This is a macro for BB, Ackermann's, etc.

Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^Iterate^10(A, 10)


IMO you can't win big number competitions by such a low-level approach. At least notice that the concept of "deepest iterator in 1000 characters" is itself mathematically definable, and define it as D(1000) or something. Then try to find connections between that and BB numbers... Coming up with the "highest paradigm function" is the important part of the challenge, iteration is unnoticeable by comparison.


yes! I've had this exact thought before. Recursing and wrapping the total recursion as fast as you can is the fastest way.

Actually, the speed would be limited by your linguistic or programmer ability to concisely state the level of recursion you want (ideally the logical maximum) given all the previous inputs up until the recursion statement -- e.g. "now recurse steps a, b, and c and call that d" is not as strong as "imagine how the slope of the exponential curve increased between a and b, and the difference between that and b and c, and raise c to the power of both of those increases combined raised to the power of a, b, and c respectively" .. but obviously the second sentence is longer and more complex as well as "bigger".


I have been thinking about this problem for a year or so now. I was introduced to the concept watching a Numberphile video, and several hours later I read this article.

I've been taking a somewhat metaphysical approach to thinking about it. Instead of thinking about a Turing Machine as just a head on an endless tape, can we not consider the Universe to represent the upper limits for our calculations? Just as Turing suggested that we can take the two dimensional representation of a mathematical equation and represent it in one dimension, can we represent the entire state of the Universe in a similar way on a hypothetical Turing Machine? The halting problem is then a question of whether or not the Universe has a halting state.

To extend from that, any machine that can be conceived must fit within the bounds of the Universe. Of course BB(Universe) is incalculable, but I think that it defines an upper limit. It is pointless to consider a Turing Machine that features an endless tape if an endless tape is an impossibility.

I'm not sure if this adds to the discussion, but I haven't had anyone else I could discuss this with until now.


I suspect a physicist would say your idea rests on a view of the universe where everything is deterministic and knowable and encodable.

You also have to remember that some quantities are going to be continuous, while TMs are discrete. Even an "infinite" TM can't truly encode real numbers (I think) because its infinity is of the countable variety, while Real numbers are uncountably infinite.


True, I am making an assumption that it is deterministic.

If we presume that the Big Bang model of the Universe is true, then in the earliest moment of time, can the state of the Universe be measured? If it can be measured, how does it transition from a measurable state to an immeasurable one?

I'll have to give some more thought to what you've said, but I wanted to give you a basis for why I believe it could be deterministic. Clearly I'm making some assumptions, and based on my postulate, this is a state that we could never measure ourselves. I don't know if our inability to make these measurements removes the possibility of determinism.


By postulating an endless tape you're just stalling petty objections based on how long a tape would be required for any given task. The only point that's relevant, when considering Turing machines in the abstract, is that the amount of tape required for any given task be finite.


It seems like you are approaching a philosophy of mathematics called "Ultrafinitism" (http://en.wikipedia.org/wiki/Ultrafinitism). It is a very interesting subject and shouldn't be dismissed prematurely.


I'll need to read about this. Thank you.


Mathematician Edward Nelson has a very interesting take on "big" numbers. He claims that the exponential function is not necessarily total, and that a number like 2^1000000000000 might not actually exist. The reason he singles out exponentiation is that the reasoning by which exponential numbers are "real" numbers is circular (impredicative). According to Nelson, speaking about such numbers might lead to contradictions, just like speaking about "the set of all sets that don't contain themselves" leads to a contradiction.

He also relates these issues to Christian philosophy, which I find very interesting. In particular, he claims that the a priori belief in the objects defined by Peano Arithmetic is equivalent to worshipping numbers, as the Pythagoreans did.

I think this is the best starting point if you're interested in reading about his ideas: https://web.math.princeton.edu/~nelson/papers/warn.pdf


Nelson withdrew the claim in this paper based on a blog interaction with Terry Tao.

http://m-phi.blogspot.com/2011/10/nelson-withdraws-his-claim...

The whole episode (a reputed Princeton mathematician publishes a paper questioning the foundations of math; Terry Tao 'disproves' him in a series of blog comments) is one of the more interesting recent happenings in the world of math.


You've misunderstood the situation.

I was referring to earlier work by Nelson in which he explains why he thinks Peano Arithmetic may be inconsistent.

Later, in the incident you referred to, he claimed to have an actual proof. Terry Tao showed the proof was wrong, and Nelson accepted this.


I'm definitely missing something about his position. I think I understand why he accepts addition and multiplication but not exponentiation. But I don't understand how he can illustrate that with 80^5000 or other concrete examples. While you can't guarantee to him that x ^ n is a countable number for arbitrary x and n, for any specific x and n, you can, right?


No, and that's the key point. His claim is that if you tried to express 80^5000 as an ordinary number, you would just end up going in circles. Your calculation would never terminate. Now, we can't in practice compute 80^5000 directly, in the sense of writing down that many 1's, so it's a good example to use. If the example were 2^10, we could indeed demonstrate that 2^10 is a real number.


I don't think that makes sense. We can certainly find the numeral for 80 ^ 5000 (open a command prompt and your numerical tool of choice if you doubt me). If your point is that we can't write out enough tally marks, then you're correct, but

1) Not unique to exponentiation, it's also true of 281474976710656 + 79766443076872509863361.

2) This is a vague notion and you're going to have to deal with the problem of the smallest number that isn't expressible (I believe it might be the same as "feasible numbers", and there's a whole theory here)

3) the whole tenor of his critique suggests something a bit more foundational (predicativity, I guess...?)


When I said "Now we can't in practice compute 80^5000 directly, in the sense of writing down this many 1's" I meant what you referred to as "tally marks". I agree "1's" isn't quite the right way to put it.

As you say, the notion of a "real number" is vague without the details (which I'm not really able to describe properly, because I'm only summarizing something I vaguely understand. I'm not an expert on mathematical logic). The best source is Nelson's book "Predicative Arithmetic", https://web.math.princeton.edu/~nelson/books/pa.pdf In the book, he defines what he means by a real number, and shows why exponentiation doesn't satisfy the property that if a^X is a real number, then a^(X+1) is.


I'll read the paper, it sounds interesting. But isn't all Math "circular" -- Goedel's Second Incompleteness Theorem, yeah?

http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_the...


Goedel's incompleteness theorem uses Peano Arithmetic, which Nelson claims is (possibly) inconsistent. He develops his own "predicative" arithmetic, to which Goedel's theorem does not apply.

(large pdf) https://web.math.princeton.edu/~nelson/books/pa.pdf


I'll read it, thanks. I always (layman) understood Goedel's theorem as meaning you can be complete or consistent but not both. So you can have a language that can describe everything but you'll have paradoxes, or you'll have no paradoxes but not be able to describe everything (all possibilities) in your system


Yes, Goedel's Theorem says you can be complete or consistent but not both, but only for systems at least as strong as Peano Arithmetic, i.e. systems that contain the axioms of Peano Arithmetic, possibly along with other axioms.

Usually this is left out, because Peano Arithmetic is treated as an MVP for mathematics. But Nelson claims Peano Arithmetic may be inconsistent, and proposes a weaker system.


Wow. I have a lot of reading to do. Thanks!


Even if an axiom system could prove its own consistency, that wouldn't be any less circular - we could believe T because T proves that T is consistent - but if T were inconsistent then it might still prove that T was consistent.

Most of us take some axioms and believe them, ultimately because they correspond to physical experience - if you take a pile of n pebbles and add another pebble to the pile, you get a pile of n+1 pebbles, and n+1 is always a new number that doesn't =0 (that is, our pile looks different from a pile of 0 pebbles). Maybe there's some (very large) special n where this wouldn't happen - where you'd add 1 and get a pile of 0 pebbles, or where you can have n pebbles but you can't have n sheep. But so far we haven't found that. So far the universe remains simple, and the axioms of arithmetic are simpler than conceivable alternatives.


Now the contest for programmers: write the biggest number in one line of code (80 characters; keywords count as 1 character, whitespace as 0) using the computer language of your choice (standard library, with an infinite integer type), such that any reasonable programmer can prove it is actually a finite number.


Mathematica has important whitespace for indicating multiplication, and it's not clear what counts as a keyword, so here are 80 copy-and-pasteable characters:

  u[n_][a_][b_]:=If[n==0,a b,Nest[u[n-1]@a,1,b]];Nest[u[#^#^#][#]@#&,9,u[99][9]@9]
u[n][a][b] gives a (Knuth's up arrow)^n b. The after-the-semicolon expression computes

  f(f(f(f(... u[99][9][9] fs total ... f(9) ... ))))
with the function f(n)=u[n^n^n][n][n]. This clearly results in a finite number, since it is just iterated iterated iterated iterated ... (finitely many "iterated"s) ... iterated exponentiation of finite numbers.

However, even when I try to compute (after $RecursionLimit=Infinity)

  Nest[u[#^#^#][#]@#&,2,u[2][2]@2]
my kernel crashes. This number is BIG.

There is one obvious way to make this number even bigger: make the base case yield a^b. However, then it's not Knuth's up arrow notation, so it's harder to debug by looking at the wikipedia page :). I used all my tricks (like using @) to get rid of extraneous characters, which gave me space to put #^#^# as the first argument of u. I still had 1 character remaining, so a 9 became 99. If you can squeeze a few more characters #^#^# and 99 should be substituted for u[#][#]@# and 9.

https://en.wikipedia.org/wiki/Knuth's_up-arrow_notation


I just realized 99 should be replaced with "9!".


Using the infix special form ~ we can cram in another ^#:

  u[n_][a_,b_]:=If[n==0,a b,Nest[a~u[n-1]~#&,1,b]];Nest[#~u[#^#^#^#]~#&,9,9~u@9~9]
I should also note that I'm not confident as to which of

  Nest[#~u[#^#^#^#]~#&,9,9~u@9~9]
  Nest[#~u@#~#&,9,9~u[9^9^9^9]~9]
is larger.


Very nice. Mathematica can clearly do the job. But I feel like there is still a lot of room for improvement. Clearly though, the proof would be more and more difficult.

Here is my modification:

  M=Nest;
  u[f_][n_][a_]:=If[n<1,f@a,M[u[f][n-1],a,a]];
  u[#][#@9][#@9]&@(u[#!&][#][#]&)
82 chars total.

comments:

  (*start with the definition of Knuth's up arrow*)
  u1[n_][a_][b_]:=If[n==0,a b,Nest[u1[n-1]@a,1,b]]
  (*treat 1 as a symbol and take 1 == b == a*)
  u2[n_][a_]:=If[n==0,a a,Nest[u2[n-1],a,a]]
  (*next, define it for an arbitrary function f instead of multiplication*)
  u[f_][n_][a_]:=If[n==0,f@a,Nest[u[f][n-1],a,a]]
  (*numerical example when we take n<3 as the base case instead of n==0*)
  u[#!&][#][#]&@3 = u[#!&][3][3] ~ 10^1746
  (*Next take the function f and parameters a to be: *)
  f = u[#!&][#][#]&
  a = f@9
  (*compute final number*)
  u[f][a][a]
  (*those 3 steps are shortened to: *)
  u[#][#@9][#@9]&@(u[#!&][#][#]&)


Smart. It didn't occur to me to have the base case be an arbitrary function. Yours is much larger than mine. One comment: M=Nest; is a waste of characters. I tried that in my solution too, but it wound up costing me an extra character ;). So I think you're down to 75 characters. It might make sense to remove the factorial, and change the base case of u to f@f@f@f@a.


Thanks, I forgot to remove M=Nest, which was beneficial in my previous attempts where I used 3 Nests.

But remember:

Never forget that it is a waste of energy to do the same thing twice, and that if you know precisely what is to be done, you need not do it at all. --- E. E. ``Doc'' Smith (1930)

      ...the optimal solution avoids all pattern.
                                        --- Douglas Hofstadter (1983)
http://djm.cc/bignum-results.txt

So I would recommend avoiding things like f@f@f@f@a, where there is clearly a pattern.


I think there might be a nice way to use #0 to blow up the numbers even further. But I have to do real work :)


Related, this came up in yesterday's thread about Graham's number:

http://djm.cc/bignum-results.txt

https://news.ycombinator.com/item?id=9054274


   num = 0; while (true) { try { num++; } except ( mallocFail ) { return num; } }


Just do -1 on an unsigned number if you're that lazy.


My solution in brainfuck, given a finite tape:

    -[>-]-


It might be easier to use a language like Idris, which can check that functions are total (i.e. that they return a finite number in finite time).


Unfortunately, if a language L has the property that all programs in it are total, that means L is not Turing-complete. In fact, that puts a hard limit on the growth speed of functions definable in L. For example, the Busy Beaver function for L won't be definable in L (because otherwise you'd get the same paradox as in the halting problem), but will be easily definable in any Turing-complete language (because you can just dovetail through all possible programs in L without worrying that any of them will loop forever).
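To make that concrete, here is a toy illustration (my own construction, with a made-up mini-language): in a language where every program is total, the busy-beaver analogue is computable by plain enumeration, with no halting analysis needed at all:

```python
# Toy total language: a program is a string over {'i', 'd'}, read left to
# right starting from the value 1; 'i' increments, 'd' doubles.  Every
# program obviously halts, so the language is total (and nowhere near
# Turing-complete).
from itertools import product

def run(program):
    x = 1
    for op in program:
        x = x + 1 if op == 'i' else 2 * x
    return x

def bb_toy(n):
    # The "busy beaver" of this total language: the maximum output over all
    # length-n programs.  Computable precisely because nothing can loop.
    return max(run(''.join(p)) for p in product('id', repeat=n))
```

bb_toy grows only exponentially (the best length-n program is essentially all doubles), which illustrates the hard growth limit on total languages mentioned above.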


If you're interested in big numbers, David Metzler's "Ridiculously huge numbers" sequence on youtube (https://www.youtube.com/watch?v=QXliQvd1vW0) is well worth a watch.

He gives great explanations of things like the Conway chained arrow notation and the fast-growing hierarchy and just keeps on making bigger and bigger numbers.


> Be precise enough for any reasonable modern mathematician to determine exactly what number you’ve named, by consulting only your card and, if necessary, the published literature.

Does this mean that the modern mathematician must be able to assemble all the digits of the number's representation (say, in decimal)? Given some formalization of this requirement, there would be a fairly definite upper bound on the acceptable integers.


It occurs to me that you could formalize the requirement by saying that there must be a provably terminating algorithm that produces precisely the decimal representation of the integer. That way, numbers like Graham's number can be submitted despite there being no physical way to represent the number in decimal (in the observable Universe).


Does the algorithm need to terminate? It seems like convergence would be sufficient. For example, we have no terminating algorithm to produce pi, but we have plenty of ways to precisely express it.

Of course, given the nature of this problem, you can define your converging algorithm to terminate when it is within a certain error of what the "pure" answer would be. It is possible you would underestimate the number, though, and someone could beat you by using the same method with a slightly different termination condition, so you may want to add 1...


Good point, although this problem involves only integers, so I'm not sure if "convergence" makes sense. Pi, of course, is a computable number, implying that there is a terminating algorithm that, given input n, outputs the nth digit of pi's decimal representation.
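As an aside, pi's computability can be made concrete. The following is a Python transcription of Gibbons' unbounded spigot algorithm (a published algorithm, not something from this thread), which streams the decimal digits of pi one at a time using only integer arithmetic:

```python
def pi_digits(count):
    # Gibbons' unbounded spigot: maintains an integer linear-fractional
    # state and emits a digit only once it is provably correct.
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    out = []
    while len(out) < count:
        if 4 * q + r - t < n * t:
            out.append(n)                      # digit n is now certain
            q, r, n = (10 * q, 10 * (r - n * t),
                       (10 * (3 * q + r)) // t - 10 * n)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)
    return out
```

pi_digits(10) yields the digits 3, 1, 4, 1, 5, 9, 2, 6, 5, 3.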


Determining something exactly does not mean providing its digits. It means uniquely identifying the semantic meaning of an expression. As an analogy, you can know what a word means when it's spoken to you even if you can't spell it. Or you can understand what a program written down on paper computes (and prove that it computes that thing) without knowing the explicit output of a particular input.


There must be more to it than that. For one thing, it would be necessary to determine which of the submitted numbers is larger, which according to my interpretation requires more than "uniquely identifying the semantic meaning of an expression."


I really enjoyed this article, it touches on so many different areas of math and computer science and connects them in interesting ways.


I don't get why the BB(x) numbers should grow faster than A(x). They're harder to compute - true. They're more difficult to prove right - true. But I cannot find anything in the article that proves BB actually grows faster than the Ackermann numbers.

Unless you can always program A(x) in less than x steps on a Turing machine?


No, BB is impossible to compute (as a sequence, that is). A(x) is not.


It's impossible to compute them - sure. But the claim was that: "The sequence of Busy Beaver numbers, BB(1), BB(2), and so on, grows faster than any computable sequence."

The explanation was that: "Because if a Turing machine could compute a sequence that grows faster than Busy Beaver, then it could use that sequence to obtain the D‘s—the beaver dams." But these goals seem different to me.

Another sequence you cannot compute is "foo(x)" - the number of integers lower than x for which algorithm "bar(x)" doesn't halt. It's impossible to compute if "bar(x)" doesn't always halt. But the sequence cannot grow faster than "x" itself.

Also, it doesn't matter that you cannot compute them if every TM of a given size can be analysed and proven halting. It doesn't have to be computed (as in, automatically checked).

The fact you can compute A(x) would only be an issue if you could compute A(x) with a machine of size smaller than x.


I think you are missing some basic concepts here. There is a single Turing machine (of constant size) which, when given any integer n, outputs A(n). This is the statement that A(n) is computable. One can prove that there is no such Turing machine for the function BB(n) or any function f(n) that bounds BB(n). Such a function would be used to solve the Halting problem as follows: given a TM M and a string x, compute k = f(|M|) and run the TM on x for k steps. If the simulation halts within k steps, output "halt" and otherwise output "loops forever."
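Here is a hedged Python sketch of that reduction (the machine is abstracted to a step function, and the impossible BB-bounding function is simply passed in as an assumption):

```python
def decides_halting(step, bound_steps):
    """Would-be halting decider.

    step: state -> next state, or None once the machine halts.
    bound_steps: assumed to be >= the busy-beaver bound f(|M|) for this
    machine's size; no such computable f exists, which is the point.
    """
    state = 0
    for _ in range(bound_steps):
        state = step(state)
        if state is None:
            return True      # halted within the bound
    return False             # by the assumption on bound_steps: loops forever

# Toy machines: one counts up to 10 and halts, the other increments forever.
halts = decides_halting(lambda s: None if s >= 10 else s + 1, 100)
loops = decides_halting(lambda s: s + 1, 100)
```

halts comes out True and loops False; the contradiction with the undecidability of halting is what rules out any computable f bounding BB.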

Your example of foo/bar(x) is not quite correct. If you fix a program bar, then it is not necessarily impossible to characterize (and hence compute) the set of strings for which bar(x) halts. This is not the statement of the Halting problem, and as far as I know there is no theorem that says such a function "bar" exists.


Sure it does, just let "bar" be a UTM.


Good point. My stupid mistake.


Here's a proof I came up with a few days ago (and it became relevant so quickly!)

Assume by contradiction that there is a computable function f that grows faster than the busy beaver function. We can then make a Turing Machine that takes input n, calculates f(n), and then constructs and simulates every n-state Turing Machine for f(n) steps. Since f grows faster than the number of steps the most productive TM would take, any simulated machine not in the halt state would never halt, and we could select the busiest beaver out of the remaining Turing Machines. We have constructed a TM that computes the busy beaver function, which is a contradiction. Thus, there can be no computable function that grows faster than the busy beaver function.


1) My submission: modifying Ackermann's function to describe a sequence of Busy Beaver numbers of sequential hypercomputation systems, e.g. ABB(n+1) = BB_n(n). Of course it's the same problem; you could just as easily do AABB(n+1) = ABB_n(n), but it still grows faster than a Busy Beaver sequence for any single level of the hypercomputation hierarchy.

2) Question: I understand the argument for noncomputability of Busy Beavers given, but couldn't you just argue that BB(n) is finite, therefore BB(n) is a computable sum of 1+1+...+1, therefore BB(n) is computable given a finite amount of time, so any given number in the sequence is itself computable, therefore by mathematical induction all elements in the sequence are computable? Clearly not, but I don't understand why this doesn't work.


1) If you're aware of the BB_n hierarchy, you've outgrown the need for recursion tricks. Instead, talk about halting times of programs that can make calls to the oracle F(n,m)=BB_n(m). That leaves any Ackermann-like approach in the dust. I leave it to you to come up with an even better approach (yes, it exists, and then there's one or two levels after that). Please don't say "I'll just use Ackermann on top of your idea", at that level it's the moral equivalent of "your number plus one".

2) For each individual BB number, there's a program that prints it. But there's no program that prints the whole infinite sequence of BB numbers one by one. All finite sequences are computable, but not all infinite sequences are.


To say that the sequence is computable means that you must present a Turing machine, call it T, that will produce BB(n) in finitely many steps after taking input n. Our specific Turing machine T will have a fixed number of rules, call it N. Then, by the definition of the Busy Beaver, T(N+1) <= BB(N) = T(N), but BB(N+1) > BB(N), so T(N+1) does not compute BB(N+1). The flaw in your inductive proof is that you have shown there is a Turing machine that can compute BB(N) for each N, but you haven't shown that the same Turing machine can compute all numbers in the sequence.


And just to be clear here, this isn't just because it can't recognize which numbers are the Busy Beavers as it arrives at them while summing ones indefinitely? If T(N) tries to compute BB(N) by summing ones, then, since BB(N) is finite, it just fails to halt? But then there would be finite numbers that can't be described as the sum of one from 0 to BB(N)? It's just hard to reconcile the concept of indefinite-but-finite running time, all numbers being able to be expressed as a sum of ones, and the BB sequence still not being computable.

Edit: Ah wait, I get it now. The misunderstanding was in confusing "Turing machine" with "universal Turing machine." Any given T(N) will have a finite number of steps it can complete before halting, and since a universal Turing machine or equivalent (e.g. rule 110) can emulate any arbitrary Turing machine, I was modeling it as just letting an arbitrary UTM run forever. But the UTM is just emulating a specific T(N) each time, and you'd have to keep going up, emulating different machines for each N, to get to each next BB without running out of steps. So you can write programs to calculate arbitrary BBs indefinitely, but no single program to calculate them all. Hence the infinite sequence is noncomputable despite any finite length of it being computable. Thanks for the pointer in the right direction.


Here is the most entertaining article I've ever read about large numbers: http://waitbutwhy.com/2014/11/1000000-grahams-number.html


Loved this article, I'm really interested by the BB numbers in particular and how they relate to the halting problem.

And I thought I could win this silly game with my Knuth's up-arrow notation and vague knowledge of Ackermann's sequence!


BB(6) could be bigger than Graham's number, but we don't even know; that's how big it is.

The busy beaver function grows so fast that beyond some number N it's always going to be larger than any number we can describe in normal math notation.

So if you define Graham's number as a function of N, like g(N) where Graham's number is g(64), busy beaver still grows faster, and for some N it will be bigger.


"If you want to use higher-level Busy Beavers in the biggest number contest, here’s what I suggest. First, publish a paper formalizing the concept in some obscure, low-prestige journal."

I wonder if anyone has followed through with this?


Is there some N such that BB(N) can produce any "arbitrarily large" number? Not infinity because that would mean it doesn't halt, but that there is no number which can not be exceeded by saying in effect "plus 1".

Or, as N becomes large and can emulate the English language, use a Berry's-paradox-style algorithm to show there is no limit to what size you want.

Another approach would be to use a probability-based event for halting, so that it is always possible a series of coin flips could keep coming up heads 1,000,000 times in a row, or a zillion times; see the St. Petersburg paradox.


>Is there some N such that BB(N) can produce any "arbitrarily large" number? Not infinity because that would mean it doesn't halt, but that there is no number which can not be exceeded by saying in effect "plus 1".

This would imply BB(n) is not well defined. To the contrary, BB(n) is finite for every n.


Ok, I think I may have the ultimate answer to this, a sequence that grows optimally fast. It goes like this:

Imagine 2^(7n) copies of the judge of this competition. For every judge, create a different bit sequence of length 7n. Decode it from ASCII and ask if it is a valid entry in the competition. Take the largest valid entry among those imaginary competitions. That is my entry. In order for this procedure to be consistent, this entry must be longer than n characters. Hence it will end with some padding you may disregard: asdfasdfasdfasdfasdfasdfasdfasdfasdfasdf.


If you're allowed to submit entries that are mutually recursive with the decision procedure of the judges, I'm afraid the competition becomes less well defined. For example, it's easy to create a paradoxical entry by adding 1 to your entry. Also see Berry's paradox: http://en.wikipedia.org/wiki/Berry_paradox


Yeah, I agree that it becomes less well defined. The problem with Berry's paradox is that it refers to itself. I circumvented the paradox by only referring to shorter descriptions of the number than the description I gave. Whether that strategy works really depends on what the judge thinks, though. Assuming he accepts referring to his own potential judgements of shorter sequences, but not longer or equally long ones, the solution would work and the rules would be paradox-free.


Yeah, that might avoid paradox at the cost of requiring a specific kind of judge. But then I think you need to fully specify the judge's reasoning, and that's a very difficult problem, it might be even AI-complete. I don't know of any computer program that would be able to judge the descriptions of current winning entries.

Maybe this quote from the googology wiki will set you on a more productive track:

Googologists generally avoid many of the common responses such as "infinity," "anything you can come up with plus 1," "the largest number that can be named in ten words," "the largest number imaginable," "a zillion," "a hundred billion trillion million googolplex" or other indefinite, infinite, ill-defined, or inelegant responses. Rather googologists are interested in defining definite numbers using efficient and far reaching structural schemes, and don't attempt to forestall the unending quest for larger numbers, but rather encourage it. So perhaps a more accurate description of the challenge is: "What is the largest number you can come up with using the simplest tools?"

http://googology.wikia.com/wiki/Googology#History


Here's a philosophical question that's been bothering me for a while. For large enough n, we can construct an n-state Turing machine that attempts to prove a contradiction in our current most powerful axiomatic system (let's say ZFC). We must assume that this machine loops forever, but Gödel's incompleteness theorem implies that we can't prove that.

What does this construction imply about BB(n)? In what sense is the BB sequence even well-defined if we can prove that it can't be determined?

(Edited for clarity.)


> In what sense is the BB sequence even well-defined if we can prove that it can't be determined?

I think it may help you to understand how the BB sequence can be well-defined if you change your definition of "we can prove".

A proof is a sequence of logical deductions in an axiomatic system, so what "we can prove" depends a lot on which axiom system you're using. For some types of questions, it helps to make that explicit.

For instance, in weak axiom systems, we can't prove that every hydra game terminates. [0] In very strong axiom systems, ones so strong we call them inconsistent, we can prove everything! People argue over intermediate systems sometimes by pointing out things they can prove that are counter-intuitive. [1][2]

For any terminating Turing machine, there is a (possibly very long) proof in a very weak axiom system of that fact: the full trace of the execution. (For non-terminating machines, this is not true.) So, if ZFC, a rather strong axiom system, cannot prove a machine halts, it does not halt.

For another example of this kind of thing, see Terry Tao's discussion of the consequences of the independence of Goldbach's conjecture from ZFC on MathOverflow. [3]

[0] https://en.wikipedia.org/wiki/Goodstein%27s_theorem [1] https://en.wikipedia.org/wiki/Freiling%27s_axiom_of_symmetry [2] https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox [3] http://mathoverflow.net/a/27764


But if the machine that stops when it has proven ZFC is inconsistent does not halt, then surely it means there is no proof of the inconsistency of ZFC? Hence ZFC is consistent? Which is contradicted by Godel.

I would think instead that ZFC can't prove or disprove that this machine halts, which just means it's undecidable whether it halts or not.

This also implies that for sufficiently large n, we won't be able to prove that BB(n) = N, or even BB(n) < N for any N using the ZFC axioms. Of course, although they can't compute or bound its value, they can still easily prove that BB(n) is finite.


> But if the machine that stops when it has proven ZFC is inconsistent does not halt, then surely it means there is no proof of the inconsistency of ZFC?

I'm not sure what this question means. Are you referencing an earlier discussion of a machine that demonstrated a proof of "0=1" using the axioms of ZFC? If so, what is your concern?

When thinking about undecidability/independence from a computational perspective, it's helpful to be cautious about the phrase "does not halt". How would you know a machine "does not halt"? You might run it for a long time and observe that it has not halted yet, but that's not the same.

Instead, you can talk about "a proof in $AXIOM_SYSTEM that this machine does not halt".

So, using that terminology, consider a machine that enumerates all valid proofs of ZFC and checks if their conclusion is "0 = 1". Certainly, if ZFC is inconsistent, this machine will find a proof and halt. However, if there is no such proof, then ZFC is consistent, the machine will not halt, but there will be no proof in ZFC that the machine will not halt.

This ability to enumerate the proofs is related to semidecidability/recursive-enumerability.

When you say "for sufficiently large n, we won't be able to prove that BB(n) = N", I agree. There is some n that is so large that there is a Turing machine of size n that does loop, but the fact that it loops will not be provable in ZFC.


Well you said "if ZFC, a rather strong axiom system, cannot prove a machine halts, it does not halt.". This particular statement is what I was replying to. I agree that a non-halting Turing machine is not a very well defined concept in the general sense (hence my reply), but I would think the onus is on the one who used it first to define what they meant by it :-)


> I would think the onus is on the one who used it first to define what they meant by it :-)

That's a very fair criticism.

My writing is muddy, for sure. Let me try again. You ask:

> But if the machine that stops when it has proven ZFC is inconsistent does not halt, then surely it means there is no proof of the inconsistency of ZFC? Hence ZFC is consistent? Which is contradicted by Godel.

I do not think this is contradicted by Goedel. Goedel's theorem implies that ZFC can't prove its own consistency unless it is inconsistent. So, how can this machine that searches for an inconsistency in ZFC (by enumerating all possible valid ZFC proofs so that for any proof p, there is some time t such that the machine checks p by time t) run forever, thereby proving ZFC consistent?

It is by the unprovability in ZFC (unless ZFC is inconsistent) of the fact that this machine will run forever.

The fact that this machine will run forever implies the consistency of ZFC, and the consistency of ZFC implies the machine will run forever.


After looking more closely at your linked post by Terence Tao, are you by chance basing your statements such as "it does not halt" on what he calls the informal platonic reasoning system (which presumably assumes ZFC as well as its consistency)?

If so, I think I understand what you mean and I agree with you.

My thoughts were a bit confused on these topics so my posts probably contain some reasoning errors, but in the end I think what matters is we agree on the answer to OP's question, i.e.

For any given theory T at least as strong as ZFC (and maybe even some weaker ones):

- BB as a function is well defined in T.

- However, there is some number n_T such that T can't prove any upper bound on BB(n) for any n > n_T.

I think this is a very interesting result because the fact that any axiomatic system will be powerless to describe this function's growth after only a small number of steps expresses pretty well how mind numbingly fast it grows.


I believe he's correct. The trick is that ZFC

1. can prove the machine halts, for any machine that halts, but

2. cannot necessarily prove that non-halting machines don't halt.

This is the distinction between recursively enumerable sets and decidable sets, and it comes up everywhere. The first implies that "for every true p, you'll eventually find a proof" and the second says "for every p, you'll eventually either find a proof of p or a disproof."


I think this[0] is a good formalization of what I'm trying to say.

[0] http://math.stackexchange.com/a/614017/215039


Everything rests on the consistency of ZFC, which we can't prove. In some sense even results like 2+2=4 are provisional, because if ZFC were inconsistent (and we can't prove that it isn't) then it might also be true that 2+2=3 and 2+2=5.

(Well, strictly we can prove that 2+2=4 in Presburger arithmetic which is provably complete and consistent. But any result that we prove by contradiction and that uses the higher axioms (e.g. Hilbert's basis theorem) is implicitly assuming the (unprovable) consistency of ZFC).

Mostly this doesn't bother working mathematicians - ZFC seems reasonable, corresponds to the models that we do have, so we just take it on trust. People who care explicitly about these issues might work in a "higher" axiom system to prove "metamathematical" facts (e.g. ZFC + weakly inaccessible large cardinal, under which we can prove that ZFC is consistent and also that this particular Turing machine doesn't halt).

((Of course the whole point of Gödel's incompleteness theorems is that there's no sharp line between mathematical and metamathematical; any unproven proposition might turn out to be an unprovable landmine. But this doesn't tend to bother people. After all, the proposition might just as easily turn out to simply be very difficult to prove, and the practical impact would be much the same))


This kind of thing happens in math a lot. Any time you use the axiom of choice to prove something exists, it's non-constructive. It exists, but you can't get your hands on it.

I wrote a comment down below about how one could in principle determine a number which is probably BB(n), but you could never be sure. But I just had the crazy thought that if a human brain is really just an N-state Turing machine for some giant N, then any human would either wait forever or give up before finding the true BB(n) for some n. Time for bed!


>We must assume that this machine loops forever

We must assume this machine would loop forever, if it operated on only the rules of ZFC, because we assume ZFC is consistent. A real machine cannot loop forever, and it would be unreasonable to assume that it does. Real machines are not known to operate based solely on the rules of ZFC. ZFC is only an approximation.


That's a good question. I think the answer is that if we could prove BB(n) = K, then we just have to run this Turing machine for K iterations, and either it finds a contradiction or it doesn't. In the latter case, we have proven that ZFC has no contradiction, and so by Gödel's theorem, it has a contradiction.

From this it follows (by proof by contradiction!) that BB(n) is undecidable/uncomputable (getting a bit hazy on these definitions). In any case, we cannot prove that BB(n) = K for any K.

Is it a problem that for all K we cannot prove that BB(n) = K? I don't think so, this is just another incompleteness result.


> For large enough n, we can construct a Turing machine that attempts to prove a contradiction in our current most powerful axiomatic system (let's say ZFC).

What is n in this construction?


I'm sorry, n was supposed to be the number of states of the Turing machine. I've clarified the original post.


The article does mention that "no Turing machine can list the Busy Beaver numbers".


That's clear. My point is that mathematics can't list the Busy Beaver machines.


> In what sense is the BB sequence even well-defined if we can prove that it can't be determined?

Every BB number can be determined. However, each one must require zillions of special cases which require their own reasoning---this is the only way to prevent the BB numbers from being computable.

So, if you consider, for example, the proof of the four-color theorem (which amounts to checking many special cases), as mathematics, then mathematics surely can determine BB(n). It's just extremely hard---as hard as doing all of mathematics (in the sense that if you could easily get BB(n) you could then take a logical proposition you are interested in proving in some axiomatic system and construct a TM to derive the statement using those axioms. Then, count how many states that TM has, and start running it. If it runs for more than BB(# states), there is no proof from your system of your proposition).


I think it is possible, in principle, to find a number which you can be pretty sure is BB(n). The problem I think is being sure that this number really is BB(n).

Like you run all n-state Turing Machines for a super long time, and take the max of the output of all the ones that have halted. Now this might give you BB(n), but you can't be sure because you don't know whether the ones that haven't halted are ever going to halt.

In mathematics, we can just say "Well consider the set of all the ones that do halt, and take their max." In real life, it's not so easy to "consider the set".
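For very small n the brute-force idea above can actually be checked against the known values. Here is a Python sketch (my own, using the "ones left on the tape" convention for 2-symbol machines; because of the fixed step limit it is, in general, only a lower bound on BB(n)):

```python
# Enumerate every n-state, 2-symbol Turing machine, run each for a fixed
# number of steps, and take the max ones-count among those that halted.
# Machines that haven't halted by the limit might still halt later -- that
# uncertainty is exactly the problem described above.
from itertools import product

def simulate(tm, limit):
    # tm maps (state, symbol) -> (write, move, next_state); next_state == -1
    # means halt (after writing and moving, per the usual BB convention).
    tape, pos, state = {}, 0, 0
    for _ in range(limit):
        write, move, nxt = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == -1:
            return sum(tape.values())   # number of 1s left on the tape
        state = nxt
    return None                         # didn't halt within the limit

def bb_lower_bound(n, limit):
    keys = list(product(range(n), [0, 1]))
    entries = list(product([0, 1], [-1, 1], list(range(n)) + [-1]))
    best = 0
    for choice in product(entries, repeat=len(keys)):
        score = simulate(dict(zip(keys, choice)), limit)
        if score is not None:
            best = max(best, score)
    return best
```

With n = 1 this returns 1 and with n = 2 it returns 4, matching the known BB(1) and BB(2); the "can't be sure" problem only bites once n is large enough that some machines outrun any step limit we try.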


You don't just wait, you prove that they all don't halt. There are only finitely many of them. You can't provide a general algorithm for proving that Turing machines don't halt, each one is going to require a special case - but it will be a fact that the physical instance of the machine halts or doesn't halt. (Or if it halts in some models of ZFC but not others, that would be even more interesting).


There won't always be a proof. If there always was, you could programmatically find it via brute-force search and solve the halting problem.


xkcd forum has an interesting (and longish) thread playing this exact game. [1] Starting with nine thousand, it quickly becomes a mathematical nightmare or something.

And also, the Big Number Duel. [2]

[1] http://forums.xkcd.com/viewtopic.php?t=7469 [2] http://tech.mit.edu/V126/N64/64largenumber.html


Also see this mathoverflow post by Scott Aaronson as a follow-up to the article: http://mathoverflow.net/questions/34710/succinctly-naming-bi...


As an adjunct to this, if large numbers interest you there's a really excellent and long-standing set of pages about them at http://www.mrob.com/pub/math/largenum.html which has some fascinating incidental detail and covers infinities as well as large finite numbers.


One of my favorite articles... well worth the re-read even if (as I have) you have seen it before. You may notice something you had forgotten, as I had forgotten the existence of the "busy-beaver-like-function" for machines more powerful than Turing machines.



I would argue that the 9^9 mentioned in the article, for example, _is not a number_. It is a method for calculating a number. So most of the article is invalid.


So is 387420489. The method goes, first you take 9, then you multiply 8 by 10 and add that to 9, then you multiply 4 by 100 and add that to what you have...


You're probably viewing it as a programmer. You see the exponentiation operator, which to you means "computation", which implies some "work" must occur to get the "real" number out of the calculation.

As far as a mathematician is concerned, there is no difference between 9^9, 387420489, 387420499 - 10, 774840978 / 2, or 193710244.5 * 2. They all represent the same quantity. That some are not in their fully reduced form does not change that they are unambiguous representations of the same number.

Even if you are unable to see it that way, we can easily rephrase the exercise to be "write down a way to compute the largest number you can in 15 seconds". This is an enlightening mental exercise, it would be silly to dismiss it for such trivial reasons.


It's a notation, not a formula


Take a number-theoretic conjecture. Look for counterexamples. Return the first counterexample. You can't prove I didn't win!


> Take 9^9^9, for example. This behemoth, equal to 9^387,420,489

...if the author is here, he should correct this.


Exponentiation is right associative.


9^(9^9) = 9^387,420,489
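Python's ** operator follows the same right-associative convention, which makes the point easy to check on small numbers:

```python
# Exponentiation associates to the right, in math notation and in Python.
tower = 2 ** 3 ** 2        # parsed as 2 ** (3 ** 2)
left = (2 ** 3) ** 2
assert tower == 512 and left == 64
# Likewise 9^9^9 means 9^(9^9) = 9^387420489, not (9^9)^9 = 9^81.
assert 9 ** 9 == 387420489
```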


Name it? Sure - I will call it "Bob". What Bob's value is, I have no idea.


1/0-1

I won! :-P

(because infinity is not allowed, so i had to subtract one)


9!!!!!!....!! factorial


9!!!!!!....!! + 1



