Who Can Name the Bigger Number? (scottaaronson.com)
68 points by shawndumas on Dec 20, 2010 | 31 comments



I was given this test when I was 13 by my grade 8 teacher. I wrote something along the lines of:

7^(7^(7^(9^(9^ ... ))))

after realizing, about a second into the challenge, that the extra time taken to draw a 9 vs a 7 was worth it.

To my surprise, when we got back from lunch I was told that there was a tie for first place. I thought "how could this possibly be?"

It turned out that the way the teacher phrased the question was "write the biggest number", which to me, and to all but one person in the class, meant the number furthest from 0 on the number line while still remaining positive. The other person just drew an ENORMOUS "10" on her piece of paper. The teacher went on to explain out-of-the-box thinking about solutions (like mine) and out-of-the-box approaches to the question (like hers).


How about writing "the biggest number" on your paper?


the extra time taken to draw a 9 vs a 7 was worth it

Is it??? Adding one more exponent is huge, while 7->9 feels much smaller. It's all intuition, though. Does anyone know how to tackle such a question?

Limited empirical results say:

  CL-USER> (log (expt 9 9))
  19.77502
  CL-USER> (log (expt 7 (expt 7 7)))
  1602540.6
Taking the log of an expt is silly, but it makes it printable.


So I ran some tests with a 10-second countdown. I'm pretty sure you are right and my mind-changing 13-year-old self was wrong.

I can do 21 "7"s in 10 seconds (given OK legibility) but only 18 "9"s. So it looks like 7 is indeed better.
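
For a rough sanity check (a sketch of mine, not from the thread): compare digit counts instead of the numbers. A height-3 tower b^(b^b) has about b^b * log10(b) digits, which still fits in ordinary arithmetic:

  (defun height-3-tower-digits (b)
    "Decimal digits of b^(b^b), via digits(x^y) = floor(y*log10(x)) + 1."
    (+ 1 (floor (* (expt b b) (log b 10d0)))))

  ;; (height-3-tower-digits 7) => 695975
  ;; (height-3-tower-digits 9) => 369693100

So at equal height the 9s win, but one extra level swamps that: 7^(7^(7^7)) has about 7^(7^7) * log10(7), i.e. roughly 10^695974, digits. With 21 sevens against 18 nines, the three extra levels make it no contest.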


Still, starting to write immediately and then worrying about optimization while the process was running was the right solution.


An older submission with 68 comments: http://news.ycombinator.com/item?id=1539538

Great article, worth repeating/re-reading every now and again.


If you enjoyed this article, and haven't read "Gödel, Escher, Bach: An Eternal Golden Braid" by Doug Hofstadter yet, then stop reading HN immediately and order yourself a copy:

http://www.amazon.com/G%C3%83%C2%B6del-Escher-Bach-Eternal-G...

Having read this book would, in a fair world, be worth more on your CV/résumé than a large proportion of comp-sci degrees.

Even better, go out of your front door to a real bookshop and get them to order you one. Who knows, you might speak to someone! BONUS!




While I was able to follow the Aaronson article with no particular trouble, I can't seem to grok this. I'd be very grateful to anyone who could give a slightly lower level explanation.


So, this is about trees labelled with positive integers at their vertices. (To be formal: let's call them "i-trees", a name I just made up; then an i-tree consists of a positive integer "root" together with a possibly-empty list of i-tree "children", and is of finite size.)

Here's a way of classifying some i-trees as "smaller" than others. Say that the following operations make an i-tree smaller: (1) Reducing one of the integers that appear in it. (2) Deleting any subtree. (3) Replacing a node with one of its subtrees. And say that one i-tree is smaller than another if you can transform the latter into the former by a sequence of these basic steps, and not otherwise.

Finally, for any positive integer n, say that an n-tree is an i-tree in which all those positive integers are at most n.
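
In code, the definitions are tiny (a sketch with my own names, not the commenter's):

  (defstruct (itree (:constructor itree (label &rest children)))
    (label 1 :type (integer 1))   ; the positive-integer vertex label
    (children nil :type list))    ; possibly-empty list of child i-trees

  (defun n-tree-p (tree n)
    "True if TREE is an n-tree: every label is at most N."
    (and (<= (itree-label tree) n)
         (every (lambda (c) (n-tree-p c n)) (itree-children tree))))

  ;; (itree 2 (itree 1) (itree 3 (itree 1))) builds a root labelled 2
  ;; with two subtrees; it is a 3-tree but not a 2-tree.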

It turns out that the smaller-than relation I just described is a "well-quasi-order" (don't blame me, I didn't make up the terminology) on i-trees, which means: if you take any infinite sequence of i-trees -- T1, T2, T3, ... -- then there must be i, j with i < j and Ti smaller-than-or-equal-to Tj.

This probably sounds like a bizarre and arbitrary notion, but it's useful because when a set is well-quasi-ordered you can do a kind of mathematical induction on it. (Mathematical induction is usually described as "if something's true for 1, and true for n+1 whenever it's true for n, then it's true for all positive integers", but it generalizes to something like "if something's true of any object provided it's true of all simpler objects, then it's true of all objects"; what's needed for this to work is approximately for "simpler" to be a well-quasi-order. I am skipping many details, but this is just a motivational aside anyway.)

There's a sort of finite version of that theorem, which goes like this: for any n, if you consider n-trees rather than unrestricted i-trees, then there's a fixed N such that whenever you have T1, ..., TN with each Ti having at most i vertices (some size restriction like this is essential, or no finite N would do), there are two trees for which the earlier is smaller-than-or-equal-to the later.

How big is N, given n? The answer is the TREE function. It's nice and small for n=1 and n=2 (TREE(1) = 1 and TREE(2) = 3) but becomes absolutely enormous for n=3.

The very rapid growth of the TREE function is related to another cool fact: In a certain sense that I'm not going to bother making formal here, you cannot prove the theorem without making essential use of infinite quantities. (One can even quantify, kinda, just how much infinity is required.)

If you happen to like this stuff, here's another theorem (Goodstein's theorem, if you want to look it up) with the same sort of flavour. You already know, I take it, how to write a number in base n: one way to say it is that you write it as a sum of terms of the form [smaller than n] times n^k. Well, suppose you do that and then do the same thing to all the exponents, and to all the exponents that that produces, and so on, until you've expressed your number in terms of (1) numbers smaller than n and (2) the operation "a times n ^ b". Let's say that you've then written your number "in superbase n".

OK. Now, consider the following process. Start with any number. Write it in superbase 2. Replace all the 2s with 3s (so you now have something in superbase 3). Subtract 1; if necessary, fix up the superbase-3-ness. Replace all the 3s with 4s. Subtract 1 and fix up. Replace all the 4s with 5s. And so on.

Typically, the "replace n with n+1" operation increases the number enormously, and you might think that your numbers will increase without limit. But, as it happens, you always get down to 0 eventually -- but (1) the number of steps it takes to get there grows outrageously fast and (2) again, you can't prove this without (in some sense) using infinite quantities.
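
The process is short enough to code up directly (a sketch, with my own function names):

  (defun bump-base (m old new)
    "Write M in superbase OLD (recursing on exponents), then
  replace every OLD with NEW and evaluate."
    (if (< m old)
        m
        (let ((result 0) (e 0))
          (loop while (> m 0)
                do (let ((c (mod m old)))
                     (unless (zerop c)
                       (incf result (* c (expt new (bump-base e old new))))))
                   (setf m (floor m old))
                   (incf e))
          result)))

  (defun goodstein (n steps)
    "The first STEPS terms of the Goodstein sequence starting at N."
    (loop repeat steps
          for base from 2
          collect n
          until (zerop n)
          do (setf n (1- (bump-base n base (1+ base))))))

  ;; (goodstein 3 6) => (3 3 3 2 1 0)       -- down quickly
  ;; (goodstein 4 6) => (4 26 41 60 83 109) -- keeps growing for an
  ;; unimaginably long time before eventually hitting 0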

(For any readers who know about infinite ordinal numbers, here's the one-line proof: At each step, when you have the number written in superbase k, replace all the k's with omegas. Then the "add 1 to the superbase" steps do nothing, and it's easy to see that the "subtract 1" steps always strictly decrease your number. But the ordinals are well-founded, so you can't strictly decrease infinitely many times. QED.)
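To spell the ordinal assignment out a little (my notation, not anything official): for a number m written in superbase k, define

  o_k(m) = m                                                      if m < k
  o_k(c_1*k^e_1 + ... + c_r*k^e_r) = c_1*omega^o_k(e_1) + ... + c_r*omega^o_k(e_r)

The two facts the proof needs are then: o_(k+1) of the base-bumped number equals o_k(m), so the "replace k with k+1" steps leave the ordinal fixed; and o_k(m-1) < o_k(m), so every "subtract 1" step strictly decreases it.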


yes, can someone explain? please and thank you!


When all Wikipedia can say is "so enormously large", you know you're on the right track. :-)


You apparently win.


this is why i read hacker news! thank you.


An excellent article, touching on many aspects of the theory of computation. Well worth reading if you are interested.


This is the same reaction to the intro that I had last time, but this bit stands out: "the girl he was up against, who wrote a string of 9's followed by the superscript 999". Aha! An exponential. My guess? She just started writing smaller nines so that she could fit more in...


If people use things like exponents and nested factorials, can you decide between two numbers without explicitly working out what the numbers are? It seems like a hard programming problem to take the entered text (assuming an unambiguous, machine-parsable format) and quickly tell which one is larger. Some numbers that are easy to write down might take years to calculate, but if you apply some kind of reasoning, similar to algorithm analysis, you might be able to construct arguments for why one number is larger than another.


In general it'd be as hard as the halting problem, if you allowed a Turing-complete language for the number-generating programs.

I imagine you could tackle a good subset of the problem, though, with a symbolic algebra package (Mathematica?) if you restricted the form of the number-generating programs to a certain set of mathematical expressions for which it knows enough tactics to reduce the inequalities involved.
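
As a toy version of that idea (my own sketch, handling only integers and nested EXPT forms): compare ln(a^b) = b*ln(a) recursively instead of the numbers themselves:

  (defun ln-approx (e)
    "Approximate natural log of E, where E is an integer or (expt a b),
  without ever constructing the full number."
    (if (integerp e)
        (log (float e 1d0))
        (destructuring-bind (op a b) e
          (assert (eq op 'expt))
          (* (val b) (ln-approx a)))))   ; ln(a^b) = b * ln(a)

  (defun val (e)
    (if (integerp e) e (exp (ln-approx e))))

  ;; (ln-approx '(expt 7 (expt 7 7))) => ~1602540.6, matching the REPL
  ;; transcript upthread without building the 695975-digit integer.

Once the towers get tall enough that even the logs overflow a double, you'd need iterated logs or purely symbolic rules, which is where it starts smelling like the halting problem again.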


In 2001, there was a contest to see who could name the biggest number using a C program:

The object of the BIGNUM BAKEOFF is to write a C program of 512 characters or less (excluding whitespace) that returns as large a number as possible from main(), assuming C to have integral types that can hold arbitrarily large integers.

See the third line of http://djm.cc/dmoews.html for more, including the (quite interesting, IMO) results.
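
For flavor (a sketch of mine, not an actual entry, and in Lisp rather than the contest's C): the standard trick in such contests is deep recursion, and the Ackermann function is the classic example of a few lines producing astronomically large values:

  (defun ack (m n)
    "The Ackermann function: a tiny definition with explosive growth."
    (cond ((zerop m) (+ n 1))
          ((zerop n) (ack (- m 1) 1))
          (t (ack (- m 1) (ack m (- n 1))))))

  ;; Evaluating (ack 4 2) naively is hopeless -- the call count is
  ;; astronomical -- but the value is known: 2^65536 - 3, a number
  ;; with 19729 decimal digits.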


I suppose a way to constrain this test would be to ask the "contestants" to enter their number as a line of code with a restriction on character count. The (theoretical, as it may take eons to compute) output of the code would be your answer. A "big number" golf, if you will.


Graham's Number. Fin.


My answer is always Graham's Number + 1, just to fuck with people like you.


Graham's Number isn't particularly large in this context (indeed, it is referred to about halfway through the article.)


I can tell you that the last 10 digits of Graham's number are ...2464195387; how large could it be?


The referenced Albert Bartlett quote is from his lecture Arithmetic, Population, and Energy. It's a fascinating look at the exponential function, and available on DVD from the University of Colorado. If you haven't seen it, I highly recommend it.


I'm hesitant to post humor here, but this article immediately made me think of this sketch from Mr. Show: http://www.youtube.com/watch?v=RkP_OGDCLY0


A variation that actually works as a game: Name the bigger number, but if your number >= double your opponent's number, you lose. 0 and 1 are off limits.



I was thinking something like:

Googol ! ! ! ! ! ! ! ! ! !


[deleted]


But "n+1, where n is your number", is not well defined with respect to this game...



