The Remarkable Number 1/89 (2004) (ou.edu)
413 points by nonsapreiche 31 days ago | 113 comments



On the decimal expansion part, 1⁄7 has always fascinated me, having something very similar going on. Doubling from 7, you get 14, 28, 56; and 1⁄7 is 0.1̅4̅2̅8̅5̅7̅, 2⁄7 is 0.2̅8̅5̅7̅1̅4̅, 3⁄7 is 0.4̅2̅8̅5̅7̅1̅, &c. (just changing which digit you start the recurring sequence with). https://en.wikipedia.org/wiki/142,857 talks about it a bit more; the doubling sequence thing is covered in the section 1⁄7 as an infinite sequence (including the reason the recurring decimal has 57 instead of 56—that 56 doubled is 112, so the hundred there overlaps with the six, much as the ten of the 13 adds to the 8 in the 1⁄89 expansion of this article).
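If you want to watch those carries happen, here's a quick sketch (my own code, not from the linked page) that rebuilds 1/7 from the doubling sequence 14, 28, 56, 112, ... using exact rational arithmetic:

```python
from fractions import Fraction

# Rebuild 1/7 by summing the doubling sequence 14, 28, 56, 112, ...,
# each term shifted two more decimal places; the carries (56 -> 57, etc.)
# happen automatically in the exact arithmetic.
partial = sum(Fraction(14 * 2**k, 100**(k + 1)) for k in range(40))

# First twelve decimal digits of the partial sum: the block 142857 twice.
digits = str(partial.numerator * 10**12 // partial.denominator)
print(digits)  # 142857142857
```

The infinite sum is exactly (14/100)/(1 - 2/100) = 14/98 = 1/7, which is why the pattern works at all.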


I've always liked 1/7 as well, and I never realized that thing about doubling from 7 giving 14, 28, 56. People are always impressed when I can rattle off the digits of x/7.

By the way, I appreciate your use of U+0305 combining overline. Did you enter those manually or do you have some neat way of doing it?


To me, the fact that you’re in an environment where knowing the digits of x/7 is appreciated is the most-impressive thing of all!


Heh, last time I spouted about 1⁄7 here someone asked about the overline too: https://news.ycombinator.com/item?id=24302104. I love my Compose key.


I've had my CapsLock bound to Compose for ages; it's a great use of that piece of keyboard real estate.

I still don't have a good way to discover compose sequences other than by groveling through xkb and compose files. I really wish there were a character-palette tool that would tell me how to type the characters by introspecting the current input settings.


How do you get one of those. Track down an old Sun keyboard? :)


On my last laptop I used the Menu key as Compose. On this one I use Right Alt.


I use CapsLock; lowest usefulness to size ratio possible.


Caps lock is much more useful as a home row ctrl key


Why stop there though? Mine is configured to be ESC when tapped, CTRL when held down. Took a while to get used to, but it's a big improvement compared to other ways of handling ESC on a 60% keyboard.


I find it most useful as a Super key, so that I can attach all my custom keybindings and shortcuts to it.


Cool thing is, this is not a special property of 1/7. 100 / 7 is 14, with a remainder of 2; therefore the series starts with 14, multiplies by 2, and divides by 100 in each iteration. For instance, 10 / 7 is 1, with a remainder of 3; therefore 1/7 is also equal to 0.1+0.03+0.009 etc. And 1/8 is 0.1+0.02+0.004 etc.
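That remainder construction can be sanity-checked with exact rationals; a small sketch (function name is mine). With 10 = q·n + r, the series q/10 + q·r/10² + q·r²/10³ + ... sums to q/(10−r) = 1/n:

```python
from fractions import Fraction

# 1/n as a geometric series: with 10 = q*n + r, we get
# 1/n = q/10 + q*r/10^2 + q*r^2/10^3 + ...  (sums to q/(10 - r) = 1/n).
def remainder_series(n, terms=50):
    q, r = divmod(10, n)
    return sum(Fraction(q * r**k, 10**(k + 1)) for k in range(terms))

print(float(remainder_series(7)))  # ~0.142857...
print(float(remainder_series(8)))  # ~0.125
```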


Then how does 1/8 become 0.125?

If you do the calculation, with each step it goes towards 0.124999999999


> with each step it goes towards 0.124999999999

Also known as .125.

https://en.wikipedia.org/wiki/0.999...


Coming from video, I've always been a fan of 1/1001. 30000/1001 = 29.970029970029... and 24000/1001 = 23.976023976023... There's something about its clean repeating that I liked. I hear people confusing frame rates by saying things like 29.976. I also don't like 23.98, as that rounding is going to cause problems later.

However, you have to be a special kind of math person for any of these number "oddities" to be meaningful. I wear mine like a badge of honour.


I had forgotten why the number 1001 mattered in video (it's been too long since I worked with NTSC circuits), so I looked it up. It has to do with avoiding dot crawl in color analog video.

https://en.wikipedia.org/wiki/Frame_rate


It also had to do with allowing the addition of the color information without breaking compatibility with the existing B&W TVs. Had they decided not to make one video signal that could be broadcast to both color and B&W TVs, they could have just broadcast color at 30fps (and man, would my life have been so much easier).


Kludgy hacks are interesting when their value greatly outweighs the lack of careful design or effort put into them. I work in the video games industry, and quick work can end up being charming or valuable to your audience, even if it's difficult to continue developing or maintaining. The wide compatibility of NTSC has likely been very valuable to the public, but the public is also unaware of the difficult work it implies.

That said, games traditionally have a point where development stops and doesn't resume (not counting from more live-ops-style games today), so the calculus of that sort of thing changes to management.


This is especially true to the people building the ROM emulators. There are so many tricks/hacks/kludges that were at the heart of some games even being usable. Timing for interlacing that was needed for the game to work in NTSC has to be accounted for on today's faster hardware and progressive scanning. Reproducing color accurately from NTSC seems to also be another thing I've seen. I'm sure there are plenty more that have been posted here before, but they are always a fun reminder that porting code can be a nightmare.


Indeed. NTSC is such a kludgy hack. An incredibly clever kludgy hack, but it's still kludgy.


I discovered this in my late teens and thought it was super cool and made me want to understand more about repeating decimals. After playing around for a bit I realized you could form arbitrary repeating decimals by dividing by 9, 99, 999 etc. So for example, 1/7 = 142857/999999. Or written another way, 999999/7=142857.
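This is easy to check mechanically: a purely periodic decimal with period p is just its repeating block divided by 10^p − 1, so 142857 falls straight out of integer division:

```python
# The repeating block of 1/7 has period 6, so it equals 999999/7.
repunit9 = 10**6 - 1           # 999999
block = repunit9 // 7
print(block, repunit9 % 7)     # 142857, remainder 0
```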


Which also makes it easier to find the patterns in its multiples (or just divide by the extra factor): say, 1/14 starts 0.07142857142857. Multiples of 3 don't give the common repeating 142857 but still repeat in their own way... but 1/49 is pretty cool. 1/49 looks to do what 1/89 is doing, but with the powers of 2. Nice!

edit: Don't know the format for proofs, but here's a try:

    1/49 = Σ_{i=1}^∞ 2^i · 100^(−i)


I learned about 1/7 back in my youth, and it's just been one of those things that I enjoyed knowing as I went on in life.

Imagine my amusement when I ran across a Project Euler problem where those digits were the answer. I recall just looking at it and thinking I __know__ this one, there's no need to code anything. An easy point, but I didn't feel like I cheated on it.


Does this work for other bases too, other than 10?


That you can find special properties of various numbers is true in all bases, but 1/7 in base 10 is pretty special.

So the property it has is:

    1/n = n (2/b^2 + 4/b^4 + 8/b^6 + ...)
which by geometric series sums to

    1/n = n / (b²/2 − 1)
so we need

    n² = b²/2 − 1
So this works precisely because 7² = 49 = 50 − 1 = 100/2 − 1.

Calculating some of these out, these appear to be the Newman-Shanks-Williams numbers [1]; the next one is 41 in base 3364 = 58² (i.e. b = 58, reading pairs of base-58 digits, just as 14 28 57 are pairs of base-10 digits), where

    1/41 = {0}.{82}{164}{328}{656}{1312}{2625}...
notice the 5 finally coming from some overflow.

But, supposing that we just like the idea of starting with some digit d and then the next digit being k times that and the next digit being k times that, we get a more general set of numbers,

    d/b + dk/b² + dk²/b³ + ...
      = d/(b - k)
Given that, this becomes much more boring. So for example for doubling in base-100 we think about 1/98 (b=100, k=2) and we find

   1/98 = 0.01020408163265...
and factors of that 98 also may have similar patterns, so 7 has this strength because it is a factor of 98.

So for example, if we want to think about 1/7 in base 12, this suggests that maybe we should look for things that quintuple in base 12, but that rapidly overflows base 12. So we do the same trick as with 1/7, where we take pairs of digits, and maybe things quadruple in base 144 (since 144 − 4 is 140, which is divisible by 7), and so we find that

    1/7 = 0.{20}{82}{41} repeating
and if you squint closely you can see starting with 20, quadrupling to 80, quadrupling to 320 but then getting a bit unwieldy. Of course even on single digits 12 - 2 = 10 which has 5 as a factor so you can expect to see a pattern in base-12 on

    1/5 = 0.{2}{4}{9}{7} [repeating]
in which you can see a sort of "2, 4, 8, 16" pattern happening.

The other base that I really like is nonary; if we met aliens, we might find that they count in balanced nonary with digits -4, -3, -2, -1, 0, 1, 2, 3, 4 (so 7 is actually {1, -2}: 7 = 9 - 2), but it's harder to search for patterns in that, because you really feel the cap of having only half the base to count up to before you carry.
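If anyone wants to hunt for these patterns in other bases, a long-division digit extractor is all it takes; a small sketch (names are mine):

```python
def expansion(num, den, base, ndigits):
    """First `ndigits` digits of num/den in the given base, by long division."""
    digits, rem = [], num % den
    for _ in range(ndigits):
        rem *= base
        digits.append(rem // den)
        rem %= den
    return digits

print(expansion(1, 7, 144, 6))   # [20, 82, 41, 20, 82, 41]
print(expansion(1, 5, 12, 8))    # [2, 4, 9, 7, 2, 4, 9, 7]
print(expansion(1, 98, 100, 7))  # [1, 2, 4, 8, 16, 32, 65]
```

The last line is the doubling pattern of 1/98 mentioned above, pairs of digits at a time.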


No. A trivial counterexample would be base 7.


That really is rather fascinating.


Wow, it keeps going also with the further digits, e.g.

    =14285712
    +0000000224
    =1428571424
    +000000000448
    =142857142848
    +00000000000896
    =14285714285696
    +0000000000001792
    =142857142857....


From an archived talk (2011) on wikipedia [0]:

"The linked page misleadingly suggests that a certain Cody Birsner discovered the relationship between the series and the fraction, whereas it had been known for a considerable time before".

Günter Köhler, 1983 (published in The Fibonacci Quarterly, 1985; he cites earlier papers from 1977 and 1981): https://www.fq.math.ca/Scanned/23-1/kohler.pdf

[0] https://en.wikipedia.org/wiki/Talk%3AFibonacci_number%2FArch...


It tickles me that there was a "Fibonacci Quarterly" where people shared their favorite new Fibonacci facts on a quarterly basis.


Not was, is. I was curious about that as well.

https://www.fq.math.ca/list-of-issues.html


Wow, almost 60 years of Fibonacci content!


It's a bit silly to chase down original authorship of an idea that is a minor detail visible to many people who work in a field.

It's like asking who was the first person to discover that, for every multiple of 11, the sums of its odd-position and even-position digits differ by a multiple of 11.


Then it is also silly to (erroneously) mention who discovered such a property, don't you think?


The proof follows from this instance of the discovery, with an Oklahoma U student independently discovering and an OU professor independently proving*. I think that's a perfectly reasonable way to track what happened.

I am unable to open the pdf or follow any of the links from the wiki talk page, so the proof may predate this instance, which would invalidate my point.


Sorry, can't edit the comment from here; those italics make it seem snarky. Should have been an asterisk after "proving" and before the next sentence, which I forgot to escape.


Since there had to be a first person who discovered that, for every multiple of 11, the sums of its odd-position and even-position digits differ by a multiple of 11, that question is obviously interesting to some historians. Due to what circumstances was that person first? What hindered others before that person? How long was the lag between the adoption of decimal notation and the discovery?


It's important to remember that while technically there is a chronologically first person to discover X for all X, that doesn't imply that that person is the only person to discover X. For sufficiently obvious X, there are likely to be many independent discoverers and highlighting the chronologically first one heaps praise somewhat arbitrarily on one of them.


"History isn't perfect" is not a good reason to avoid doing history. Historians are all well aware of these facts.


Making a discovery in mathematics confers a kind of immortality. You're part of a conversation that has lasted centuries and will continue for the life of mankind.


"The successive ratios of the terms, i.e. 1/1, 2/1, 3/2, 5/3 ... tend to a number called the Golden Ratio by the Greeks."

Fun fact: take any two numbers (e.g. chosen randomly), and use them as the seeds for a Fibonacci-like sequence by summing the last two terms to generate the next term. The ratio of any two consecutive terms in that series will tend towards the golden ratio.
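That convergence is easy to see numerically; a tiny sketch, with the second pair of seeds chosen arbitrarily:

```python
# Fibonacci-style recurrence from arbitrary seeds: the ratio of successive
# terms converges to the golden ratio regardless of the starting pair.
def ratio_after(a, b, steps=40):
    for _ in range(steps):
        a, b = b, a + b
    return b / a

phi = (1 + 5 ** 0.5) / 2
print(ratio_after(1, 1) - phi)     # essentially zero
print(ratio_after(7, 113) - phi)   # arbitrary seeds: also essentially zero
```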


This seems like a pretty basic calculus problem. Constants become irrelevant at the limit, so that the relationship (ratio) is all that's left.


Well, it isn't really calculus, as there's no differentiation (I guess you could consider the last step, where you take a limit, to be calculus), but it's a bit like differential equations. You can write down the recurrence relation:

  a_(n+2) = a_(n+1) + a_n
Observe that there is a linear solution space (i.e. if you add solutions pointwise or multiply each value by the same scalar, you get solutions), and that the values a_0 and a_1 are sufficient to determine the sequence. Now guess that a_n = k^n is a solution:

  k^2 = k + 1
  (k - 1/2)^2 = 5/4
  k = (1 +/- sqrt(5))/2
  k = φ or -1/φ, where φ is the golden ratio
Due to linearity, there is a family of solutions a_n = Rφ^n + S(-1/φ)^n for any values of R and S. Because this family provides a solution for any choice of a_0 and a_1, it contains all the solutions.

Because |1/φ| < 1, we find that asymptotically a_n ~ Rφ^n as n grows (provided R ≠ 0). Therefore the ratio of terms tends to φ in the limit.


You're right, of course. Thanks for the explanation. I was thinking particularly of the comment about the behavior with arbitrary initial numbers. Start with 1, 5000, and it still converges quickly to the golden ratio, as the parent comment mentioned. I enjoy watching the initial constants disappear.


If you read this and you’re curious, there’s a proof that is fairly easy to follow if you understand eigenvectors. Write the operation that takes the two last elements of the sequence and produces the following two, notice it’s linear, then analyze the eigenvalues of the associated matrix and relate the original operation to the power method.


You actually don't need eigenvectors for this proof. Sketch: Write x = lim a_n/a_{n-1} where a_n is the nth term of Fibonacci. Replace a_n with a_{n-1} + a_{n-2}, which will give you a quadratic equation in x. Solving this quadratic equation gives you the golden ratio.


I believe that gives you the result starting from Fibonacci, but not that any starting pair of numbers eventually lands at the golden ratio.


No, it works for any pair of numbers, as long as they are not both zero (and, for arbitrary real seeds, not exactly proportional to the one degenerate pair (1, -1/φ)). I am only using the recursive relation of the Fibonacci sequence, not the starting terms.


This is even more clear with larger expansions like `1/99989999 = 1.00010002000300050008001300210034005500890144...×10^−8`

See discussion on math overflow here: https://math.stackexchange.com/questions/656183/why-does-fra...


I went to college in '88 or '89 and one of my teachers showed me the 1/89 trick, along with a few others, e.g. 1/7, and so forth. My teacher claimed to have been shown the various tricks by one of his teachers back in the '60s.

In the same period I "discovered" an error detection technique which is commonly known as Hamming codes. I would never dare to claim I invented them or discovered them.

In '92 and '93, I was heavily into the BBS scene; some people had 56K modems, others had 14.4K. With verifiable evidence of written notes and digital artifacts (a BBS door and protocol for AmiBBS, Citadel, and a couple of others), I created a technique whereby multiple peers with low-bandwidth connections could transfer small fragments of a larger dataset to a peer with a lot of bandwidth.

If you were heavily into the warez scene and/or part of Fate, it is probable you made use of this protocol to transfer pirated software between FTP sites. A warehousing server would tell each peer who had what part of a piece of data, and any peer could request any piece of the data from any other peer who happened to have a copy. Today, a very similar protocol is commonly known as BitTorrent.

I would not say I "discovered" or "invented" the protocol, as my work was based on the various X-, Y- & Z-modem protocols. There was a TCP/IP-packet-to-modem-packet translator so that a BBS talking over that newfangled internet thing could take advantage of a T1 (1.5Mbps) connection, for instance, which really helped the couriers spread the warez around the various FTP sites.

I doubt the veracity of the author's claim to have "discovered it as original" in 1994, before anybody else.


For people who are generally interested in these kinds of identities relating a fraction to a recursion, please check out the generatingfunctionology book [1]. The basic idea is to define f(x) = sum_n f_n x^n where f_n satisfies some recursion equation. Very often you can find f(x) as p(x) / q(x) where p and q are polynomials in x. Now you can simply evaluate f(0.1), or more generally f(b^-l) where b is a base and l a positive integer, which gives you on the left-hand side your rational number as a fraction, and on the right-hand side the decimal expansion.

[1] https://www.math.upenn.edu/~wilf/gfologyLinked2.pdf
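For the Fibonacci case the recipe is short enough to try directly; a sketch using exact rationals, where p(x)/q(x) is the standard Fibonacci generating function x/(1 − x − x²):

```python
from fractions import Fraction

# Fibonacci generating function f(x) = x / (1 - x - x^2), evaluated at a
# power of 1/10, packs the Fibonacci numbers into a decimal expansion.
x = Fraction(1, 10)
f = x / (1 - x - x * x)
print(f)       # 10/89
print(f / 10)  # 1/89, the number from the article
```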


Okay, as a non-mathematician, I see something like this and I think... “neat coincidence?”

But the world of numbers seems to be full of these neat coincidences. So do any of the math folks here have a theory or explanation of why?


There's a "theorem" about this:

> The interesting number paradox is a semi-humorous paradox which arises from the attempt to classify every natural number as either "interesting" or "uninteresting". The paradox states that every natural number is interesting. The "proof" is by contradiction: if there exists a non-empty set of uninteresting natural numbers, there would be a smallest uninteresting number – but the smallest uninteresting number is itself interesting because it is the smallest uninteresting number, thus producing a contradiction.


That just sounds like another formulation of the Surprise Exam paradox.[0] It falls down when you realise that “the four hundred and seventieth otherwise uninteresting number” is not a particularly interesting number, so there must be a problem with the problem statement.

[0]: https://en.wikipedia.org/wiki/Unexpected_hanging_paradox


It's a paradox related to "meta-logic", but it's different. The surprise exam is about temporal reasoning -- it's only impossible to be surprised on the last day (or else the premise of having an exam is invalidated), and reasoning backwards in time from a contradiction is not valid.

Uninteresting number is a simpler contradiction in definitions.


This is not a paradox though, as your copy and paste states. It's just a theorem (as you stated) with a proof by contradiction. A paradox must be self-contradictory under all circumstances.


It is a paradox. It assumes you have a definition of uninteresting number such that you can select the least uninteresting number, and then retroactively defines that number to be interesting by brand new criteria, which contradicts that you would have ever selected it in the first place. Thus the axioms invoked are mutually contradictory: the axioms that allow you to identify the least uninteresting number, and the axioms you invoke to declare it interesting are in conflict.


This is no coincidence — it’s because 89=100-10-1, and the ordinary generating function for the Fibonacci sequence is 1/(1-x-x^2) (if you go to wolfram alpha and Taylor expand that expression, you’ll see its coefficients are the Fibonacci numbers).


Yes. One has the identity:

1/(1 - x - x^2) = 1 + x + 2x^2 + 3x^3 + ...

where the Fibonacci numbers are the coefficients on the right. Try writing it out! The basic idea is that because F_n = F_{n - 1} + F_{n - 2}, everything will neatly cancel out.

This is an example of a "generating function". Anyway, plug in x = 0.1, and then divide by 100 to see the behavior described in the post.

One also gets

1/9899 = 0.00010102030508132134559046368320032326498...

1/998999 = 0.0000010010020030050080130210340550891442334...

and so on.
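Those expansions can be checked mechanically; a sketch that prints the leading digits of 1/9899 and 1/998999:

```python
# 1/(10^(2k) - 10^k - 1) spells out Fibonacci numbers in k-digit blocks
# (denominators 89, 9899, 998999, ... for k = 1, 2, 3, ...).
prefixes = {}
for k in (2, 3):
    denom = 10**(2 * k) - 10**k - 1              # 9899, 998999
    prefixes[denom] = f"{10**40 // denom:040d}"  # first 40 decimal digits
    print(denom, prefixes[denom])
```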


More to the point, this function trivially satisfies f(x) - 1 - x = x(f(x) - 1) + x²f(x). You should be able to convince yourself that this is equivalent to the fact that its power series satisfies the Fibonacci recurrence, and that this power series starts with 1 + x.

You can also use this to easily find generating functions that satisfy other starting conditions.


In addition to this question, I would like to know whether you can in general say/prove that for every sequence with some such relation between successive terms, there is a rational number whose decimal expansion is the same as the sequence.


For a linearly recursive sequence x_0, x_1, x_(n+2) = ax_(n+1) + bx_n, the general formula for the terms is

x_n = cα^n + dβ^n,

where α, β are the roots of the quadratic x²−ax−b; c, d are solutions to the system

c + d = x_0
cα + dβ = x_1.

If the series Σ x_n⋅10^(−n) converges, then its value is

10c/(10−α) + 10d/(10−β) = ((100−10a)x_0 + 10x_1)/(100 − 10a − b).

If a, b, x_0, x_1 are all rational then the above series converges to a rational number, too. This is the case for the Fibonacci sequence, with a=b=1, x_0=0 and x_1=1.
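A quick check of that closed form against a brute-force partial sum, with helper names of my own choosing:

```python
from fractions import Fraction

# Closed form for S = sum_{n>=0} x_n / 10^n with x_{n+2} = a x_{n+1} + b x_n,
# checked against a brute-force partial sum (exact rationals, 200 terms).
def series_value(a, b, x0, x1):
    return Fraction((100 - 10 * a) * x0 + 10 * x1, 100 - 10 * a - b)

def brute(a, b, x0, x1, terms=200):
    total, prev, cur = Fraction(0), x0, x1
    for n in range(terms):
        total += Fraction(prev, 10**n)
        prev, cur = cur, a * cur + b * prev
    return total

print(series_value(1, 1, 0, 1))  # Fibonacci: 10/89
```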


A standard technique for finding explicit expressions for the nth term of a recursion is generating functions; see e.g. https://en.wikipedia.org/wiki/Generating_function. You can plug in x = 10^-1 there. Not sure if the result is always a rational number.


The Penguin Book of Curious and Interesting Numbers

https://www.amazon.com/Penguin-Book-Curious-Interesting-Numb...

The caveat here is "for some value of 'interesting'."


You might also enjoy

tan(1 degree/55555555555)


Fun video about this one on Numberphile https://www.youtube.com/watch?v=IMY2_yzDm9I


"they say there's a fine line between a numerator and a denominator" --?

another one of those maths jokes that makes me smile and confuses a lot of other people about why I'm laughing


3.1415926536212091649988555782837470430464925667776193389556...e-13

Nice


not impressive given that more 5s are needed than correct decimals produced


Imagine the real number line is a database that you can run SELECT queries against, and that you don't have to worry about giving a computational procedure that produces the result; it just magically gets produced.

Now, imagine you write a query, like, say, "SELECT number WHERE number = .01 * FIB[1] + .001 * FIB[2] + .0001 * FIB[3]" and so on until you get what the article discusses.

It isn't necessarily that surprising that you might find something, with uncountably many numbers to pick from.

Now, consider all possible "interesting" queries you could run, along with all their results.

The result is an inconceivably large sea of queries. Most of them are, in fact, utterly pointless: SELECT statements that return no values, SELECT statements that return all values (equally pointless), SELECT statements that return complicated sets of values but have essentially no mathematical interest because there is no practical way to represent them as anything smaller or more interesting, etc.

In this massive sea of results, you should expect a lot of interesting things to exist. Finding them is tricky; in percentage terms they make up 0% of the results, but we have mechanisms for finding some of them.

Basically, there are so infinitely many mathematical statements that there can't help but be a large supply of "interesting" statements like this.

For an interesting view on that, see https://en.wikipedia.org/wiki/Mathematical_coincidence . These are true statements, or almost-true statements (near equalities), about a wide variety of numbers that are essentially meaningless... it's just that there are so many ways of putting things together that there are inevitably large numbers of these things (the wiki page is just a sampling).

You can even generate these mathematical coincidences yourself. Create a program that will systematically iterate over abstract syntax trees of mathematical expressions involving whatever combination of mathematical operators (+-×/, sqrt, log, sin, whatever) and numbers you like (the first ten integers, e, i, pi, whatever else you like), store up a table of results and emit any two expressions that are, say, within .01% of each other. You will rapidly find a ton of results, because it turns out that even with modest numbers of operators, there are far more mathematical expressions than there are small numbers for them to result in separated by more than .01%. If you think about it, this program can't help but emit a lot of results. Some of them will be humanly "interesting". A few of them will even be mathematically interesting (e.g., this procedure will generate the famous Euler identity relatively quickly if you included the relevant operators and numbers).
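Here's a bite-sized sketch of that generator (the constants, operators, and cutoffs are all arbitrary choices of mine); the lower bound on the mismatch filters out pairs that are really the same value twice, down to floating-point rounding:

```python
import itertools
import math

# Enumerate small arithmetic expressions over a handful of constants,
# then report pairs of distinct expressions whose values agree to 0.01%.
consts = {"pi": math.pi, "e": math.e, "2": 2.0, "20": 20.0}

def combine(left, right):
    out = {}
    for (na, va), (nb, vb) in itertools.product(left, right):
        out[f"({na}+{nb})"] = va + vb
        out[f"({na}-{nb})"] = va - vb
        out[f"({na}*{nb})"] = va * vb
        if abs(vb) > 1e-9:
            out[f"({na}/{nb})"] = va / vb
        if 0 < va < 1e6:  # keep exponentiation real and non-overflowing
            out[f"({na}^{nb})"] = va ** vb
    return out

level1 = combine(consts.items(), consts.items())
level2 = combine(level1.items(), consts.items())
pool = {**consts, **level1, **level2}

hits = [(a, b)
        for (a, x), (b, y) in itertools.combinations(pool.items(), 2)
        if 1e-9 < abs(x - y) / max(abs(x), abs(y), 1e-300) < 1e-4]
print(len(hits))
print(hits[:5])  # the classic near-coincidence (e^pi) - pi ~ 20 is among the hits
```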

On a larger scale, this is also known as the Strong Law of Small Numbers: https://en.wikipedia.org/wiki/Strong_Law_of_Small_Numbers The previous paragraph is a very bite-sized example of why this holds that you can code up yourself if you are interested.


> Most of them are, in fact, utterly pointless; SELECT statements that return no values, SELECT statements that return all values (equally pointless)

There are numerous examples of such queries which are far from pointless, such as "SELECT number FROM reals WHERE number = sqrt(-1)" for the former, and "SELECT number FROM reals WHERE number = number * 1" for the latter.

I would call those results interesting, insofar as they're absolutely required to do any interesting number theory.

Not to detract from your interesting post! I enjoy the conceit, and could see myself using it in conversation.


There are many coincidences that seem so strange that you start thinking they cannot be coincidences, until you take into account that there is an infinite number of facts which do not seem coincidental at all. But beware of untrue facts: https://en.wikipedia.org/wiki/Lincoln%E2%80%93Kennedy_coinci...


Presumably in a different number base (base 12, eg) it would be a different number that had this reciprocal property. Would it still be in the fibonacci series in that base?


This was my thought too. If 1/89 in base 10 works, and 89 is the 10th unique term of the series

> 1,1,2,3,5,8,13,21,34,55,89,144,...

then does 1/144 work in base 11, and 1/233 in base 12?

Assuming we of course translate the base-10 number 144 to the appropriate base-11 number (121) first. I'm too bad at math to do this anymore, maybe someone else can tell us ;)


Compute the generating function f(x) = sum_n f_n x^n where f_n satisfies your favorite recursion. Compute f(b^-1) where b is your base. For fibonacci f(x) = x / (1 - x - x^2).


I noticed a while ago that the powers of 1001 encode the successive rows of pascal's triangle/the binomial coefficients. This is not so surprising, since we are effectively taking powers of the polynomial (1+x), but replacing x with 1000 (clearly it works the same if we use any other power of ten instead). I wonder if we can find a relationship to that here. We might start by looking at this relation:

  1/(1-x) = 1 + x + x^2 + x^3 + ...
Hopefully this will yield an operator x whose successive powers are the fibonacci numbers. Take .01/(1-x) = 1/89, then x = 0.11. Actually, the powers of x, just like 1001 above, will yield rows of pascal's triangle. So the taylor expansion above tells us that F(k+1) = Σ(n=0..⌊k/2⌋) C(k−n, n), in other words that each fibonacci number is the sum of a diagonal of pascal's triangle (like here: https://cdn1.byjus.com/wp-content/uploads/2018/11/maths/2016...)
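That diagonal-sum identity is easy to spot-check with binomial coefficients; a sketch, with my own indexing convention (F(1) = F(2) = 1):

```python
from math import comb

# Each Fibonacci number as a diagonal sum of Pascal's triangle:
# F(k+1) = sum over n of C(k - n, n).
def fib_from_diagonal(k):
    return sum(comb(k - n, n) for n in range(k // 2 + 1))

print([fib_from_diagonal(k) for k in range(10)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```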

More generally, we can compute numbers with decimal expansions of the fibonacci numbers with 10^-2n / (1 - 10^-n - 10^-2n). Notice that this is just the z-transform of the recurrence relation of the fibonacci series, with 10^n replacing z:

  Z(f(n)) = Z(f(n-1) + f(n-2) + δ(n-2)) (δ is the kronecker delta)
     F(z) = z^-2 / (1 - z^-1 - z^-2)
The taylor expansion of this expression has coefficients equal to the terms of the fibonacci sequence - which makes sense, because that's the definition of the z-transform. We can, with a little rearranging, get an explicit formula for the fibonacci sequence from it too:

  Take φ± = (1 ± √5)/2
  Then F(z) = z^-2/(1 - z^-1 - z^-2) = 1/((z - φ+)(z - φ-))
            = ( 1/(z - φ+) - 1/(z - φ-) )/√5 (by partial fraction decomposition)
  Z^-1(F(z)) = f(n) = (φ+^(n-1) - φ-^(n-1))/√5


In case anyone is interested, I ran this procedure using integer bases besides 10 and obtained the following sequence

1, 5, 11, 19, 29, 41, 55, 71, 89, ...

Searching it on the OEIS (a great resource for mathematics) gave

http://oeis.org/search?q=1%2C5%2C+11%2C+19%2C+29%2C+41%2C+55...

It turns out that these are precisely the first values of the polynomial n^2 - n - 1 (the base-n analogue of 89 = 10^2 - 10 - 1)

I haven't verified this fact, but it seems like it comes from an application of the generating function of the Fibonacci numbers.

I posted my code in another comment

https://news.ycombinator.com/item?id=24933085
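Assuming the procedure was the natural base-b analogue (the denominator b² − b − 1 playing the role of 89 = 10² − 10 − 1), the sequence drops out of a one-liner:

```python
# The base-b analogue of 89 = 10^2 - 10 - 1, for bases 2 through 10.
values = [b * b - b - 1 for b in range(2, 11)]
print(values)  # [1, 5, 11, 19, 29, 41, 55, 71, 89]

# Sanity check in base 10: long-dividing 1/89 really does emit
# 0, 1, 1, 2, 3, 5, ... before the carries kick in.
digits, rem = [], 1
for _ in range(6):
    rem *= 10
    digits.append(rem // 89)
    rem %= 89
print(digits)  # [0, 1, 1, 2, 3, 5]
```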


Any number is remarkable if you're smart enough to find why it's remarkable.


There's the famous exchange between Hardy and Srinivasa Ramanujan about interesting and uninteresting numbers: Hardy remarked that the number 1729 of the taxicab he had ridden seemed "rather a dull one", and Ramanujan immediately answered that it is interesting, being the smallest number that is the sum of two cubes in two different ways.

https://en.wikipedia.org/wiki/Interesting_number_paradox


And if it's not remarkable, then that is in itself remarkable.


Aren't most transcendental numbers unremarkable?


No number is unremarkable. Let's construct the set of all unremarkable numbers. Now, let's construct the sequence of those numbers in order. The first member of that sequence has the remarkable property that it is the smallest unremarkable number. That is remarkable, so remove it from the set. By induction, the set must be empty.


That doesn't work, because the real numbers are not enumerable, so you cannot do induction over them. That joke "proof" only works for natural numbers and goes like this:

Theorem: all natural numbers are interesting

* Base case: 0 is interesting because it is the smallest natural number, as well as the identity element of + operation.

* Inductive case: Assume the theorem holds for all m < n. Take n. If it is not interesting, then n is the smallest non-interesting number. But that's interesting, because it's the smallest such number. Therefore it cannot be non-interesting. Therefore the theorem holds for n.

By induction, we conclude all natural numbers are interesting. QED.


> ...only works for natural numbers...

That proof also works for the rationals with a suitable ordering. Example: 0, 1, -1, 2, -2, 1/2, -1/2, 3, -3, 1/3, -1/3, 2/3, etc....


Yes, it works for any enumerable set (i.e. any set that has a bijection with the natural numbers).


> Now, let's construct the sequence of those numbers in order. The first member of that sequence

Not happening. Sets of numbers do not always have a first in order.

Consider the set of unremarkable real numbers > 0 under the regular arithmetic ordering. Which one is first?


I think I found a typo in the final line (not impactful for the result, however).

11/89 should be 11/8900

This can be verified by using the following Sage code (simply go to https://sagecell.sagemath.org/ to avoid installing sage)

  b = 10

  A = Matrix([[0, 1],[1/(b**2), 1/b]])
  I = matrix.identity(2)

  show((I - A).inverse() * Matrix([[1/b**2],[1/b**3]]))


Did nobody tell Fibonacci that rabbits are mortal?


I think it would be built into the curve or sequence that the new births replace the dying ones.

Obviously though they would have limited resources and couldn't keep growing indefinitely.


oh weird, I accidentally "discovered" this when I was a kid, and assumed it was known for a long time... but 1994?


Question (perhaps naive):

Given the decimal expansion of 1/89, is there any way to directly retrieve the Fibonacci sequence from it? (that is, not using any knowledge of the sequence itself)

I'm assuming it can't be done because different sets of fractions can sum to 1/89, but maybe I'm missing something.


Presumably in a different number base, the number would be different. Would it still be in the series?


Without any basis whatsoever (and I really must one day put this idea out of its misery with some studying), I have long suspected that quantum mechanics involves parallel universes where different bases more aptly fit with that other reality.

My thought is surely crackpot but I'll explain how my idea arose :

A fraction, e.g. 1/3, describes a number to infinite accuracy but creates a challenge for base-10 calculations.

I thought about the precision necessary for learning the math of quantum particles, and the experimental, roundabout way we try to figure out how they behave in order to infer their properties (Higgs was the other way around, but I am optimistic that we'll predict much more in future instead of this convoluted rigmarole of observations). I started wondering whether you could approximate extremely high-precision decimals as fractions in non-decimal bases and, from there, simplify calculations with far greater resolution.

How much more capable would Nyquist-Shannon sampling be if we could clock with the precision of infinitely many decimal fp digits, but handle only a short, simple "one third" as input or divisor?

My silly mind wandered off to imagine particles jumping between universes with different bases, as a sequential progression through the precision of their infinitesimal steps through space and time.


Base 10 doesn't fit particularly well with our reality, it was picked for historical reasons. Every other base works just as well.


including fractional bases (rational or irrational) and even negative bases, I think.


This is my favourite kind of pure speculation. You don't know what you're talking about, really, but you're clearly exploring vast areas of thought at the same time.

Remember, integers are integers are integers, because they represent the intrinsic "whole quantity" of something; this is as concrete as logic will get, the idea that there are "ones", it's pretty hard to imagine a universe that doesn't have that.

Once you have integers, then you're going to do math in an integer base; and making too high of a base has diminishing value at a certain point, so it's unlikely we'd see higher than maybe 60. Non-integer bases exist - https://en.wikipedia.org/wiki/Non-integer_base_of_numeration - but it's clear to me that I'm too stupid to use them, and so probably most other people are too. This tells me that it's going to be a comparatively rare Many World that chooses to do this.

Choosing a number like 12 or 60 with a lot of divisors would have been nice. 1/3 in base 12 is "0.4", which is a lot nicer than 0.333... and would probably help make a lot of younger math education way easier.

Combining that with skipping "degrees" entirely and using radians from the start would probably have been wise choices. We'd have been much better equipped to divide things! I imagine a six-fingered being would have had an immediate advantage in that regard, but, alas.

Now, would some of those number constants look particularly different? Not really. Pi in base 12 is "3.184809493B91866", for instance, so it doesn't look like that would be much easier. E and other numbers similarly just end up with different expansions.

Remember, you can use whatever number base you want to, in this universe. The key is that it's just a way your brain interprets the symbols to represent a quantity; don't confuse the map for the territory. Five, the quality of having five whole entities, exists the same when it's 101 in binary or 10 in base 5 or 11 in base 4; either way it's all still just five, and so the right thing to do is to use the base system that most intuitively works for you so that it becomes transparent.
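The base-12 claims above are easy to check mechanically: multiply the fractional part by the base and peel off the integer part as the next digit. A minimal sketch (the digit alphabet "0123456789AB" is my choice, covering bases up to 12):

```python
from fractions import Fraction

def frac_digits(x, base, n):
    """First n digits after the point of x (with 0 <= x < 1) in `base`."""
    out = []
    for _ in range(n):
        x *= base
        d = int(x)              # the next digit is the integer part
        out.append("0123456789AB"[d])
        x -= d
    return "".join(out)

print(frac_digits(Fraction(1, 3), 12, 4))  # 4000 -> 1/3 is exactly 0.4 in base 12
print(frac_digits(Fraction(1, 3), 10, 4))  # 3333 -> repeats forever in base 10
```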


The fraction, the decimal expansion, and the series are all in the same base.


> As you can easily check, 1/89 = .01123595595... Bizarre, eh?

Typo: 1/89 = .01123595505...


So strange. Published another post about the same topic an hour before you


1/89 = .01123595505..., not .01123595595..., small typo


btw, there's a typo on the page for the value of 1/89, it should be:

   .01123595505617977528
but the math does work, of course; it converges:

   >>> .01 + .001 + .0002 + .00003 + .000005 + .0000008 + .00000013 +  .000000021 + 
   .0000000034 + .00000000055 + .000000000089 + .0000000000144 + .00000000000233 + 
   .000000000000377
   
   0.011235955056107002


Another surprising fraction:

987654312/123456789 = 8

And this works in any base.
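To spell out the "any base" claim: in base b, the ascending-digit number 123...(b-1) times b-2 gives the descending digits (b-1)(b-2)...3 followed by 1 2 (so the quotient "8" is really the digit b-2). A quick check of my reading of that pattern:

```python
def digits_to_int(digs, b):
    """Interpret a digit list as an integer in base b."""
    n = 0
    for d in digs:
        n = n * b + d
    return n

# ascending digits 1..(b-1), times (b-2), equals digits (b-1)..3 then 1, 2
for b in range(4, 17):
    asc = digits_to_int(list(range(1, b)), b)
    desc = digits_to_int(list(range(b - 1, 2, -1)) + [1, 2], b)
    assert desc == (b - 2) * asc

# the familiar base-10 instance
assert digits_to_int([9, 8, 7, 6, 5, 4, 3, 1, 2], 10) == 8 * 123456789
print("holds for bases 4..16")
```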


This is cool. The proof is a bit over my head though. How does someone go about learning or dissecting the syntax of this proof...?


Here's a simple way of looking at it (and we'll build up to the matrix notation).

The nth and (n+1)st Fibonacci numbers can be written as a system of two equations in the nth and (n-1)st numbers:

  F(n) = F(n)
  F(n+1) = F(n) + F(n-1)
where the base cases are F(0) = 0 and F(1) = 1.

The question is then to show that (note that the indices under F are actually wrong in the OP...):

  1/89 = .01F(1) + .001F(2) + .0001F(3) + ... + 10^(-(n+1))F(n) + ...
One beautiful (and mathematically simple) way of analyzing this system is by the use of linear algebra. The idea is that, because F(n+1) depends linearly on F(n) and F(n-1) (see the above definition), then we can write the previous system with the following (linear-algebraic) notation

  [ F(n)   ]   [ 0 1 ] [ F(n-1) ]
  [ F(n+1) ] = [ 1 1 ] [ F(n)   ].
If we write x(n+1) as the vector (F(n), F(n+1)), i.e., first entry is F(n) and the second entry is F(n+1), and the matrix as A, then x(n+1) = Ax(n). (Note that A is the matrix whose entries are exactly the coefficients of the linear equation we gave above!)

In other words, we've reduced the problem down to the question of investigating the properties of the matrix A! Now, the sum we were looking at, originally, can be written (in terms of x(n)) as the first entry of (note that x is a vector as we've defined it!)

  x(2) + .1x(3) + .01x(4) + ... = x(2) + .1Ax(2) + .01A^2x(2) + ... = x(2) + (.1A)x(2) + (.1A)^2x(2) + ...
There's a slick proof (see [0]) that, if this sequence converges, then its result is given by the first entry of

  (I - .1A)^(-1)x(2),
which is exactly what the result gives. (I have changed the normalization a little bit for convenience, but it is the same proof :)

-----

[0] If

  y + By + B^2y + ... = z converges, then we can multiply both sides by B to get

  By + B^2y + ... = Bz
But, here's the magic! Let's subtract the second equation from the first to get

  y = z - Bz = (I - B)z,
so, multiplying both sides by the inverse of (I - B), we get:

  y + By + B^2y + ... = z = (I - B)^(-1)y,
as required!
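As a numerical sanity check of that closed form (using the normalization where the series is F(1) + F(2)/10 + F(3)/100 + ..., i.e. 100 times the 1/89 series): the first entry of (I - A/10)^(-1) applied to x(2) = (1, 1) is 100/89, and the partial sums do approach it. A sketch with exact fractions:

```python
from fractions import Fraction as Fr

def fib(n):
    """Iterative Fibonacci with F(0) = 0, F(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# partial sum of F(1) + F(2)/10 + F(3)/100 + ...  (100 times the 1/89 series)
s = sum(Fr(fib(n), 10 ** (n - 1)) for n in range(1, 60))

# first entry of (I - A/10)^(-1) (1, 1)^T with A = [[0, 1], [1, 1]],
# inverted by hand: I - A/10 = [[1, -1/10], [-1/10, 9/10]], det = 89/100
det = Fr(89, 100)
closed = (Fr(9, 10) * 1 + Fr(1, 10) * 1) / det

print(closed)  # 100/89
print(float(closed - s))  # remainder of the tail: astronomically small
```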


As a self-studier, I'm a big fan of this book https://www.people.vcu.edu/~rhammack/BookOfProof/ .


An interesting property. Seeing the proof is like seeing how a magic trick is done--ah of course it works like that.


Ah makes sense if you think of it as 1 / (100 - 10 - 1)

89 being a fib itself is quite a coincidence though?


It had to add up to something. The surprise to me is that it's a rational number.


What's more interesting is that the sum is periodic.


Fibonacci died in 1242, so 13th century not 15th.


8th Fibonacci century. (0 1 1 2 3 5 8 13)


Fibonacci retracements are used heavily in trading; I wonder how prices/averages line up so nicely around those points.


Nice



