
Applying Eigenvalues to the Fibonacci Problem - mparramon
http://scottsievert.github.io/blog/2015/01/31/the-mysterious-eigenvalue/?hn=1
======
ColinWright
Massive discussion of these sorts of things here:

[https://news.ycombinator.com/item?id=8761343](https://news.ycombinator.com/item?id=8761343)

Indeed, searching[0] for Fibonacci shows many, many, _many_ discussions about
this.

[0]
[https://hn.algolia.com/?query=fibonacci&sort=byPopularity&pr...](https://hn.algolia.com/?query=fibonacci&sort=byPopularity&prefix&page=0&dateRange=all&type=story)

------
Chinjut
Never mind 1.618... What a silly number. All that was special about it was that
its square was one plus itself. Well, so what? That's also true of f: f^2 = 1
+ f. And continuing in this way, we find that f^3 = f^2 * f = (1 + f) * f = f
+ f^2 = 1 + 2f. And f^4 = 2 + 3f, and f^5 = 3 + 5f, and so on, in the same
way.

Clearly, calculating Fibonacci numbers is as good as calculating the
"coefficients" of powers of f. We don't need to bother exponentiating
1.618...; let's just exponentiate f.

But how do we efficiently raise f to high powers? Well, the same way we
efficiently raise anyone to high powers: by repeated squaring (or addition
chains more generally). If you want to compute f^(2n), you can just compute
f^n and square it [and to get f^(2n + 1), multiply that by f]. So we can
compute f^n with on the order of log(n) multiplications [though, it should be
noted, the actual quantities being multiplied will grow in size as we go on].

But what IS f if it's not 1.618...? Well, it's a Fibonaccex number: these are
numbers of the form a + bf, which add and multiply just as you'd expect, subject
to the rule that f^2 = 1 + f. [So, in particular, (a + bf)^2 = a^2 + 2abf +
b^2(1 + f) = (a^2 + b^2) + (2ab + b^2)f, which we can use for our repeated
squaring]. Introducing the Fibonaccex numbers with f^2 = 1 + f is no different
from introducing the complex numbers with i^2 = -1.

And working with these directly gives us just as efficient a method for
computing Fibonacci numbers as Binet's formula, without all the worries about
floating-point precision. Indeed, as far as I'm aware, it gives us the most
efficient method anyone knows (supposing you use efficient integer
multiplication).
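
A minimal sketch of this (my code, not the commenter's): store a + bf as a
pair [a, b], and do the repeated squaring directly. I use BigInts, which are
modern JS and keep the growing coefficients exact.

```javascript
// Repeated squaring in the "Fibonaccex" numbers a + b*f, where f^2 = 1 + f.
// An element a + b*f is stored as the pair [a, b] of BigInts.

function mul(x, y) {
    var a = x[0], b = x[1], c = y[0], d = y[1];
    // (a + bf)(c + df) = ac + (ad + bc)f + bd*f^2
    //                  = (ac + bd) + (ad + bc + bd)f
    return [a * c + b * d, a * d + b * c + b * d];
}

function fib(n) {
    var result = [1n, 0n]; // multiplicative identity: 1 + 0f
    var base = [0n, 1n];   // f itself
    n = BigInt(n);
    while (n > 0n) {
        if (n & 1n) result = mul(result, base);
        base = mul(base, base);
        n >>= 1n;
    }
    // f^n = F(n-1) + F(n)*f, so the coefficient of f is F(n).
    return result[1];
}
```

Note that this is exactly the "(a^2 + b^2) + (2ab + b^2)f" squaring rule from
above, just written for the general product.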

------
codepie
There are many ways to get to Binet's Formula from the recurrence relation.
But it gives wrong results for large values of N. I have also written on the
same topic
[http://www.dipkakwani.appspot.com/5651124426113024](http://www.dipkakwani.appspot.com/5651124426113024)

~~~
deckar01
> Binet's Formula ... gives wrong result for large values of N [due to the
> fixed precision estimations of the irrationals].

Sounds like a job for an arbitrary precision decimal library and an estimation
of the precision the irrationals need to minimize the error at N.

Do you know of any attempts to overcome the error of computing with irrational
numbers?

~~~
codepie
I think this might provide a relevant answer:
[http://stackoverflow.com/questions/9645193/calculating-fibon...](http://stackoverflow.com/questions/9645193/calculating-fibonacci-number-accurately-in-c)

~~~
deckar01
> arbitrary precision libraries often come with their own optimised Fibonacci
> functions.

I will check out some of these implementations.

[http://cs.stackexchange.com/questions/7145/calculating-binet...](http://cs.stackexchange.com/questions/7145/calculating-binets-formula-for-fibonacci-numbers-with-arbitrary-precision)

This post suggests that the matrix solution is desirable.

I wonder if the complexity of performing exponentiation on high precision
values would ever overtake a logarithmic solution.

------
jpfr
> While this isn’t recursive, there’s still an n−1 unnecessary matrix
> multiplications. These are expensive time-wise and it seems like there
> should be a simple formula involving n.

Computing the nth power of the matrix takes only log n iterations [1]. So the
overhead will be much less than n-1.

[1]
[http://en.wikipedia.org/wiki/Exponentiation_by_squaring](http://en.wikipedia.org/wiki/Exponentiation_by_squaring)
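
For concreteness, a sketch of that (mine, not jpfr's): exponentiation by
squaring applied to the Fibonacci matrix Q = [[1,1],[1,0]], with BigInts
(newer than this thread) so large entries stay exact.

```javascript
// Q^n = [[F(n+1), F(n)], [F(n), F(n-1)]], computed in O(log n) matrix products.

function matMul(A, B) {
    return [
        [A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]],
        [A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]],
    ];
}

function fib(n) {
    var result = [[1n, 0n], [0n, 1n]]; // identity matrix
    var base = [[1n, 1n], [1n, 0n]];   // Q
    for (n = BigInt(n); n > 0n; n >>= 1n) {
        if (n & 1n) result = matMul(result, base); // use this bit of n
        base = matMul(base, base);                 // square for the next bit
    }
    return result[0][1]; // F(n)
}
```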

------
chucksmart
Doing anything new in math is really really hard.

------
drostie
So, first off, you can get this whole result much faster if you know what
"linearity" is (and while your average HNer might not, anyone who knows what
eigenvectors are should). The basic idea is that we say a function f is linear
if it distributes over addition of whatever objects it takes, so that we
always know:

    
    
        f(x + y) = f(x) + f(y)
    

If you extend this line of thinking a bit, then you can see that the Fibonacci
recurrence is "linear" in the sense that if you have two sequences, f and g,
which both obey this equation, then the sequence h = f + g produced by adding,
term by term, each of their numbers, also follows this equation. (Just so
we're clear: adding together f[n] = f[n-1] + f[n-2] with g[n] = g[n-1] +
g[n-2] and using h[m] = f[m] + g[m] gives you h[n] = h[n-1] + h[n-2], in other
words, so h is also in the Fibonacci family.)
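
This is easy to check numerically; a quick sketch (my code) sums the ordinary
Fibonaccis with the Lucas numbers, term by term, and verifies the result still
satisfies the recurrence:

```javascript
// Build a "Fibonacci family" sequence from arbitrary starting values.
function family(a0, a1, len) {
    var s = [a0, a1];
    for (var i = 2; i < len; i++) s[i] = s[i - 1] + s[i - 2];
    return s;
}

var f = family(0, 1, 10);            // ordinary Fibonaccis
var g = family(2, 1, 10);            // Lucas numbers
var h = f.map(function (x, i) { return x + g[i]; }); // term-by-term sum

// h also satisfies h[n] = h[n-1] + h[n-2]:
var ok = h.slice(2).every(function (x, i) { return x === h[i + 1] + h[i]; });
```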

We can then search for special solutions of the form f[n] = p^n. There are, it
turns out, two solutions, as when you substitute that into the recurrence and
divide by p^(n−2) you get p^2 = p + 1, the equation for the Golden ratio. That
gives you these numbers p_{±} = (1 ± √5)/2.

With a little more thinking you can conclude that if you have any two
"different" Fibonacci family sequences f and g, then _every_ family sequence h
can be represented as h = A * f + B * g, where A and B are ordinary numbers.
This is because if two sequences have the same first two numbers, the
recurrence forces them to have all the rest of the same numbers. (The meaning
of "different" above is therefore _linear independence_ of the first two
numbers; f[0] * g[1] ≠ f[1] * g[0].) Solving for A and B in the case where
f[0] = 0, f[1] = 1 yields the "normal" Fibonaccis as:

    
    
        f[n] = (p_+^n − p_−^n) / √5
    

And with a little effort you can see that |p_−| < 1 so that for large N (which
is the only time it counts) the result can be simplified by rounding. It turns
out that this approach happens to also round correctly for n=0, n=1, etc. so
that you just get:

    
    
        var fib = (function () { // scope bracket to precompute r and sqrt5 as local vars:
            var sqrt5 = Math.sqrt(5),
                r = (1 + sqrt5) / 2;
            return function fib(n) {
                return Math.round(Math.pow(r, n) / sqrt5);
            };
        }());
    

This is blazingly fast but the Fibonaccis grow exponentially like p_+^n, so
every step requires one more bit of precision. By luck this works until f(76)
at which point it runs into problems with double arithmetic.

So if you want something even faster and more accurate, just precompute the
array out to 79 elements:

    
    
        var fib = (function () { // scope bracket
            var i, fa = [0, 1];
            for (i = 2; i < 79; i += 1) {
                fa[i] = fa[i - 1] + fa[i - 2];
            }
            return function fib(x) {
                if (x >= fa.length) { // fa[79] and up were never computed
                    throw new Error("Fibonacci too big: fib[" + x + "] overflows double integer precision and JS has no BigInts.");
                }
                return fa[x];
            };
        }());
    

To really get the right result for, say, fib(100) you'll need a language with
arbitrarily large integers, and you'll have to do a matrix exponentiation to
do the same "powers" trick with arbitrary precision.
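
One way to sketch that (my code; it also leans on BigInts, which JS has since
gained) is the "fast doubling" form of the powers trick, using the identities
that fall out of squaring the Fibonacci matrix:

```javascript
// Fast doubling: F(2k) = F(k) * (2*F(k+1) - F(k)),  F(2k+1) = F(k)^2 + F(k+1)^2.
// Exact for any n, thanks to BigInt.

function fib(n) {
    // pair(k) returns [F(k), F(k+1)] as BigInts.
    function pair(k) {
        if (k === 0n) return [0n, 1n];
        var half = pair(k >> 1n);
        var a = half[0], b = half[1];    // a = F(k/2), b = F(k/2 + 1)
        var c = a * (2n * b - a);        // F(2 * (k/2))
        var d = a * a + b * b;           // F(2 * (k/2) + 1)
        return (k & 1n) ? [d, c + d] : [c, d];
    }
    return pair(BigInt(n))[0];
}
```

With this, fib(100) comes back exactly as 354224848179261915075.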

At that point, I have some bad news for you: trading O(n) additions for O(log
n) multiplications is _not_ asymptotically faster, at least with schoolbook
multiplication! I think it may be slower in practice due to the naive
algorithm generating O(n p^n) data which needs to be garbage collected, rather
than something like O(log(n) p^n) data... but they both end up having O(n^2)
running-time if you ignore this aspect of memory use. (N.B.: it is indeed n^2
and not n^2 log(n) for the exponentiation-by-squaring one. Most of the
multiplicative effort ends up at the very tail end, so the log(n) completely
falls out; you can basically do the sum with the geometric-series formulas to
see this.)

