

Computing Fibonacci - legaultmarc
http://atgcio.blogspot.com/2013/01/computing-fibonacci.html

======
wging
_"The proof of this identity probably requires mathematical knowledge that is
beyond my current capacity"_

Not at all!

Let M = [[1 1] ; [1 0]], the matrix under discussion. What does M do to a
column vector v = [x ; y] under the rules of matrix multiplication?

    
    
        M*v = [x+y ; x].
    

But what is this transformation, in terms of the input and output vectors?
It's the same transformation as the Fibonacci transformation! We take
[current, previous] --> [current + previous, current].

This tells us that multiplying the matrix _n_ times will give us a matrix that
gives the same result as applying the Fibonacci transformation n times:

    
    
        M * (M * v) = (M * M) * v,
    

etc. (Think of the left hand side as applying the transformation twice, one
after the other, and the right hand side as applying _once_ a single
transformation that has the same effect as two Fibonacci transformations.)

Now this tells us that M^n [1 ; 0] (the n^th power of the matrix M, multiplied
by the initial state vector Fib_1 = 1, Fib_0 = 0) equals [Fib_{n+1} ; Fib_n].

You should be able to work backwards from that to see that M^n must have the
entries specified, since matrix multiplication is just a simple algebraic
process.

I've probably made some sort of off-by-one error here. But that's the idea.
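
Here's a quick numerical check of the indexing (a sketch in Python; with Fib_1 = 1, Fib_0 = 0, the product M^n [1 ; 0] works out to [Fib_{n+1} ; Fib_n]):

```python
# Check the identity M^n [1 ; 0] = [Fib_{n+1} ; Fib_n], with Fib_0 = 0, Fib_1 = 1.
def mat_mul(a, b):
    # 2x2 matrix product, written out entry by entry.
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow_naive(m, n):
    # Repeated multiplication -- slow, but fine for testing the identity.
    result = [[1, 0], [0, 1]]  # identity matrix
    for _ in range(n):
        result = mat_mul(result, m)
    return result

M = [[1, 1], [1, 0]]
fibs = [0, 1]
while len(fibs) < 13:
    fibs.append(fibs[-1] + fibs[-2])

for n in range(1, 11):
    mn = mat_pow_naive(M, n)
    # M^n applied to [1 ; 0] is just the first column of M^n.
    assert [mn[0][0], mn[1][0]] == [fibs[n + 1], fibs[n]]
```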

What this suggests is that _any_ method of computing M^n will work to give you
Fib_n. You could try repeated matrix multiplications, but why not an
adaptation of the standard fast exponentiation algorithm? To compute M^k,
either square M^(k/2) (if k is even) or multiply M by M^(k-1) (if k is odd).
So M^13 would be

    
    
        M * M^12 = M * (M^6)^2 = M * ((M^3)^2) ^2 = M * ((M^2 * M)^2)^2, 
    

done in 5 multiplications instead of the 12 required the naive way

    
    
        (M * M * M * ... * M).
    

This is done in chapter 1, exercise 19 of SICP, although they never explicitly
admit that the transformation under discussion is a _linear transformation_ or
write down its associated matrix:
<http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html#%_thm_1.19>
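
The adaptation described above can be sketched in a few lines of Python (the helper names are my own, not SICP's):

```python
def mat_mul(a, b):
    # 2x2 matrix product, written out entry by entry.
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, k):
    # Fast exponentiation: O(log k) matrix multiplications.
    if k == 0:
        return [[1, 0], [0, 1]]  # identity matrix
    if k % 2 == 0:
        half = mat_pow(m, k // 2)
        return mat_mul(half, half)        # square M^(k/2) when k is even
    return mat_mul(m, mat_pow(m, k - 1))  # multiply M by M^(k-1) when k is odd

def fib(n):
    # M^n carries Fib_n in its off-diagonal entries.
    return mat_pow([[1, 1], [1, 0]], n)[0][1]
```

So `fib(13)` costs only O(log 13) matrix multiplications rather than 13 of them.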

By the way, this view of matrices--that they express _transformations that can
be applied to vectors_ , and that transformations that can be written down as
matrices have special properties and can be manipulated and composed to form
new transformations with the same properties--is why linear algebra will knock
your socks off in the right hands. (By contrast, if all you're told is that a
matrix is what we call it when you line numbers up in a pretty little row, you
will begin to hate your math class.)

~~~
Someone
For those wondering: that fast 'half the exponent' multiplication algorithm is
not optimal in the number of matrix multiplications. For example, x^15 can be
done in 5 multiplications, while 'half the exponent' requires 6. See
<http://en.wikipedia.org/wiki/Addition_chain_exponentiation>:

    
    
        x3 = x * x * x
        x6 = x3 * x3
        x12 = x6 * x6
        x15 = x12 * x3
    

vs

    
    
        x2 = x * x
        x4 = x2 * x2
        x8 = x4 * x4
        x12 = x4 * x8
        x14 = x12 * x2
        x15 = x14 * x
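
For the curious, both chains check out when you count the multiplications (a quick sketch; x = 3 is an arbitrary base):

```python
x = 3

# Addition-chain version: 5 multiplications.
x3 = x * x * x    # 2 multiplications so far
x6 = x3 * x3      # 3
x12 = x6 * x6     # 4
x15 = x12 * x3    # 5
assert x15 == x ** 15

# 'Half the exponent' alternative: 6 multiplications.
x2 = x * x        # 1
x4 = x2 * x2      # 2
x8 = x4 * x4      # 3
x12b = x4 * x8    # 4
x14 = x12b * x2   # 5
x15b = x14 * x    # 6
assert x15b == x ** 15
```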

------
pluies
The naive recursive implementation + memoization would be a cool addition. :)
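
A sketch of that, using `functools.lru_cache` as the memo table:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # The naive recursion; caching each result makes it linear-time.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```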

~~~
cygx
See also [1], which, however, does not provide a recursive version.

[1] <http://stackoverflow.com/a/427810/48015>

