
Fibonacci Numbers in the Real World - ColinWright
http://lee-phillips.org/lispmath/
======
sbi
The "right" way to generate Fibonacci numbers is not via the closed form, but
by exponentiation of the matrix A = [0 1; 1 1] that defines Fibonacci numbers
as an element of its endomorphism ring Z[x]/(x^2 - x - 1). This doesn't rely
on explicitly computing the eigenvalues of A and scales well to more complex
recursions. Sample Haskell:

    
    
      -- F_n for any integer n, computed in Z[x]/(x^2 - x - 1)
      fib :: Integer -> Integer
      fib n = fst . loop (0, 1) base . abs $ n
        where -- (a, b) represents a*A + b; reduce products via A^2 = A + 1
              pmul (a, b) (c, d) = ((a + b)*(c + d) - b*d, b*d + a*c)
              -- base is A for n >= 0, and A^-1 = A - 1 for n < 0
              base = if n >= 0 then (1, 0) else (1, -1)
              loop acc sq m
                | m == 0         = acc
                | m `mod` 2 == 0 = loop acc (pmul sq sq) (m `div` 2)
                | otherwise      = loop (pmul sq acc) (pmul sq sq) (m `div` 2)

~~~
leephillips
Indeed, this is covered in one of the references in my article, which didn't
actually approach the issue of the "best" or "right" way to calculate the
numbers.

~~~
sbi
Which reference?

~~~
leephillips
The last one ([11]) to Nayuki Minase's article, where he lists several
algorithms, including matrix exponentiation.

~~~
sbi
Thank you for the reference. The algorithm in the parent comment is not
actually matrix exponentiation but the "fast doubling algorithm" in Minase's
terminology, where you exponentiate the matrix algebraically using the Cayley-
Hamilton theorem rather than using matrix multiplication.

~~~
ColinWright
You have me confused. The usual "Russian Peasant Multiplication" can be
adapted to produce fast exponentiation, or "exponentiation by squaring". Your
code appears to be doing exactly that.

Further, that is then exactly what is quoted in reference [11], just as
leephillips says. There it is then taken a step further and specialized to the
calculation of the Fibonacci numbers.

So the referenced paper seems to cover what you have said, and go even
further. To that end, I'm confused as to what you are saying. Are you agreeing
that the paper does what your code does? Or are you disagreeing, in which
case, could you be more explicit?

Thanks.

~~~
sbi
You are correct that this code uses "Russian Peasant Multiplication." The
question is: what is being exponentiated? One choice is the matrix A = [0 1; 1
1], whose powers are A^n = [F_{n-1} F_n; F_n F_{n+1}] if we compute those
powers _treating A as a matrix_. This involves some redundant computations
since F_n is being computed twice.
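For comparison, here is a minimal Python sketch (mine, not from the thread) of that first choice, exponentiating A as a literal 2x2 matrix; note that F_n lands in two entries of A^n, which is exactly the redundancy described above:

```python
def mat_mul(x, y):
    """Multiply two 2x2 matrices given as ((a, b), (c, d))."""
    (a, b), (c, d) = x
    (e, f), (g, h) = y
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def fib_matrix(n):
    """F(n) for n >= 0 via exponentiation by squaring of A = [[0, 1], [1, 1]]."""
    result = ((1, 0), (0, 1))   # identity matrix
    square = ((0, 1), (1, 1))   # A
    while n > 0:
        if n % 2 == 1:
            result = mat_mul(result, square)
        square = mat_mul(square, square)
        n //= 2
    # A^n = [[F_{n-1}, F_n], [F_n, F_{n+1}]]; F_n appears twice
    return result[0][1]
```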

An alternative method is to use the following identity:

    
    
      (aA + b)(cA + d) = acA^2 + (ad + bc)A + bd
                       = (ad + bc + ac)A + (bd + ac)
                       = ((a + b)(c + d) - bd)A + (bd + ac)
    

(In fact, A^n = F_n A + F_{n-1} I_2). This step requires 3 multiplications and
4 additions and is more-or-less equivalent to the "fast doubling" algorithm in
reference [11], which is more optimized. But this derivation is completely
generic: it relies only on the characteristic polynomial of A, namely x^2 - x
- 1, which by Cayley-Hamilton gives A^2 = A + I.

In the case where A is not the Fibonacci matrix but something much larger,
working with polynomials modulo the characteristic polynomial of A is much
faster than working with A as a matrix.
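As a sketch of that generic idea (hypothetical code, not taken from the thread or reference [11]; the function names are made up): the same squaring loop computes the n-th term of any order-k linear recurrence by exponentiating the polynomial x modulo the characteristic polynomial, instead of multiplying k-by-k matrices.

```python
def poly_mul_mod(p, q, c):
    """Multiply polynomials p, q (coefficients low-order first) modulo the
    characteristic polynomial x^k - c[0]*x^(k-1) - ... - c[k-1]."""
    k = len(c)
    prod = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += a * b
    # reduce top-down using x^k = c[0]*x^(k-1) + ... + c[k-1]
    for d in range(len(prod) - 1, k - 1, -1):
        coeff = prod[d]
        prod[d] = 0
        for j in range(k):
            prod[d - 1 - j] += coeff * c[j]
    return prod[:k]

def linrec(n, c, init):
    """n-th term of T(n) = c[0]*T(n-1) + ... + c[k-1]*T(n-k), k >= 2,
    with initial terms init = [T(0), ..., T(k-1)]."""
    k = len(c)
    if n < k:
        return init[n]
    acc = [1] + [0] * (k - 1)       # the constant polynomial 1
    base = [0, 1] + [0] * (k - 2)   # the polynomial x
    while n > 0:
        if n % 2 == 1:
            acc = poly_mul_mod(acc, base, c)
        base = poly_mul_mod(base, base, c)
        n //= 2
    # acc now holds x^n mod the characteristic polynomial;
    # T(n) is the same linear combination of the initial terms
    return sum(a * t for a, t in zip(acc, init))
```

With c = [1, 1] and init = [0, 1] this is Fibonacci; with c = [1, 1, 1] and init = [0, 1, 1] it is the tribonacci sequence, with no change to the squaring loop.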

~~~
ColinWright
Right - that helps. Thanks for the clarification - very helpful.

------
nefreat
SICP shows us how to do this in exercise 1.19.
[http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html...](http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html#%_sec_1.2.4)

In scheme:

    
    
      (define (square x) (* x x))
      (define (fib n)
        (fib-iter 1 0 0 1 n))
      (define (fib-iter a b p q count)
        (cond ((= count 0) b)
              ((even? count)
               (fib-iter a
                         b
                         (+ (square p) (square q))      ; compute p'
                         (+ (* 2 p q) (square q))       ; compute q'
                         (/ count 2)))
              (else (fib-iter (+ (* b q) (* a q) (* a p))
                              (+ (* b p) (* a q))
                              p
                              q
                              (- count 1)))))

------
pmiller2
What, no discussion of Fibonacci-Lucas identities? :)

See my ancient post [http://pythonista.wordpress.com/2008/07/03/pure-python-fibon...](http://pythonista.wordpress.com/2008/07/03/pure-python-fibonacci-numbers/)

Pretty graph here:
[http://stevekrenzel.com/articles/fibonacci](http://stevekrenzel.com/articles/fibonacci)
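For readers who don't follow the link, here is a small sketch (mine, not taken from the linked post) of the standard Fibonacci-Lucas doubling identities F(2n) = F(n)*L(n) and L(2n) = L(n)^2 - 2*(-1)^n, which give another O(log n) algorithm:

```python
def fib_lucas(n):
    """Return (F(n), L(n)) for n >= 0 using the doubling identities
    F(2m) = F(m)*L(m) and L(2m) = L(m)^2 - 2*(-1)^m."""
    if n == 0:
        return 0, 2                       # F(0) = 0, L(0) = 2
    f, l = fib_lucas(n // 2)
    sign = -1 if (n // 2) % 2 else 1      # (-1)^(n//2)
    f2, l2 = f * l, l * l - 2 * sign      # values at index 2*(n//2)
    if n % 2:
        # step one index: F(m+1) = (F(m) + L(m)) / 2, L(m+1) = (5*F(m) + L(m)) / 2
        return (f2 + l2) // 2, (5 * f2 + l2) // 2
    return f2, l2
```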

------
arocks
This is a very important rebuttal to the “The Mathematical Hacker” article
because it shows that maths is useful but should not be taken too literally.
Even Knuth uses the assembly language of a hypothetical machine in his Art of
Computer Programming books, because any discussion of optimisation is
incomplete without the context of which machine we are talking about.

~~~
ska
I really don't understand this sort of response.

The right way to understand the behaviour of computations on a real machine is
inherently mathematical. This is why, for example, we have an entire
discipline of numerical analysis, just for starters.

Programmers very commonly make elementary mistakes in computation precisely
because they don't understand this, and don't approach the problems
mathematically. Ignoring overflow, or pretending floats have infinite
precision is antithetical to thinking about these problems mathematically.
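As a small illustration of that pitfall (my sketch, not from the thread): the closed-form (Binet) formula evaluated in double precision starts disagreeing with the exact integer recurrence once F(n) outgrows the 53-bit float mantissa.

```python
import math

def fib_binet(n):
    """F(n) via the closed form, evaluated in floating point."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    return round(phi ** n / sqrt5)

def fib_exact(n):
    """F(n) via the exact integer recurrence."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

The two agree for small n, but somewhere below n = 100 the float version silently goes wrong, since F(n) by then has more significant digits than a double can hold.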

Conversely, any mathematician worth their salt will approach computation on a
computer not as a black box, but as a concrete implementation. Mathematicians
make lots of programming errors (some of them quite egregious and easily
avoidable). Misunderstanding floating point representations or the overflow
behaviour don't tend to be among them in my experience.

By the way: Knuth used a hypothetical machine to avoid the quirks of a
concrete one distracting from the discussion.

~~~
arocks
My response can be stated very simply as the well-known adage: profile before
optimizing.

I am not completely discounting numerical analysis here. Mathematicians can
build entire mathematical models of a machine, but they will be at a certain
level of abstraction. In practice there will be abstraction leaks due to
quirks of the implementation or real-world dependencies.

So one should not merely depend on the mathematical model.

~~~
ska
A more important and often overlooked adage: make it correct before you try
to make it fast.

My confusion at your response (and others'; not meaning to pick on this one)
is more fundamental than that. Thinking about these things is in no way
divorced from thinking about them mathematically.

To me "Merely depending on a mathematical model" isn't a problem that actually
happens much. Failing to think about these things, on the other hand, very
much is.

