Fibonacci Numbers in the Real World (lee-phillips.org)
28 points by ColinWright on Dec 23, 2013 | 18 comments



The "right" way to generate Fibonacci numbers is not via the closed form, but by exponentiation of the matrix A = [0 1; 1 1] that defines Fibonacci numbers as an element of its endomorphism ring Z[x]/(x^2 - x - 1). This doesn't rely on explicitly computing the eigenvalues of A and scales well to more complex recursions. Sample Haskell:

  -- Represent a*A + b*I by the pair (a, b).  pmul multiplies two such
  -- elements using A^2 = A + 1; the (a + b)*(c + d) grouping saves a
  -- multiplication.  Since A^n = F_n*A + F_(n-1)*I, fst gives F_n.
  fib :: Integer -> Integer
  fib n = fst . loop (0, 1) base . abs $ n
    where pmul (a, b) (c, d) = ((a + b)*(c + d) - b*d, b*d + a*c)
          base = if n >= 0 then (1, 0) else (1, -1)  -- A, or A^(-1) = A - I
          loop acc sq m
            | m == 0         = acc
            | m `mod` 2 == 0 = loop acc (pmul sq sq) (m `div` 2)
            | otherwise      = loop (pmul sq acc) (pmul sq sq) (m `div` 2)


Indeed, this is covered in one of the references in my article, though the article itself didn't actually take up the question of the "best" or "right" way to calculate the numbers.


Which reference?


The last one ([11]) to Nayuki Minase's article, where he lists several algorithms, including matrix exponentiation.


Thank you for the reference. The algorithm in the parent comment is not actually matrix exponentiation but the "fast doubling algorithm" in Minase's terminology, where you exponentiate the matrix algebraically using the Cayley-Hamilton theorem rather than using matrix multiplication.


You have me confused. The usual "Russian Peasant Multiplication" can be adapted to produce fast exponentiation, or "exponentiation by squaring". Your code appears to be doing exactly that.

Further, that is exactly what is described in reference [11], just as leephillips says. There it is taken a step further and specialized to the calculation of the Fibonacci numbers.

So the referenced paper seems to cover what you have said, and go even further. To that end, I'm confused as to what you are saying. Are you agreeing that the paper does what your code does? Or are you disagreeing, in which case, could you be more explicit?

Thanks.


You are correct that this code uses "Russian Peasant Multiplication." The question is: what is being exponentiated? One choice is the matrix A = [0 1; 1 1], whose powers are A^n = [F_{n-1} F_n; F_n F_{n+1}] if we compute those powers treating A as a matrix. This involves some redundant computation, since F_n is computed twice.
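
Concretely, a minimal Python sketch of that matrix version (illustration and names mine):

  def fib_matrix(n):
    # Compute F_n by repeated squaring of A = [[0, 1], [1, 1]].
    # Each 2x2 product costs 8 multiplications, and F_n shows up in two
    # entries of A^n, which is the redundancy just mentioned.
    def mul(X, Y):
      return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
              [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]
    acc, sq = [[1, 0], [0, 1]], [[0, 1], [1, 1]]  # identity and A
    while n > 0:
      if n % 2 == 1:
        acc = mul(acc, sq)
      sq = mul(sq, sq)
      n //= 2
    return acc[0][1]  # A^n = [F_{n-1} F_n; F_n F_{n+1}]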

An alternative method is to use the following identity:

  (aA + b)(cA + d) = acA^2 + (ad + bc)A + bd
                   = (ad + bc + ac)A + (bd + ac)        [using A^2 = A + 1]
                   = ((a + b)(c + d) - bd)A + (bd + ac)

(In fact, A^n = F_n A + F_{n-1} I_2.) This product requires 3 multiplications and 4 additions, and is more-or-less equivalent to the "fast doubling" algorithm in reference [11], which is more optimized. But the derivation is completely generic: it relies only on A satisfying its characteristic polynomial, A^2 - A - 1 = 0.

In the case where A is not the Fibonacci matrix but something much larger, working with polynomials modulo the characteristic polynomial of A is much faster than working with A as a matrix.
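
To make that concrete, here is a sketch of the generic technique (mine; the function name is made up, and the method is sometimes called the Kitamasa method): exponentiate the polynomial x modulo the characteristic polynomial, then combine the resulting coefficients with the initial terms.

  def linrec(coeffs, init, n):
    # coeffs = [c_1, ..., c_k] for a_m = c_1*a_{m-1} + ... + c_k*a_{m-k}
    # init   = [a_0, ..., a_{k-1}]; assumes k >= 2
    # Computes x^n mod (x^k - c_1*x^(k-1) - ... - c_k); then
    # a_n = sum(d_i * a_i) over the reduced coefficients d_i.
    k = len(coeffs)
    def mulmod(p, q):
      r = [0] * (2*k - 1)
      for i, pi in enumerate(p):
        for j, qj in enumerate(q):
          r[i + j] += pi * qj
      for i in range(2*k - 2, k - 1, -1):  # rewrite x^i via the recurrence
        for j in range(1, k + 1):
          r[i - j] += coeffs[j - 1] * r[i]
        r[i] = 0
      return r[:k]
    acc = [1] + [0]*(k - 1)     # the polynomial 1
    sq  = [0, 1] + [0]*(k - 2)  # the polynomial x
    while n > 0:
      if n % 2 == 1:
        acc = mulmod(acc, sq)
      sq = mulmod(sq, sq)
      n //= 2
    return sum(d * a for d, a in zip(acc, init))

Here linrec([1, 1], [0, 1], n) reproduces fib; each product is O(k^2) coefficient operations instead of the O(k^3) of a naive k-by-k matrix product.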


Right - that helps. Thanks for the clarification - very helpful.


The O(log N) algorithm, in Python:

  def fib(n):
    # (a, b) and (x, y) stand for a*I + b*A and x*I + y*A, A = [0 1; 1 1].
    # Both start as A, so the loop computes A * A^n = A^(n+1) = F_n*I + F_(n+1)*A.
    a,b,x,y = 0,1,0,1
    while n != 0:
      if n % 2 == 0:
        a,b = a * a + b * b, 2 * a * b + b * b        # square the base
        n = n // 2
      else:
        x,y = a * x + b * y, b * x + a * y + b * y    # multiply into the accumulator
        n -= 1
    return x                                          # coefficient of I in A^(n+1) = F_n


Could you expand "defines Fibonacci numbers as an element of its endomorphism ring Z[x]/(x^2 - x - 1)" in a few more words? Are you saying that Fibonacci numbers are themselves elements of its endomorphism ring (despite not obviously being morphisms)? What does "Z[x]" mean (I have a feeling it is mathematical notation forced into plain text)?

Thanks.


I explained this in a reply below, but this is a linear-algebraic technique. The matrix A = [0 1; 1 1] satisfies the relation A^2 - A - 1 = 0. (In fact, if you think of A as a lag operator, this is the definition of Fibonacci numbers!) You can use this relation to compute products of matrices of the form aA + bI where I is the identity matrix---the comment below has the derivation. This is a bit faster than using matrix multiplication because there are only two coefficients to keep track of.

This might seem like a pointless optimization, but it is actually useful. One advantage is that you don't need to factor the polynomial x^2 - x - 1 or work with square roots. The closed form for Fibonacci numbers is a bit of a red herring, because to work with it exactly you end up doing the same kinds of computations anyway. For recurrences of higher degree, factoring the characteristic polynomial into linear factors over the rationals might be impossible, and this is the only route.
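
To see why it's a red herring, try to use the closed form exactly: phi = (1 + sqrt 5)/2 satisfies phi^2 = phi + 1, so exact arithmetic in Z[phi] is the same pair arithmetic again, just in the basis (1, phi) instead of (I, A). A quick sketch (mine):

  def phi_pow(n):
    # Represent a + b*phi as (a, b); since phi^2 = phi + 1,
    # (a + b*phi)(c + d*phi) = (ac + bd) + (ad + bc + bd)*phi.
    # (pmul above computes the same product with one fewer multiplication.)
    acc, base = (1, 0), (0, 1)  # 1 and phi
    while n > 0:
      if n % 2 == 1:
        (a, b), (c, d) = acc, base
        acc = (a*c + b*d, a*d + b*c + b*d)
      c, d = base
      base = (c*c + d*d, 2*c*d + d*d)
      n //= 2
    return acc  # phi^n = F_{n-1} + F_n*phi

So the "closed form", evaluated exactly, hands you the Fibonacci recurrence right back.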

When you're working over a finite field, even if you don't start with a recurrence, there are fast techniques for finding factors of the characteristic polynomial of a sparse matrix (the Berlekamp-Massey algorithm comes to mind).


SICP shows us how to do this in exercise 1.19. http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html...

In scheme:

  (define (square x) (* x x))
  (define (fib n)
    (fib-iter 1 0 0 1 n))
  (define (fib-iter a b p q count)
    (cond ((= count 0) b)
          ((even? count)
           (fib-iter a
                     b
                     (+ (square p) (square q))      ; compute p'
                     (+ (* 2 p q) (square q))       ; compute q'
                     (/ count 2)))
          (else (fib-iter (+ (* b q) (* a q) (* a p))
                          (+ (* b p) (* a q))
                          p
                          q
                          (- count 1)))))


What, no discussion of Fibonacci-Lucas identities? :)

See my ancient post http://pythonista.wordpress.com/2008/07/03/pure-python-fibon...

Pretty graph here: http://stevekrenzel.com/articles/fibonacci
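
For anyone not clicking through, the core trick (a minimal sketch of my own, using the standard identities F_2k = F_k*L_k and L_2k = L_k^2 - 2*(-1)^k, which I believe the post relies on or something close to them) is to carry the pair (F_k, L_k):

  def fib_lucas(n):
    # Returns (F_n, L_n) for n >= 1 by walking the bits of n.
    # Doubling: F_2k = F_k*L_k,  L_2k = L_k^2 - 2*(-1)^k
    # Stepping: F_(k+1) = (F_k + L_k)/2,  L_(k+1) = (5*F_k + L_k)/2
    f, l, k = 1, 1, 1        # (F_1, L_1)
    for bit in bin(n)[3:]:   # bits of n after the leading 1
      f, l = f*l, l*l - 2*(-1 if k % 2 else 1)
      k *= 2
      if bit == '1':
        f, l = (f + l)//2, (5*f + l)//2
        k += 1
    return f, l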


This is a very important rebuttal to "The Mathematical Hacker" because it shows that maths is useful but should not be taken too literally. Even Knuth uses the assembly language of a hypothetical machine in his Art of Computer Programming books, because any discussion of optimisation is incomplete without the context of which machine we are talking about.


I really don't understand this sort of response.

The right way to understand the behaviour of computations on a real machine is inherently mathematical. This is why, for example, we have an entire discipline of numerical analysis, just for starters.

Programmers very commonly make elementary mistakes in computation precisely because they don't understand this, and don't approach the problems mathematically. Ignoring overflow, or pretending floats have infinite precision, is antithetical to that kind of thinking.
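
A concrete instance (a quick check of my own, not from the article): the double-precision Binet formula round(phi**n / sqrt(5)) silently starts returning wrong integers once the accumulated rounding error in phi**n reaches 1/2. The loop below finds the first failure, typically around n = 71 for IEEE-754 doubles:

  import math

  def fib_exact(n):
    a, b = 0, 1
    for _ in range(n):
      a, b = b, a + b
    return a

  phi, sqrt5 = (1 + math.sqrt(5)) / 2, math.sqrt(5)
  n = next(n for n in range(200) if round(phi**n / sqrt5) != fib_exact(n))
  print("floating-point Binet first fails at n =", n)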

Conversely, any mathematician worth their salt will approach computation on a computer not as a black box, but as a concrete implementation. Mathematicians make lots of programming errors (some of them quite egregious and easily avoidable). Misunderstanding floating point representations or overflow behaviour doesn't tend to be among them, in my experience.

By the way: Knuth used a hypothetical machine to avoid the quirks of a concrete one distracting from the discussion.


My response can be stated very simply as the well-known adage: Profile Before Optimizing.

I am not completely discounting numerical analysis here. Mathematicians can make entire mathematical models of a machine, but any such model sits at a certain level of abstraction. In practice there will be abstraction leaks due to quirks of the implementation or real-world dependencies.

So one should not merely depend on the mathematical model.


A more important and often overlooked adage: make it correct before you try to make it fast.

My confusion at your response (and others'; not meaning to pick on this one) is more fundamental than that. There is no way in which thinking about these things is somehow divorced from thinking about them mathematically.

To me "Merely depending on a mathematical model" isn't a problem that actually happens much. Failing to think about these things, on the other hand, very much is.


>My response can be very simply stated as the well-known adage - Profile Before Optimizing

Well, profiling IS also math.

If anything, it would be "optimizing" which is programming proper.



