    fib :: Integer -> Integer
    fib n = fst . loop (0, 1) base . abs $ n
      where
        pmul (a, b) (c, d) = ((a + b)*(c + d) - b*d, b*d + a*c)
        base = if n >= 0 then (1, 0) else (1, -1)
        loop acc sq m
          | m == 0         = acc
          | m `mod` 2 == 0 = loop acc (pmul sq sq) (m `div` 2)
          | otherwise      = loop (pmul sq acc) (pmul sq sq) (m `div` 2)
Further, that is exactly what is quoted in the reference, just as leephillips says. There it is taken a step further and specialized to the calculation of the Fibonacci numbers.
So the referenced paper seems to cover what you have said, and to go even further. To that end, I'm confused about what you are saying. Are you agreeing that the paper does what your code does? Or are you disagreeing, in which case, could you be more explicit?
An alternative method is to use the following identity, reducing with A^2 = A + 1 (A being a root of the characteristic polynomial x^2 - x - 1):

    (aA + b)(cA + d) = acA^2 + (ad + bc)A + bd
                     = (ad + bc + ac)A + (bd + ac)
                     = ((a + b)(c + d) - bd)A + (bd + ac)
In the case where A is not the Fibonacci matrix but something much larger, working with polynomials modulo the characteristic polynomial of A is much faster than working with A as a matrix.
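As an illustration, here is a small Python sketch (names are mine, not from the thread) of that technique for the Fibonacci case: raise the polynomial x to the n-th power modulo x^2 - x - 1 by square-and-multiply, using exactly the identity above.

```python
def polmul(p, q):
    # Multiply (a + b*x) * (c + d*x) modulo x^2 - x - 1,
    # i.e. reduce with x^2 = x + 1. Pairs are (constant, x-coefficient).
    a, b = p
    c, d = q
    # (a + b*x)(c + d*x) = ac + (ad + bc)*x + bd*x^2
    #                    = (ac + bd) + (ad + bc + bd)*x
    return (a * c + b * d, a * d + b * c + b * d)

def fib_poly(n):
    # Compute x^n mod (x^2 - x - 1) by square-and-multiply;
    # x^n reduces to F(n-1) + F(n)*x, so the x-coefficient is F(n).
    result = (1, 0)  # the constant polynomial 1
    base = (0, 1)    # the polynomial x
    while n > 0:
        if n % 2 == 1:
            result = polmul(result, base)
        base = polmul(base, base)
        n //= 2
    return result[1]
```

For a recurrence of degree k the same idea applies: work with polynomials of degree below k modulo the characteristic polynomial, and each step costs O(k^2) coefficient multiplications rather than the O(k^3) of naive matrix multiplication.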
    a, b, x, y = 0, 1, 0, 1
    while n != 0:
        if n % 2 == 0:
            a, b = a * a + b * b, 2 * a * b + b * b
            n = n // 2
        else:
            x, y = a * x + b * y, b * x + a * y + b * y
            n -= 1
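For reference, here is that loop wrapped into a self-contained function (the wrapper and comments are mine; I'm assuming n is a nonnegative integer, and reading the pairs as coefficients of 1 and phi, with x holding F(n) once the loop finishes):

```python
def fib(n):
    # (a, b) and (x, y) each encode a + b*phi;
    # the base starts at phi and so does the accumulator.
    a, b, x, y = 0, 1, 0, 1
    while n != 0:
        if n % 2 == 0:
            # square the base:
            # (a + b*phi)^2 = (a^2 + b^2) + (2ab + b^2)*phi
            a, b = a * a + b * b, 2 * a * b + b * b
            n = n // 2
        else:
            # multiply the accumulator by the base
            x, y = a * x + b * y, b * x + a * y + b * y
            n -= 1
    return x  # x is F(n) for the original n
```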
This might seem like a pointless optimization, but it is actually useful. One advantage is that you don't need to factor the polynomial x^2 - x - 1 or work with square roots. The closed form for Fibonacci numbers is a bit of a red herring, because to work with it exactly you end up doing the same kinds of computations anyway. For recurrences of higher degree, factorization might be impossible, and this is the only route.
When you're working over a finite field, even if you don't start with a recurrence, there are fast techniques for finding the minimal polynomial of a sparse matrix, which divides its characteristic polynomial (the Berlekamp-Massey algorithm comes to mind).
    (define (square x) (* x x))

    (define (fib n)
      (fib-iter 1 0 0 1 n))

    (define (fib-iter a b p q count)
      (cond ((= count 0) b)
            ((even? count)
             (fib-iter a
                       b
                       (+ (square p) (square q)) ; compute p'
                       (+ (* 2 p q) (square q))  ; compute q'
                       (/ count 2)))
            (else (fib-iter (+ (* b q) (* a q) (* a p))
                            (+ (* b p) (* a q))
                            p
                            q
                            (- count 1)))))
See my ancient post http://pythonista.wordpress.com/2008/07/03/pure-python-fibon...
Pretty graph here: http://stevekrenzel.com/articles/fibonacci
The right way to understand the behaviour of computations on a real machine is inherently mathematical. This is why, for example, we have an entire discipline of numerical analysis, just for starters.
Programmers very commonly make elementary mistakes in computation precisely because they don't understand this, and don't approach the problems mathematically. Ignoring overflow, or pretending floats have infinite precision is antithetical to thinking about these problems mathematically.
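To make the float-precision point concrete, here is a quick Python check (my example, not from the thread): Binet's closed form evaluated naively in double precision silently diverges from the exact Fibonacci sequence once F(n) outgrows a double's 53-bit significand.

```python
from math import sqrt

def fib_float(n):
    # Binet's closed form, "pretending floats have infinite precision"
    phi = (1 + sqrt(5)) / 2
    return round(phi ** n / sqrt(5))

def fib_exact(n):
    # straightforward exact integer recurrence
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Find the first n where the float shortcut disagrees with the exact
# value; F(n) exceeds 2**53 well before n = 100, so a mismatch is
# guaranteed somewhere in this range.
first_bad = next(n for n in range(200) if fib_float(n) != fib_exact(n))
```

The failure is silent: `fib_float` keeps returning plausible-looking integers, they just stop being Fibonacci numbers.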
Conversely, any mathematician worth their salt will approach computation on a computer not as a black box but as a concrete implementation. Mathematicians make lots of programming errors (some of them quite egregious and easily avoidable). Misunderstanding floating-point representations or overflow behaviour doesn't tend to be among them, in my experience.
By the way: Knuth used a hypothetical machine to avoid the quirks of a concrete one distracting from the discussion.
I am not completely discounting numerical analysis here. Mathematicians can make entire mathematical models of a machine, but these will be at a certain level of abstraction. In practice there will be abstraction leaks due to quirks of the implementation or real-world dependencies.
So one should not merely depend on the mathematical model.
My confusion at your response (and others'; not meaning to pick on this one) is more fundamental than that. There is no way in which thinking about these things is divorced from thinking about them mathematically.
To me, "merely depending on a mathematical model" isn't a problem that actually happens much. Failing to think about these things, on the other hand, very much is.
Well, profiling IS also math.
If anything, it would be "optimizing", which is programming proper.