Fibonacci series and Dynamic programming
38 points by aditgupta on June 17, 2013 | 28 comments

The fastest way to calculate the Fibonacci sequence (with integer math) is a simple loop, just like you'd do it on paper. This version is slower and much more complicated. With so many relevant good examples of recursion and dynamic programming, it boggles my mind to see the Fibonacci sequence used as an example for either.

```
def fib(n):
    a, b = 0, 1
    for k in range(n):
        a, b = b, a + b
    return a
```

Your method computes fib(n) in O(n) arithmetic operations. There's a way to do it in O(log n) arithmetic operations on integers of the same size. First note that you can obtain the two-element vector (fib(n), fib(n+1)) by applying the matrix A = ((0,1),(1,1)) to the vector (fib(n-1), fib(n)). So to get from (fib(0), fib(1)) to (fib(n), fib(n+1)), you need to raise the matrix A to the nth power. That can be done in O(log n) arithmetic operations by using repeated squaring: http://en.wikipedia.org/wiki/Exponentiation_by_squaring

Here's an O(log n) solution (from http://stackoverflow.com/questions/1525521/nth-fibonacci-num...). It is ~20 times faster for n = 10000:

```
def fib_sicp(n):
    a, b, p, q = 1, 0, 0, 1
    while n > 0:
        if n % 2 == 0:  # even: square the transformation
            oldp = p
            p = p*p + q*q
            q = 2*oldp*q + q*q
            n //= 2
        else:           # odd: apply the transformation once
            olda = a
            a = b*q + a*q + a*p
            b = b*p + olda*q
            n -= 1
    return b
```

Cool -- did not know about that. Wish I could edit my comment.

The Fibonacci sequence is (IMO) one of the easiest-to-understand examples of dynamic programming, so it makes for a good introduction to the concept. Furthermore, the dynamic-programming implementation is better if you're going to call it multiple times (depending on the context, of course -- if you want the first n numbers, the linear version can be adapted to be even better). The looping version could keep a static array around to eliminate that difference.
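The repeated-squaring idea above can also be written without explicit matrices, using the fast-doubling identities fib(2k) = fib(k)·(2·fib(k+1) - fib(k)) and fib(2k+1) = fib(k)² + fib(k+1)². A minimal Python sketch (the function name is mine, not from the thread):

```python
def fib_fast_doubling(n):
    """Return the pair (fib(n), fib(n+1)) in O(log n) arithmetic steps."""
    if n == 0:
        return (0, 1)
    a, b = fib_fast_doubling(n // 2)  # a = fib(k), b = fib(k+1)
    c = a * (2 * b - a)               # fib(2k)
    d = a * a + b * b                 # fib(2k+1)
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)

print(fib_fast_doubling(100)[0])  # 354224848179261915075
```

Like the SICP version, it halves n at each even step, so the number of arithmetic operations is logarithmic in n.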
I think Fibonacci is a bad example because the student will wonder why they are going to the trouble of dynamic programming when it doesn't save any work. A better example would be the recursive computation of N choose K -- because it avoids overflow from computing the factorial-based formula directly -- or matrix chain multiplication (http://en.wikipedia.org/wiki/Dynamic_programming#Matrix_chai...).

Yes -- an iterative loop is much faster, because the naive recursion recomputes the same small Fibonacci numbers over and over.

I can't resist a quick plug for this beautiful O(n) implementation of the Fibonacci sequence in Haskell. One line defines the list `fibs` -- an infinite list of Fibonacci numbers -- and one line defines the function `fib`, which simply takes the n-th element of the list.

```
λ> let fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
λ> let fib n = fibs !! n
λ> fib 100
354224848179261915075
```

I really liked the Haskell version when I first learned about it in a programming-languages course. I believe I've never been more surprised than after seeing the output of `take 10 fibs` without knowing how that single line worked.

Correct me if I'm wrong, but this is also O(n) in space, right? I think you can get a similar Scala version in:

```
lazy val fib: Stream[BigInt] = 0 #:: 1 #:: fib.zip(fib.tail).map(x => x._1 + x._2)
```

This is neat and all, but the imperative version wins by being O(1) in space. Right?

No, the Haskell version will operate in constant space provided you don't hold onto the list:

```
let fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
in fibs !! 100
```

Since the fibs list is visible only to the expression (fibs !! 100), and since the (!!) operator advances through the list discarding elements until it hits the 100th, those elements are free to be garbage collected. Further, since the elements of the list (after the first two) are generated only when consumed, only a few of the cons cells will ever be live at any point in time.
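The same "infinite stream, constant live state" idea behind the lazy list can be mirrored in Python with a generator -- a sketch of my own, not from the thread, where only the last two values are ever retained:

```python
from itertools import islice

def fibs():
    """Infinite Fibonacci stream; only the last two values are live at once."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(list(islice(fibs(), 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

As with the Haskell version, nothing holds onto the consumed prefix, so memory use stays constant no matter how far you advance the stream.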
That makes sense, and I would think it should be true in the Scala version as well. I did not know if it was, though. I guess I would have to spend a lot more time digging into the details of the environment created to really see where the head of the list is dropped from scope. Specifically, I would expect space to grow at O(n), but that it could be garbage collected quickly. The question is, how quickly would the garbage collector do its thing? Or are you saying that is dodged in the Haskell version as well?

It is perfectly possible to write a fib function that takes O(1) space functionally too, but of course it'd have a worse time complexity due to the lack of memoisation.

Worse than O(n) for time complexity? It seems it should be easily doable to transcribe the straightforward iterative solution to a tail-recursive one. Something like:

```
(defun fib-help (a b curX desiredX)
  (if (= curX desiredX)
      b
      (fib-help b (+ a b) (+ curX 1) desiredX)))

(defun fib (x)
  (fib-help 0 1 1 x))
```

(No, I don't really know Lisp that well... learning.)

Perhaps surprisingly, you can convert many recursive algorithms into equivalent iterative versions using a series of simple mechanical steps. For example, if we start with the naive recursive implementation of our beloved fib function,

```
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

we can arrive at the iterative (dynamic-programming) version that follows,

```
def fib(n):
    fibn1, fibn2 = (0, 1)
    for _ in xrange(n):
        fibn1, fibn2 = fibn2, fibn1 + fibn2
    return fibn1
```

through the following steps: https://gist.github.com/tmoertel/5798134

If you're interested in this kind of thing, I'm writing a series of articles on converting recursive algorithms into iterative algorithms. The initial article: http://blog.moertel.com/posts/2013-05-11-recursive-to-iterat...

Edit: Doh -- I'm a moron! You're using a recurrence relation, which has the nice property of never computing the same sub-solution twice.
I'll leave this here as a reminder to think before I nitpick next time.

Original: Apologies to nitpick, but your iterative Python solution isn't DP. DP hinges on the idea of never computing the answer to the same problem twice. For it to be DP, it would need to memoize subproblem solutions and refer to the solution cache rather than recompute the solution to every subproblem in the tree every time. This solution fits the bill, although it can probably be written a bit more elegantly:

```
fibs = {0: 1, 1: 1}

def fib(n):
    global fibs
    stack = []
    for a in xrange(n, 0, -1):
        if a in fibs:
            stack.reverse()
            for item in stack:
                fibs[item] = fibs[a] + fibs[a - 1]
                a = item
            break
        else:
            stack.append(a)
    return fibs[n]
```

Here is the Fibonacci sequence calculated by a literal series of mechanical steps: http://www.chrisfenton.com/the-turbo-entabulator/

Or you could apply some math and come up with a generating function!

Another math-y option is to note that the Fibonacci series satisfies the linear recurrence F[n+2] = F[n+1] + F[n] and can therefore be represented in matrix form. Here's a (trimmed down) Maxima session that gives the details:

```
$ maxima
(%i3) load(eigen)$
(%i4) A: matrix([1, 1], [1, 0])$
(%i5) b: columnvector([1, 0])$
(%i6) fib(n) := (''A^^n . ''b);
                         [ 1  1 ] <n>    [ 1 ]
(%o6)        fib(n) := ([        ]    . [    ])
                         [ 1  0 ]        [ 0 ]
(%i7) makelist(fib(n), n, 10);
(%o7) [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

Since the transition matrix A is symmetric, it can be diagonalized using its eigenvector matrix:

```
(%i8) similaritytransform(A)$
(%i9) Adiag: expand(leftmatrix . A . rightmatrix)$
```

And the n-th power of a diagonal matrix is the same as raising the values on its diagonal to the n-th power:

```
(%i10) AdiagN(n) := matrix([''(Adiag[1,1])^n, 0], [0, ''(Adiag[2,2])^n]);
```

And thus raising A to the n-th power is the same as raising its diagonal counterpart Adiag to that power and then reversing the diagonalization transform (since leftmatrix and rightmatrix are inverses and cancel out for the interior multiplications):

```
(%i11) fibdiag(n) := expand(rightmatrix . AdiagN(n) . leftmatrix . b)$
```

Finally, this leads to a closed-form representation for the n-th Fibonacci number:

```
(%i12) fibdiag(n);
            sqrt(5)   1 n      1   sqrt(5) n
           (------- + -)      (- - -------)
               2      2        2      2
(%o12)     --------------  -  --------------
              sqrt(5)            sqrt(5)
```

Very neat, I need to learn more math to truly understand how this works!

In that case, pick up a good book on linear algebra. Gilbert Strang's Introduction to Linear Algebra is inexpensive and solid on the fundamentals, and there are related lectures from Strang's MIT course on the subject. Jim Hefferon's Linear Algebra is also good, and you can download it for free (GFDL). It takes a different approach to Strang's book. The two work well together for self-study, since for most subjects you can see two different approaches. Also, for this particular problem, see the "Topic: Linear Recurrences" chapter of Hefferon's book. It actually uses the Fibonacci series as its example (this example seems popular with authors of textbooks and lecture notes).

And, if you want to know more about generating functions, you can check out the book "generatingfunctionology" by the recently late Herbert Wilf, gratis, at http://www.math.upenn.edu/~wilf/DownldGF.html

Another great resource on generating functions is Analytic Combinatorics by Philippe Flajolet and Robert Sedgewick.

Perl 6 (from https://justrakudoit.wordpress.com/2010/12/29/perl-6-fibonac...):

```
> my @fib := 1, 1, *+* ... *; @fib
354224848179261915075
> (1, 1, *+* ... *)
354224848179261915075
```

An O(n) Scala one: https://github.com/pathikrit/scalgos/blob/master/src/main/sc...

Reminds me awfully much of the blog post below. In Clojure you can implement this as:

```
(def fib-list
  (map first (iterate (fn [[a b]] [b (+ a b)]) [0N 1N])))

(nth fib-list 100)
354224848179261915075N
```

Doesn't Clojure have a memoize that would be useful here?
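To the last question: Clojure's core library does provide `memoize`. The same caching idea can be sketched in Python 3 with functools.lru_cache (the thread's Python snippets are Python 2; this is my own sketch of the memoized-recursion approach, not code from the thread):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursion plus a cache: each subproblem is solved exactly once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075
```

The cache turns the exponential naive recursion into O(n) work, at the cost of O(n) space for the cached values (and, for large n, Python's recursion-depth limit still applies).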