
The Nth Fibonacci Number in O(log N) - skazka16
http://kukuruku.co/hub/algorithms/the-nth-fibonacci-number-in-olog-n
======
cperciva
It's easy to compute things quickly if you pretend that you can do arbitrary-
precision arithmetic in constant time. Want a quadrillion digits of Pi? Easy,
just use the Brent–Salamin AGM algorithm and you'll get them in about 500
arithmetic operations. NP-complete problems? No problem at all, rephrase them
as subset-sum problems and use the polynomial-integer-operations dynamic
programming algorithm. I don't _think_ you can solve PSPACE in a polynomial
number of integer operations, but I can't see how to prove it immediately.

In "real" terms, computing the Nth Fibonacci number takes O(M(N)) = O(N log N
2^(O(log* N))) bit operations.
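One way to hit that bound in practice is the fast-doubling recurrence; a
minimal Python sketch (exact integers, so each step is a bignum multiply):

```python
def fib(n):
    # Fast doubling identities:
    #   F(2k)   = F(k) * (2*F(k+1) - F(k))
    #   F(2k+1) = F(k)^2 + F(k+1)^2
    def pair(k):
        # returns the pair (F(k), F(k+1))
        if k == 0:
            return (0, 1)
        a, b = pair(k >> 1)
        c = a * (2 * b - a)      # F(2j) where j = k >> 1
        d = a * a + b * b        # F(2j+1)
        return (d, c + d) if k & 1 else (c, d)
    return pair(n)[0]
```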

~~~
Spiritus
Out of curiosity, what line of work are you in given that you can seemingly
pull this stuff from memory? All I read was "lorem ipsum..." until you got to
Big O.

~~~
phireal
It's Colin Percival: author of tarsnap, former FreeBSD security officer, and
considered something of an authority on cryptography. I presume his maths
skills are therefore suitably well polished to be able to pull this stuff from
memory.

~~~
antics
I want to dispel the rumor that this is magic. This is standard material in
undergrad algorithms classes. See, for example, Jeff Erickson's algorithms
notes here[1]. It's literally the first page of the chapter on dynamic
programming.

[1]
[http://web.engr.illinois.edu/~jeffe/teaching/algorithms/note...](http://web.engr.illinois.edu/~jeffe/teaching/algorithms/notes/05-dynprog.pdf)

~~~
raverbashing
Yeah, but "kids these days" only want to know Java or Angular.js and bawl at
the first sight of math.

~~~
switz
Psh, who wants to learn Java?

~~~
lotyrin
I think the grandparent's choice of "know" over "learn" is important there:
they were probably made to _learn_ it for coursework, and they want to be
allowed to keep only _knowing_ that much rather than learning further.

------
sillysaurus3
In Python:

    
    
        import math
        import sys
        
        def fib(n):
            r5 = math.sqrt(5)
            x = pow((1 + r5) / 2, n)
            x -= pow((1 - r5) / 2, n)
            return x / r5
        
        if __name__ == '__main__':
            print(fib(int(sys.argv[1])))
    
      $ python fib.py 4
      3.0
    
      $ python fib.py 5
      5.0
    
      $ python fib.py 50
      12586269025.0
    
      $ python fib.py 5000
      OverflowError: (34, 'Result too large')
    

So we add arbitrary precision. Uglier, but supports any n:

    
    
        import sys
        from math import sqrt
        from decimal import Decimal
        
        def fib(n):
            r5 = Decimal(sqrt(5))
            x = pow((1 + r5) / 2, n)
            x -= pow((1 - r5) / 2, n)
            return x / r5
        
        if __name__ == '__main__':
            print(fib(int(sys.argv[1])))
    
      $ python fib.py 4
      3.000000000000000242931577139
    
      $ python fib.py 5
      5.000000000000000607328942851
    
      $ python fib.py 50000
      1.077773489309106593392984005E+10449
    
      $ python fib.py 50309230
      8.147201098193506535089177522E+10514006
    

Or does it?

    
    
      $ python fib.py 50309230390
      decimal.Overflow: above Emax
    

This was fun. Computing fib(50309230390) is left as an exercise for the
reader.

Wolfram alpha verifies this is correct for Fib 50,000:
[http://www.wolframalpha.com/input/?i=Fib+50000](http://www.wolframalpha.com/input/?i=Fib+50000)
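For what it's worth, the decimal.Overflow above comes from the default
context's exponent cap (Emax = 999999), not from a hard limit. Raising it lets
the closed form reach fib(50309230390) -- though Decimal(sqrt(5)) still only
carries float accuracy in, so only the leading digits are meaningful. A sketch
(the precision and exponent limits below are arbitrary picks):

```python
from decimal import Decimal, getcontext
from math import sqrt

ctx = getcontext()
ctx.prec = 30            # working precision in significant digits
ctx.Emax = 10**11        # default is 999999; fib(50309230390) has ~1e10 digits
ctx.Emin = -(10**11)     # let the conjugate term shrink without underflowing

def fib(n):
    r5 = Decimal(sqrt(5))
    x = pow((1 + r5) / 2, n)
    x -= pow((1 - r5) / 2, n)
    return x / r5
```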

~~~
webo
This shows nothing but one's ability to code up a math formula. The blog post
goes into much more detail about how to solve the problem using matrices and
memoization.

Edit: Looks like sillysaurus3 edited his comment.

~~~
sillysaurus3
I wasn't trying to one-up anyone. I thought it was interesting, so I figured
others might find it interesting too. In particular, it was surprising to find
that there's a closed-form solution for the Fibonacci sequence. Sorry.

~~~
obfuskater
Don't be sorry! I found your comment very interesting and was also surprised
to find that there exists a closed-form solution.

~~~
sillysaurus3
The whole book is pretty fascinating, "Mathematics for Computer Science" by
Lehman and Leighton:
[http://www.cs.princeton.edu/courses/archive/spring10/cos433/...](http://www.cs.princeton.edu/courses/archive/spring10/cos433/mathcs.pdf)

The formula came from section 17.3.2, which goes into detail about how to
derive it.

~~~
obfuskater
Thanks. I like the way they introduce what a proof is in chapter 1, I feel
like they have a good, subtle sense of humor. I don't have a mathematical
background so it might be a tough read for me, but it seems like a great
resource.

------
dastbe
An issue here is that the Nth Fibonacci number has O(N) bits, so once you
move into arbitrary-precision arithmetic your +'s and *'s will become O(N)
and O(N log N) respectively.

------
isani
Factoring into powers of 2 seems to me like an unnecessary complication. It's
possible to calculate an arbitrary power in O(log N) time without memoization.

    
    
      def __get_matrix_power(self, M, p):
        if p == 1:
          return M
        if p % 2 == 1: # odd power
          return self.__multiply_matrices(M, self.__get_matrix_power(M, p - 1))
        else: # even power
      K = self.__get_matrix_power(M, p // 2)
          return self.__multiply_matrices(K, K)
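For anyone who wants to run this outside the article's class, here's a
self-contained version of the same idea (the multiply_matrices helper is
filled in by hand, it's not from the post):

```python
def multiply_matrices(A, B):
    # 2x2 integer matrix product
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

def get_matrix_power(M, p):
    if p == 1:
        return M
    if p % 2 == 1:  # odd power
        return multiply_matrices(M, get_matrix_power(M, p - 1))
    K = get_matrix_power(M, p // 2)  # even power
    return multiply_matrices(K, K)

def fib(n):
    # [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    if n == 0:
        return 0
    return get_matrix_power([[1, 1], [1, 0]], n)[0][1]
```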

~~~
nabb
Also working on pairs of consecutive fibonacci numbers (f_n, f_(n+1)) instead
of the matrix [[f_(n+1) f_n] [f_n f_(n-1)]] makes this much simpler.

    
    
      def fib(n):
        def fib2(n):
          # returns (f_n, f_(n+1))
          if n == 0:
            return 0, 1
          if n % 2:
            a, b = fib2(n-1)
            return b, a+b
          a, b = fib2(n//2)
          return b*a + a*(b-a), a*a + b*b
        return fib2(n)[0]

------
nealwu
As mentioned, the actual number has O(N) bits and thus can't actually be
computed in O(log N) time. So if you just want an approximation, you may as
well use Binet's formula. [1]

 _However_ , one thing this algorithm can do that Binet's formula can't is
compute the N-th Fibonacci number modulo some constant M in O(log N) time. So
you could very efficiently compute, say, the last 12 digits of F_123456789. A
lot of programming contests (TopCoder, Google Code Jam, etc.) will ask for the
answer modulo some constant to take advantage of this trick.

[1] [http://en.wikipedia.org/wiki/Fibonacci_number#Closed-
form_ex...](http://en.wikipedia.org/wiki/Fibonacci_number#Closed-
form_expression)

~~~
nhaehnle
A really neat observation here is that Binet's formula and the matrix
exponentiation trick are arguably the same thing.

How so? You can think of the operations of Binet's formula as happening in the
field Q(sqrt{5}). Since sqrt{5} is a square root, every element of this field
has a unique representation of the form a + b sqrt{5} with a and b rational,
and so the field can be said to have dimension 2 over the rationals. When you
squint at how the field operations work in this representation, you'll find
that you end up doing something that looks very similar to the 2x2 matrix
multiplications trick; with a bit of rearrangement, it's the same thing -
except that the matrix multiplication trick is usually derived via dynamic
programming, and Binet's formula is usually derived via power series.
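To make that concrete, here's a sketch that runs Binet's formula exactly in
Q(sqrt(5)), representing a + b*sqrt(5) as a pair of rationals. Since psi^n is
just the conjugate of phi^n, the answer falls out as 2b:

```python
from fractions import Fraction

def mul(x, y):
    # (a + b*sqrt5)(c + d*sqrt5) = (ac + 5bd) + (ad + bc)*sqrt5
    a, b = x
    c, d = y
    return (a * c + 5 * b * d, a * d + b * c)

def fib(n):
    # raise phi = 1/2 + (1/2)*sqrt5 to the nth power by repeated squaring
    result = (Fraction(1), Fraction(0))          # 1 + 0*sqrt5
    base = (Fraction(1, 2), Fraction(1, 2))      # phi
    while n:
        if n & 1:
            result = mul(result, base)
        base = mul(base, base)
        n >>= 1
    # psi^n is the conjugate a - b*sqrt5 of phi^n = a + b*sqrt5,
    # so F_n = (phi^n - psi^n)/sqrt5 = 2b
    a, b = result
    return int(2 * b)
```

Note how mul is doing essentially the same bookkeeping as a 2x2 matrix
multiplication, which is the point.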

The fact that things often fit together so nicely is perhaps the most
beautiful aspect of mathematics.

------
wb14123
I remember there is a formula that gives the Nth Fibonacci number directly.
(Maybe it's in the exercises of Concrete Mathematics by Graham, Knuth, and
Patashnik.)

~~~
hacknat
[https://en.wikipedia.org/wiki/Fibonacci_number#Closed-
form_e...](https://en.wikipedia.org/wiki/Fibonacci_number#Closed-
form_expression)

If ever there was a way to impress an interviewer it is to bust out this baby
when asked to make a fibonacci function.

~~~
mytochar
Personally, I think the matrix powers method is more impressive. You're using
the matrix as a state engine to get to whatever state you intend to reach.
That has more applications than just the closed form, which a lot of people
know about, and you most likely just happened to memorize.

Don't get me wrong, it'd still be impressive "Woah, he memorized this!" but
seeing someone do the matrix trick would be even more impressive, to me.

~~~
fdej
The closed form is just the diagonalized version of the matrix power, though.
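Concretely: the step matrix has eigenvalues phi and psi, and diagonalizing it
is exactly Binet's formula.

```latex
A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}
  = P \begin{pmatrix} \varphi & 0 \\ 0 & \psi \end{pmatrix} P^{-1},
\qquad
P = \begin{pmatrix} \varphi & \psi \\ 1 & 1 \end{pmatrix},
\qquad
\varphi = \tfrac{1+\sqrt5}{2},\;\; \psi = \tfrac{1-\sqrt5}{2},

A^n = P \begin{pmatrix} \varphi^n & 0 \\ 0 & \psi^n \end{pmatrix} P^{-1}
\quad\Longrightarrow\quad
F_n = (A^n)_{12} = \frac{\varphi^n - \psi^n}{\varphi - \psi}
    = \frac{\varphi^n - \psi^n}{\sqrt5}.
```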

~~~
mytochar
I honestly didn't know that until it was mentioned in another part of this
thread. My point was that someone can memorize the formula. Chances are,
they'll see the matrix form and actually know how to use that in other
situations.

Now, if you derived the formula on the board from the matrix, that'd be
incredibly impressive. The idea here is how the person's brain thinks. When I
see them write an equation on the board, I think "Oh, they memorized it";
when I see them step through the process of creating a matrix and doing
matrix multiplication, I think "Wow, they /really/ know their stuff". If they
just threw up the matrix and couldn't explain it, I wouldn't be as impressed.
Same as if they wrote the equation and couldn't explain it.

At the end of the day, if they could explain either approach, I'd be
impressed. I just think the matrix form lends itself more to the kind of
understanding the person can actually explain.

------
richardkiss
I wrote about this last December:

[http://blog.richardkiss.com/?p=398](http://blog.richardkiss.com/?p=398)

The matrix math is easier, and the Python code is about a dozen lines.

------
lscharen
This brings back some memories. I remember talking about the different ways
of calculating the Fibonacci numbers in an old comp.lang.java usenet thread
from 2007 about recursion, memoization and dynamic programming:

[http://coding.derkeiler.com/Archive/Java/comp.lang.java.prog...](http://coding.derkeiler.com/Archive/Java/comp.lang.java.programmer/2007-10/msg01792.html)

The thread discusses the exponential, linear and logarithmic algorithms.

