
Fast Fibonacci Algorithms (2015) - old_sound
https://www.nayuki.io/page/fast-fibonacci-algorithms
======
OskarS
It's interesting to note that if you add a modulus to the calculation (i.e. if
you calculate F(n) mod M), you can calculate ridiculously large values using
the matrix/doubling method. You can, for instance, calculate the last 10
digits of the googolth Fibonacci number instantly (since you only need O(log
n) multiplications, that's just a few hundred for 10^100). Doing it the
"naive" dynamic programming way would take far longer than the age of the
universe.

It's a neat illustration of asymptotic running times.
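The trick fits in a few lines; here's a sketch of fast doubling with the modulus applied at every step (my code, not the article's):

```python
# Fast doubling with a modulus, using the identities
#   F(2k)   = F(k) * (2*F(k+1) - F(k))
#   F(2k+1) = F(k)^2 + F(k+1)^2
# Only O(log n) multiplications, all on numbers smaller than ~m^2.

def fib_mod(n, m):
    """Return F(n) mod m."""
    def doubling(n):
        # returns the pair (F(n) mod m, F(n+1) mod m)
        if n == 0:
            return (0, 1)
        a, b = doubling(n >> 1)
        c = a * (2 * b - a) % m        # F(2k) mod m
        d = (a * a + b * b) % m        # F(2k+1) mod m
        return (d, (c + d) % m) if n & 1 else (c, d)
    return doubling(n)[0]

# last 10 digits of the googolth Fibonacci number, instantly
print(fib_mod(10**100, 10**10))
```

Note that Python's `%` always returns a nonnegative result, so the `2*b - a` term needs no special handling.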

~~~
nayuki
Correct! Calculating Fibonacci modulo a number is used in quite a few Project
Euler problems.

------
rawnlq
The fast exponentiation algorithm actually generalizes:

[http://kukuruku.co/hub/algorithms/automatic-algorithms-optimization-via-fast-matrix-exponentiation.html](http://kukuruku.co/hub/algorithms/automatic-algorithms-optimization-via-fast-matrix-exponentiation.html)
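The underlying idea (a hedged sketch of mine, not the linked post's decorator): any k-term linear recurrence a(n) = c1·a(n-1) + ... + ck·a(n-k) reduces to fast exponentiation of its companion matrix, for O(k^3 log n) arithmetic operations overall:

```python
# Generic linear recurrence via fast exponentiation of the companion matrix.

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(M, e):
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:                       # square-and-multiply
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

def linrec(coeffs, init, n):
    """a(n) for a(i) = init[i] (i < k) and a(n) = sum(coeffs[j] * a(n-1-j))."""
    k = len(coeffs)
    if n < k:
        return init[n]
    # companion matrix: first row holds the coefficients, rows below shift
    C = [coeffs] + [[int(j == i) for j in range(k)] for i in range(k - 1)]
    P = mat_pow(C, n - k + 1)
    # state vector at start is [a(k-1), ..., a(0)]
    return sum(P[0][j] * init[k - 1 - j] for j in range(k))

print(linrec([1, 1], [0, 1], 10))      # Fibonacci F(10) -> 55
print(linrec([1, 1, 1], [0, 1, 1], 6)) # Tribonacci a(6) -> 13
```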

~~~
carapace
That is amazing. (Go look! A Python decorator that edits the bytecode...)

------
adilparvez
There is a log-time way to do this: use Binet's formula
([https://en.m.wikipedia.org/wiki/Fibonacci_number#Closed-form_expression](https://en.m.wikipedia.org/wiki/Fibonacci_number#Closed-form_expression)),
but do the arithmetic in the field Q[√5].

Edit: Someone's implementation:
[http://ideone.com/NWQe38](http://ideone.com/NWQe38)

Edit: constant time -> log time
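A sketch of the idea (mine, not the linked implementation): represent (a + b√5)/2 as the integer pair (a, b) and square-and-multiply; then φ^n = (L(n) + F(n)√5)/2 where L(n) is the nth Lucas number, so the √5-coefficient of φ^n is exactly F(n). The halvings stay exact because a and b always have the same parity.

```python
# Binet's formula with exact arithmetic: exponentiate phi = (1 + sqrt5)/2
# symbolically as a pair (a, b) standing for (a + b*sqrt5)/2.

def fib_binet(n):
    def mul(x, y):
        a, b = x
        c, d = y
        # ((a+b√5)/2) * ((c+d√5)/2) = ((ac+5bd)/2 + ((ad+bc)/2)√5) / 2
        # Both divisions are exact since a ≡ b and c ≡ d (mod 2).
        return ((a * c + 5 * b * d) // 2, (a * d + b * c) // 2)

    result = (2, 0)   # 1 = (2 + 0*√5)/2
    base = (1, 1)     # phi
    e = n
    while e:          # square-and-multiply: O(log n) multiplications
        if e & 1:
            result = mul(result, base)
        base = mul(base, base)
        e >>= 1
    return result[1]  # √5-coefficient of phi^n, i.e. F(n)

print(fib_binet(10))  # -> 55
```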

~~~
yaks_hairbrush
That's log(n) time due to the exponentiation.

~~~
stephencanon
It's log(n) multiplications, each of which requires something like O(n log n
log log n) time [1]; so the actual complexity is O(n (log n)^2 log log n) ish.

[1] Schönhage-Strassen multiplication, one of many bignum multiplication
algorithms with sub-quadratic complexity. Other algorithms in common use have
worse asymptotic performance.

~~~
yaks_hairbrush
Don't know why you were downvoted, you're absolutely correct. At first glance
your n and my n are different, with mine (n1) being the input to the function
and yours representing the number of digits in a multiplicand (n2). n1 and n2
are linearly related, though, since the exponentiation process increases the
number of digits in a linear fashion.

It's also worth noting that since the Fibonacci sequence grows exponentially,
the number of digits in the result will be linear in n. Therefore, the memory
requirement is actually O(n), and writing the result to the screen will be
O(n).

~~~
stephencanon
> At first glance your n and my n are different

Right; as you sketch out, there's some constants that disappear into the O()
notation.

------
jheriko
for really big numbers there are the generalisations of karatsuba and toom-
cook for higher numbers of splits, and beyond that there are the FFT
multiplication methods which i believe are the fastest general case methods
for very large numbers.

also the laddering scheme used for the 'exponentiation' can be in different
forms, the left-to-right or right-to-left form or even a Montgomery ladder...
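For the curious, the two plain binary ladders look like this with integers (a hedged sketch; the same shapes apply to the 2x2 Fibonacci matrix, and the Montgomery ladder variant updates two accumulators on every bit so the operation pattern doesn't depend on the exponent):

```python
# The two classic binary-ladder forms for computing x^e.

def pow_rtl(x, e):
    # right-to-left: scan exponent bits from the low end,
    # squaring the base as we go
    result = 1
    while e:
        if e & 1:
            result *= x
        x *= x
        e >>= 1
    return result

def pow_ltr(x, e):
    # left-to-right: square-and-multiply from the high bit down
    result = 1
    for bit in bin(e)[2:]:
        result *= result
        if bit == '1':
            result *= x
    return result

print(pow_rtl(3, 13), pow_ltr(3, 13))  # both 1594323
```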

i seem to recall there is a 'fastest known' method for generating fibonacci
or lucas numbers... but google is not helping me.

pretty sure i had seen it in this book:
[https://www.amazon.co.uk/Prime-Numbers-Computational-Carl-Pomerance/dp/0387252827](https://www.amazon.co.uk/Prime-Numbers-Computational-Carl-Pomerance/dp/0387252827)
i'll have to check when i have access to it next. :)

------
jerven
The BigInteger implementation in Java uses better multiplication algorithms in
Java 1.8 than in the version of 1.6 tested [1], switching to Toom-Cook [2] for
even larger numbers. So in practice, Fibonacci in Java becomes a benchmark of
allocation rate.

[1]:
[http://grepcode.com/file/repository.grepcode.com/java/root/j...](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/java/math/BigInteger.java?av=f#1464)

[2]:[https://en.wikipedia.org/wiki/Toom%E2%80%93Cook_multiplicati...](https://en.wikipedia.org/wiki/Toom%E2%80%93Cook_multiplication)

------
brudgers
For anyone interested in Fibonacci numbers: _Fibonacci Quarterly_ has been
published for more than fifty years.

[http://www.fq.math.ca/](http://www.fq.math.ca/)

------
kitanata
Isn't there a constant time implementation using the Golden Mean? What is the
advantage of these algorithms over the constant time one? (i.e. Binet's
formula)

~~~
jpfr
Constant time fibonacci is derived as an example for linear basis change in
the superb book "Vector Calculus, Linear Algebra, and Differential Forms: A
Unified Approach".

It's all contained in the sample pages
[http://matrixeditions.com/VC5.Chap2.219-221.pdf](http://matrixeditions.com/VC5.Chap2.219-221.pdf)

However, even though the formula takes O(1) arithmetic operations, the runtime
of the algorithm might be larger, simply because a standard 64-bit integer
might be too small and the BigNum implementation needs more time to multiply
all the bits.
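One way to see the point (my illustration, not from the book): evaluate the closed form in 64-bit floats and it silently goes wrong once F(n) outgrows the 53-bit significand, so exact big-number arithmetic, whose cost grows with n, is unavoidable:

```python
# Floating-point Binet vs. exact iteration.
import math

def fib_float(n):
    phi = (1 + math.sqrt(5)) / 2
    return round(phi ** n / math.sqrt(5))

def fib_exact(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_float(40) == fib_exact(40))    # True: still within float precision
print(fib_float(100) == fib_exact(100))  # False: F(100) needs ~69 bits
```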

~~~
leephillips
I really like the style of that book! I want to get it, but I'm hesitating at
the $87 pricetag. Looks very nice, though....

------
jmstfv
So, seems like matrix exponentiation is faster than computing eigenvalues /
eigenvectors in practice? Take a look:
[http://mathoverflow.net/questions/62904/complexity-of-eigenvalue-decomposition](http://mathoverflow.net/questions/62904/complexity-of-eigenvalue-decomposition)

------
chaitanyav
Product of Lucas Numbers Algorithm - "A fast algorithm for computing large
Fibonacci numbers"
- [https://pdfs.semanticscholar.org/9652/9e1fedb0a372b825215c7b471a99088f4515.pdf](https://pdfs.semanticscholar.org/9652/9e1fedb0a372b825215c7b471a99088f4515.pdf)

~~~
chaitanyav
Also wrote a gem that implements this algorithm here
[https://github.com/chaitanyav/fibonacci](https://github.com/chaitanyav/fibonacci)

~~~
user2994cb
That's nice. Here's a (recursive) Python implementation:

[http://ideone.com/z1sUp4](http://ideone.com/z1sUp4)

------
gandolfinmyhead
fast doubling for fibonacci is mentioned in sicp. Cheers for sicp

------
nayuki
Previous discussion:
[https://news.ycombinator.com/item?id=9315346](https://news.ycombinator.com/item?id=9315346)

------
mrcactu5
the continued fraction of (1+√5)/2 is [1;1,1,1,1,1,1,...]; it goes on forever.

there are more advanced formulas in this article of Curtis McMullen
[http://www.math.harvard.edu/~ctm/papers/home/text/papers/cf/...](http://www.math.harvard.edu/~ctm/papers/home/text/papers/cf/cf.pdf)

