
A Fairly Fast Fibonacci Function - olooney
http://www.oranlooney.com/post/fibonacci/
======
jfarmer
It's possible to use Binet's formula, too, if you implement the exact
arithmetic of ℚ(φ).

φ is the golden ratio and ℚ(φ) is the set of all numbers of the form a + bφ
where a,b are rational numbers.

So if you create a class like PhiRational where PhiRational(a,b) represents a
+ bφ then Binet's formula

    
    
        Fib(n) = (φ**n - (1-φ)**n) / √5
    

becomes

    
    
        Fib(n) = (PhiRational(0,1)**n - PhiRational(1,-1)**n) / PhiRational(-1,2)
    

I wrote up an implementation in Ruby years ago for some students, along with
the implementations listed in the blog post and benchmarking code:
[http://bit.ly/ruby_binet](http://bit.ly/ruby_binet)

Remember φ = (1 + √5)/2, so that the √5 in the denominator can be written as
-1 + 2φ (i.e., PhiRational(-1,2)).
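A minimal Python sketch of this exact ℚ(φ) arithmetic (my own illustration, not the linked Ruby code; the class name PhiRational is just the one suggested above):

```python
from fractions import Fraction

class PhiRational:
    """Represents a + b*phi with a, b rational, using phi**2 == phi + 1."""
    def __init__(self, a, b):
        self.a, self.b = Fraction(a), Fraction(b)

    def __sub__(self, other):
        return PhiRational(self.a - other.a, self.b - other.b)

    def __mul__(self, other):
        # (a + b*phi)(c + d*phi) = ac + (ad + bc)*phi + bd*(phi + 1)
        a, b, c, d = self.a, self.b, other.a, other.b
        return PhiRational(a*c + b*d, a*d + b*c + b*d)

    def __truediv__(self, other):
        # Multiply by the conjugate (c + d) - d*phi; the norm c^2 + cd - d^2
        # is rational, so division stays exact.
        c, d = other.a, other.b
        norm = c*c + c*d - d*d
        return self * PhiRational((c + d) / norm, -d / norm)

    def __pow__(self, n):
        result, base = PhiRational(1, 0), self
        while n:                     # exponentiation by squaring
            if n & 1:
                result = result * base
            base = base * base
            n >>= 1
        return result

def fib(n):
    phi   = PhiRational(0, 1)
    psi   = PhiRational(1, -1)       # 1 - phi
    sqrt5 = PhiRational(-1, 2)       # -1 + 2*phi
    r = (phi**n - psi**n) / sqrt5
    return int(r.a)                  # the phi-coefficient comes out zero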

When teaching I like showing all these solutions because it throws a wrench in
(beginning) students' ideas about how shallow/deep simple exercises can be and
the relationship between math, recursion, efficiency, etc.

A lot of beginning students see the naïve recursive solution as mathematical-
but-inefficient and draw the conclusion that the math is nice, but ultimately
isn't practical. Then you show them an even-faster implementation using more
math and they're pretty surprised.

~~~
jacobolus
That works out to be the same arithmetic. The arithmetic of the “golden
integers” is identical to the arithmetic of the matrix described here.

For anyone who wants to play with “golden integers” or “golden rational” numbers,
[https://beta.observablehq.com/@jrus/zome-
arithmetic](https://beta.observablehq.com/@jrus/zome-arithmetic) or see a
bunch of concrete uses in notebooks at
[https://beta.observablehq.com/@vorth/](https://beta.observablehq.com/@vorth/)

~~~
jfarmer
Of course. And if a student realizes that — or it's pointed out constructively
— a lightbulb might go off!

The (pedagogical) advantages of ℚ(φ) are that it seems like less of a trick to
the student and Binet's formula is more apparent. It also gives them more
surface area to explore.

A lot of students believe that mathematical know-how and practical problem-
solving are at odds (especially since recursion is inherently "mathematical"
to most beginners). IME exercises like this help prevent that false dichotomy
from forming.

So, algorithmically equivalent, pedagogically distinct.

(BTW, not saying that the matrix implementation is bad or anything, it's the
contrast in appearance and equivalence in computation together that makes for
the learning.)

~~~
daveFNbuck
> The (pedagogical) advantages of ℚ(φ) are that it seems like less of a trick
> to the student and Binet's formula is more apparent.

That seems very counter-intuitive to me, as the matrix form is a direct
expression of the Fibonacci function's definition, and Binet's formula follows
from the eigenvalues and eigenvectors of the matrix. I guess this ties in to
the students you're teaching not liking math?

> A lot of students believe that mathematical know-how and practical problem-
> solving are at odds (especially since recursion is inherently "mathematical"
> to most beginners).

This is also counter-intuitive to me. I was under the impression that
beginners tend to overestimate the importance of mathematical skill to
programming. Do you have a more concrete example, or an explanation of what
you mean by recursion not being seen as practical for problem solving?

~~~
User23
The number of off-by-one errors I see in "professional" code suggests to me
that professionals underestimate the importance of mathematical reasoning to
programming.

~~~
daveFNbuck
I think this is a good example of what I'm talking about with people
overestimating the importance of mathematics to programming. Taking more math
classes won't help you avoid off-by-one errors.

~~~
User23
No, but learning how to program correctly using mathematical reasoning will.
See for example _A Discipline of Programming_. If you derive a loop using
those or similar techniques, you will never have an off-by-one error, among
others. And once you achieve proficiency it's no more difficult than basic
arithmetic.
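A tiny example of the style (my illustration, not from Dijkstra's book): choose the invariant first, then let it dictate the guard and the loop body.

```python
def total(xs):
    # Goal: s == sum(xs). Invariant: s == sum(xs[:i]).
    i, s = 0, 0                 # invariant holds trivially: sum(xs[:0]) == 0
    while i != len(xs):         # guard chosen so invariant + exit imply the goal
        s += xs[i]              # re-establish the invariant for i + 1
        i += 1
    return s                    # i == len(xs), so s == sum(xs)
```

The bounds i = 0 and i != len(xs) aren't guessed at; they fall out of making the invariant hold initially and making it deliver the goal on exit, which is exactly where off-by-one errors usually creep in.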

There may be a lack of clarity here though. I am referring to the actual
technical skill of writing computer programs. If by "programming" you mean
"the activities you need to do to have a job as a software engineer" then I
can attest that no actual ability to write programs is required whatsoever, or
at least the walking proof by construction who showed me this provided no
evidence thereof and still maintains their position. Which is sad, since even
people with no real idea of how to write programs can be somewhat productive
via copy-paste cargo cult programming.

~~~
daveFNbuck
You don't need any special reasoning to know how to write a loop. This is just
a basic thing people get taught.

What does it mean to use mathematical reasoning to derive a loop, and what
error does this derivation prevent?

------
morei
It seems rather weird to use an explicit cache for the dynamic programming
exponentiation.

When computing x^n, n is either even or odd. If it's even, then the result is
(x^(n/2))^2, else it's odd and the result is x * (x^(n/2))^2. i.e.

    
    
      def square(x):
        return x * x

      def pow(x, n):
        if n == 0:
          return 1
        if (n & 1) == 1:
          return x * square(pow(x, n//2))
        else:
          return square(pow(x, n//2))
    

Does ~ the minimum number of multiplies, no LRU cache required.

~~~
Someone
For those wondering about that ~:
[https://en.wikipedia.org/wiki/Addition_chain](https://en.wikipedia.org/wiki/Addition_chain):

 _”There is no known algorithm which can calculate a minimal addition chain
for a given number with any guarantees of reasonable timing or small memory
usage. However, several techniques to calculate relatively short chains exist.
One very well known technique to calculate relatively short addition chains is
the binary method, similar to exponentiation by squaring.”_

~~~
morei
That's why I said "~ the minimum" :)

In particular, it does at most the same number of multiplies as the LRU cache
version in the OP.

------
nightcracker
Here's a fairly slow obfuscated Fibonacci function in Python I wrote a long
time ago:

    
    
        f=lambda n:(4<<n*(3+n))//((4<<2*n)-(2<<n)-1)&~-(2<<n)
    

As you can see it's a strictly integer arithmetic closed form. Bonus points to
those who can figure out how it works.

~~~
edflsafoiewq
Amazing!! I had fun figuring it out. My analysis:
[https://pastebin.com/8hzSmxBu](https://pastebin.com/8hzSmxBu)

~~~
nightcracker
You pretty much got it, although the analysis becomes a lot simpler if you
consider evaluating the generating function in base b by substituting x = b.
I'll use multiples of 10 here for visual understanding:

    
    
        >>> import decimal
        >>> decimal.getcontext().prec = 50
        >>> b = decimal.Decimal(10**3)
        >>> b/(b**2 - b - 1)
        Decimal('0.0010010020030050080130210340550891442333776109885996')
    

For a large enough b we don't have to worry about overflow from the next
terms. So then we can shift k terms, mod b to get rid of the earlier terms,
and floor to get rid of (the infinite) later terms:

    
    
        >>> k = 6
        >>> b**k * b/(b**2 - b - 1)
        Decimal('1001002003005008.0130210340550891442333776109885996')
                 ^^^^^^^^^^^^^    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                    mod b                       floor
    

So for any sufficiently large b we have floor(b^(k+1)/(b^2 - b - 1)) % b. If
we let b = 2^(k+c) for a sufficiently large constant c, asymptotics show this
b is large enough for large k, and checking the first couple of numbers shows
that b = 2^(k+1) works.
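A quick empirical check of the lambda against a plain iterative reference (my addition; as far as I can tell the indexing works out so that the lambda returns F(n+1) for n ≥ 1, while n = 0 is clobbered by carries because b = 2 is too small):

```python
f = lambda n: (4 << n*(3+n)) // ((4 << 2*n) - (2 << n) - 1) & ~-(2 << n)

def fib(n):
    # Plain iterative Fibonacci as a reference.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# For n >= 1 the obfuscated closed form agrees with F(n+1).
for n in range(1, 40):
    assert f(n) == fib(n + 1)
```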

------
nimish
There's a more fundamental way to derive the exponential form: linear
difference equations are analogous to linear differential equations, so we
should look at them in terms of actions on some vector space of functions.
Similarly, we should also look at eigenfunctions, which end up being of the
form a_i*C_i^n for some constants C_i, a_i, where i = 0..{dimension of the
nullspace of the overall difference operator}. You can then substitute that
form into the equation to find the C_i and a_i using the initial
conditions.

It's exactly like solving a differential equation.
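For Fibonacci concretely, that recipe is the standard characteristic-root derivation:

```latex
F_n = F_{n-1} + F_{n-2};\quad \text{try } F_n = C^n \implies C^2 = C + 1
  \implies C \in \{\varphi, \psi\},\quad
  \varphi = \tfrac{1+\sqrt{5}}{2},\ \psi = \tfrac{1-\sqrt{5}}{2}
F_n = a\varphi^n + b\psi^n;\quad
  F_0 = 0 \implies b = -a;\quad
  F_1 = 1 \implies a(\varphi - \psi) = 1 \implies a = \tfrac{1}{\sqrt{5}}
\therefore\quad F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}
  \quad\text{(Binet's formula)}
```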

Another path is to look at formal power series, and a good book is
Generatingfunctionology.

Graham, Knuth, and Patashnik's Concrete Mathematics covers the basics very
well and goes into much more detail on finite difference calculus.

~~~
romwell
>There's a more fundamental way...

I'd wager there's nothing more fundamental about it. You are still
finding eigenvalues and eigenvectors of a linear operator. The matrix is
exactly the same.

~~~
nimish
It's more fundamental since you're working with the underlying vector space
and linear operator vs representing it as a matrix over R^n, running a
decomposition, then converting back. No need to change domains, but yes they
are isomorphic (which is the point).

------
nayuki
More succinctly: [https://www.nayuki.io/page/fast-fibonacci-
algorithms](https://www.nayuki.io/page/fast-fibonacci-algorithms)
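Among the algorithms on that page is fast doubling, based on the identities F(2k) = F(k)·(2F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)². A minimal Python sketch of those identities:

```python
def fib_pair(n):
    """Return (F(n), F(n+1)) using the fast doubling identities."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n >> 1)        # a = F(k), b = F(k+1), k = n // 2
    c = a * (2 * b - a)            # F(2k)
    d = a * a + b * b              # F(2k+1)
    return (d, c + d) if n & 1 else (c, d)

def fib(n):
    return fib_pair(n)[0]
```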

~~~
make3
doesn't even mention the closed form O(1) solution..

~~~
jacobolus
It’s cheating (or at best misleading) to allow arbitrarily expensive
operations to be considered “constant time”.

Raising a very-high-precision approximation of an irrational number to a large
integer power will get slower and slower as you handle larger numbers.

You might just as well call the matrix version O(1).

~~~
TheRealPomax
Not sure if you noticed that's literally what they did at the end of the
"Eigenvalue Solution" section, but it reads "So there you have it – a O(1)
algorithm for any Fibonacci number."

Although they do note that eigen_fib() completely breaks down once the numbers
involved are larger than fit in a 64 bit integer.

~~~
jacobolus
The previous commenter was criticizing [https://www.nayuki.io/page/fast-
fibonacci-algorithms](https://www.nayuki.io/page/fast-fibonacci-algorithms)
(for not mentioning the supposedly O(1) solution), not
[http://www.oranlooney.com/post/fibonacci/](http://www.oranlooney.com/post/fibonacci/)

------
vortico
>In our case, the problem is no longer to calculate Fibonacci numbers – the
problem is now to find a way to multiply large integers together efficiently.
As far as I can tell, GMP is already state-of-the-art when it comes to that,
and tends to come out ahead on most benchmarks.

And actually, the best known method of multiplying large integers is achieved
with an FFT over the integers mod 2^n (which is what GMP does). So then your
task is changed yet again to optimizing a modular FFT algorithm...

~~~
remcob
Multiplication in decimal systems is inefficient. FFT multiplication works by
converting the numbers to and from a more efficient representation where
multiplication is O(n) and embarrassingly parallel (convolution vs pointwise
multiplication).

If I'm not mistaken addition is also trivial in this representation, and since
no other operations are required, the entire computation can be done in the
FFT space.

This definitely works in a residue number system, where a Chinese remainder
transform is used instead of an FFT. (The CRT and FFT are algebraically related.)

In short, you can create a massive parallel cluster of computers computing
parts of the result without interaction, and then in the end combine the
results using a single huge pass of FFT.

------
taeric
This article was a lot more involved than I was expecting. It was quite
refreshing to see it go through pretty much every trick method I have ever
heard of, and then keep going for a long time. Well done!

------
noblestone
For students learning introduction to algorithms, I have implementations for
12 simple Fibonacci number computation algorithms at
[https://github.com/alidasdan/fibonacci-number-
algorithms](https://github.com/alidasdan/fibonacci-number-algorithms) . A
related paper is at
[https://arxiv.org/abs/1803.07199](https://arxiv.org/abs/1803.07199) . Hope
they can be useful.

------
robinhouston
Something that people often seem to miss when talking about this sort of thing
is that _any_ correct exact algorithm to compute Fibonacci numbers must use
exponential time and space — simply because the size of the correct exact
output is exponential in the size of the input.

~~~
madcaptenor
No, the size of the output is linear in the size of the input. (The Fibonacci
numbers themselves grow exponentially, but the size of the output goes like
the log of the number itself.)

~~~
romwell
* linear in _input_, not _size of the input_ :)

~~~
madcaptenor
You're right. I stand corrected.

------
subjoriented
This is one of my favorite forms of fibonacci, because it unwinds the
recurrence relation without having to apply some kind of relation/master-
theorem to it. Rather it describes it as a relation in a way that allows
square-and-multiply.

------
haykmartiros
The article claims the scalar exponential solution is O(1) - this is incorrect
because the exponential of a scalar is still O(logN).

~~~
contravariant
What N?

Floating point operations might be limited in precision (and range, to some
extent), but pretty much any you'll normally encounter will be O(1). Unless
you require arbitrary precision.

~~~
umanwizard
By exactly the same reasoning, every algorithm is O(1) because your computer
is a finite object and so the size of the data is bounded.

------
jules
It may not seem like it, but you can generalise this to any matrix. Compute
the minimal polynomial of the matrix, that is, the least-degree polynomial P
such that P(A) = 0 and the leading coefficient is 1. Then, if you want to
compute A^n, first compute the polynomial x^n mod P, then plug in A.

In this case P(x) = x^2 - x - 1, and any polynomial Q looks like ax + b mod P,
so we only have to keep track of two numbers (a,b), instead of the four
numbers in the matrix A^n.

In general this allows you to reduce the k^2 numbers to k numbers.
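A sketch of that in Python for the Fibonacci case (my illustration; here x^n mod (x^2 - x - 1) works out to F(n)*x + F(n-1), so the coefficient pair (a, b) carries the answer):

```python
def polmul(p, q):
    # Multiply (a1*x + b1)(a2*x + b2) modulo P(x) = x^2 - x - 1,
    # reducing x^2 -> x + 1.
    a1, b1 = p
    a2, b2 = q
    t = a1 * a2
    return (t + a1 * b2 + a2 * b1, t + b1 * b2)

def fib(n):
    # Square-and-multiply on pairs (a, b) representing a*x + b,
    # i.e. compute x^n mod (x^2 - x - 1) = F(n)*x + F(n-1).
    result = (0, 1)            # the constant polynomial 1
    base = (1, 0)              # the polynomial x
    while n:
        if n & 1:
            result = polmul(result, base)
        base = polmul(base, base)
        n >>= 1
    return result[0]
```

Only two numbers are tracked per step, versus the four entries of the matrix power.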

------
QML
The fastest way to compute a Fibonacci number is to simply look it up on the
Internet -- not kidding. There was a coding challenge that I did a couple
months back which required the cumulative sum of powers of two and the
fibonacci sequence, and I just did that. We can talk about faster, general
algorithms but most of the time, we are bounded by resources. For example, for
the nth Fibonacci number, n will be bounded by some constant. So why not look
it up? It's good for playing with theory though.

Edit: Fibonacci may be a specific example, but I wondered how much wasted
computation has been spent on calculating the same problem on the same input.

~~~
majewsky
If you're wondering about wasted computation, consider how much energy is
expended when you look up a Fibonacci number on the internet.

------
rinchik
I believe Binet's formula is the fastest:
[http://www.maths.surrey.ac.uk/hosted-
sites/R.Knott/Fibonacci...](http://www.maths.surrey.ac.uk/hosted-
sites/R.Knott/Fibonacci/fibFormula.html)

~~~
aprescott
The article mentions it.

> There exist several closed-form solutions to Fibonacci sequence which gives
> us the false hope that there might be an O(1) solution. Unfortunately they
> all turn out to be non-optimal if you want an exact solution for a large n.

