A Fairly Fast Fibonacci Function (oranlooney.com)
131 points by olooney 26 days ago | 58 comments



It's possible to use Binet's formula, too, if you implement the exact arithmetic of ℚ(φ).

φ is the golden ratio and ℚ(φ) is the set of all numbers of the form a + bφ where a,b are rational numbers.

So if you create a class like PhiRational where PhiRational(a,b) represents a + bφ then Binet's formula

    Fib(n) = (φ**n - (1-φ)**n) / √5
becomes

    Fib(n) = (PhiRational(0,1)**n - PhiRational(1,-1)**n) / PhiRational(-1,2)
I wrote up an implementation in Ruby years ago for some students, along with the implementations listed in the blog post and benchmarking code: http://bit.ly/ruby_binet

Remember φ = (1 + √5)/2, so that the √5 in the denominator can be written as -1 + 2φ (i.e., PhiRational(-1,2)).
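Here's a minimal Python sketch of such a class (my code, not the linked Ruby; exact arithmetic via fractions):

    from fractions import Fraction

    class PhiRational:
        """a + b*phi with rational a, b, reduced via phi**2 == phi + 1."""
        def __init__(self, a, b):
            self.a, self.b = Fraction(a), Fraction(b)

        def __sub__(self, other):
            return PhiRational(self.a - other.a, self.b - other.b)

        def __mul__(self, other):
            # (a + b phi)(c + d phi) = (ac + bd) + (ad + bc + bd) phi
            a, b, c, d = self.a, self.b, other.a, other.b
            return PhiRational(a*c + b*d, a*d + b*c + b*d)

        def __truediv__(self, other):
            # a + b phi has conjugate (a + b) - b phi and rational norm
            # a^2 + ab - b^2, so dividing is multiplying by the
            # conjugate over the norm
            a, b = other.a, other.b
            norm = a*a + a*b - b*b
            return self * PhiRational((a + b)/norm, -b/norm)

        def __pow__(self, n):
            result, base = PhiRational(1, 0), self
            while n:                      # square-and-multiply
                if n & 1:
                    result = result * base
                base = base * base
                n >>= 1
            return result

    def fib(n):
        phi, psi = PhiRational(0, 1), PhiRational(1, -1)  # phi and 1 - phi
        x = (phi**n - psi**n) / PhiRational(-1, 2)        # Binet, in Q(phi)
        return int(x.a)                   # the phi-coefficient is always 0

    print([fib(n) for n in range(10)])    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]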

When teaching I like showing all these solutions because it throws a wrench in (beginning) students' ideas about how shallow/deep simple exercises can be, and about the relationship between math, recursion, efficiency, etc.

A lot of beginning students see the naïve recursive solution as mathematical-but-inefficient and draw the conclusion that the math is nice, but ultimately isn't practical. Then you show them an even-faster implementation using more math and they're pretty surprised.


That works out to be the same arithmetic. The arithmetic of the “golden integers” is identical to the arithmetic of the matrix described here.

For anyone who wants to play with “golden integers” or “golden rational” numbers, see https://beta.observablehq.com/@jrus/zome-arithmetic, or a bunch of concrete uses in notebooks at https://beta.observablehq.com/@vorth/


Of course. And if a student realizes that — or it's pointed out constructively — a lightbulb might go off!

The (pedagogical) advantages of ℚ(φ) are that it seems like less of a trick to the student and Binet's formula is more apparent. It also gives them more surface area to explore.

A lot of students believe that mathematical know-how and practical problem-solving are at odds (especially since recursion is inherently "mathematical" to most beginners). IME exercises like this help prevent that false dichotomy from forming.

So, algorithmically equivalent, pedagogically distinct.

(BTW, not saying that the matrix implementation is bad or anything, it's the contrast in appearance and equivalence in computation together that makes for the learning.)


> The (pedagogical) advantages of ℚ(φ) are that it seems like less of a trick to the student and Binet's formula is more apparent.

That seems very counter-intuitive to me, as the matrix form is a direct expression of the Fibonacci function's definition, and Binet's formula follows from the eigenvalues and eigenvectors of the matrix. I guess this ties in to the students you're teaching not liking math?

> A lot of students believe that mathematical know-how and practical problem-solving are at odds (especially since recursion is inherently "mathematical" to most beginners).

This is also counter-intuitive to me. I was under the impression that beginners tend to overestimate the importance of mathematical skill to programming. Do you have a more concrete example, or an explanation of what you mean by recursion not being seen as practical for problem solving?


This depends a lot on what order you learn things.

If you start with, say, empirically discovered relationships in the regular pentagon and pentagram, then you can figure out that φ (the ratio of the diagonal to the side of a pentagon) must satisfy φ^2 = φ + 1. Proving that this relationship must hold for the regular pentagon is not too hard.

Then start taking powers of φ based on that definition, and you can, if you like, define the Fibonacci numbers as the coefficients F[i] in φ^i = F[i]φ + F[i-1]. Or define the Fibonacci numbers in the conventional way and simply demonstrate that they show up in the above expression.
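That identity is easy to check by machine (a quick sketch, with the convention F[0] = 0, F[1] = 1):

    # track phi^i as the pair (a, b) meaning a + b*phi,
    # reducing with phi^2 = phi + 1
    def phi_power(i):
        a, b = 1, 0                  # phi^0 = 1
        for _ in range(i):
            a, b = b, a + b          # (a + b*phi)*phi = b + (a + b)*phi
        return a, b                  # phi^i = F[i-1] + F[i]*phi

    assert phi_power(10) == (34, 55)  # F[9] = 34, F[10] = 55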


The number of off-by-one errors I see in "professional" code suggests to me that professionals underestimate the importance of mathematical reasoning to programming.


I don't think off-by-one errors are related to mathematical reasoning; all programmers know how to count. They're more likely caused by rushed reasoning, when programmers are pushed to get something out without time to think about all the corner cases.


I think this is a good example of what I'm talking about with people overestimating the importance of mathematics to programming. Taking more math classes won't help you avoid off-by-one errors.


No, but learning how to program correctly using mathematical reasoning will. See for example A Discipline of Programming. If you derive a loop using those or similar techniques, you will never have an off-by-one error, among other bugs. And once you achieve proficiency it's no more difficult than basic arithmetic.
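For example, a loop annotated with its invariant (a sketch in that spirit, not a quote from the book):

    def fib_iter(n):
        # Invariant: after k iterations, (a, b) == (F(k), F(k+1)).
        a, b = 0, 1                  # k == 0: (F(0), F(1))
        for _ in range(n):
            a, b = b, a + b          # (F(k), F(k+1)) -> (F(k+1), F(k+2))
        return a                     # exits with k == n, so a == F(n)

The bounds and the return value are forced by the invariant, which is exactly where the off-by-one would otherwise creep in.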

There may be a lack of clarity here though. I am referring to the actual technical skill of writing computer programs. If by "programming" you mean "the activities you need to do to have a job as a software engineer" then I can attest that no actual ability to write programs is required whatsoever, or at least the walking proof by construction who showed me this provided no evidence thereof and still maintains their position. Which is sad, since even people with no real idea of how to write programs can be somewhat productive via copy-paste cargo cult programming.


You don't need any special reasoning to know how to write a loop. This is just a basic thing people get taught.

What does it mean to use mathematical reasoning to derive a loop, and what error does this derivation prevent?


Yes, I am a big fan of the “golden integers” and “golden rationals”. They are very useful when trying to work out any kind of metrical geometry questions w/r/t the symmetry system of the pentagon, icosahedron, 120-cell, etc.

I think a lot more students should spend significant time with Zometool construction toys while in school, and should work out a lot of the arithmetic of golden integers for themselves. (If you are a math teacher, I recommend getting some of these.)

If you use Wildberger’s “rational trigonometry”, then it is possible to answer essentially every metrical geometry question about configurations of Zometool using golden rational arithmetic without ever resorting to approximations. The squared length between every pair of Zome-constructible points is a golden integer and the squared sine of every Zome-constructible angle is a golden rational number.

I’m just pointing out that “It's possible to use Binet's formula, too [using golden integers]” is not actually any different in practice than the matrix computation.


Of course -- representation theory guarantees this. It's just that matrices obfuscate the underlying algebra.


It’s even possible to compute arbitrary Fibonacci numbers exactly using Binet’s formula with arbitrary-precision floating point arithmetic.

It’s not the fastest possible algorithm, but it’s fun. I implemented it here: https://github.com/robinhouston/fibonacci-float-calc

The implementation is pretty simple, using GMP. This is the whole function:

  /**
   * Compute the nth Fibonacci number with arbitrary-precision
   * floating-point arithmetic, using the formula fib(n) ≈ phi ^ n / sqrt(5)
   * (which is exact if you round the rhs to the nearest whole number).
   */
  void fib_float(mpf_t *result, long n)
  {
      mpf_t sqrt5, phi;
      mp_bitcnt_t bitcnt;
  
      /* We need about n lg(phi) bits of precision */
      bitcnt = n * 7 / 10;
  
      /* Initialise sqrt5 to the square root of 5 */
      mpf_init2(sqrt5, bitcnt);
      mpf_sqrt_ui(sqrt5, 5);
  
      /* Initialise phi to the Golden Ratio */
      mpf_init2(phi, bitcnt);
      mpf_set(phi, sqrt5);
      mpf_add_ui(phi, phi, 1);
      mpf_div_2exp(phi, phi, 1);
  
      /* Compute phi ^ n / sqrt5 */
      mpf_init2(*result, bitcnt);
      mpf_pow_ui(*result, phi, n);
      mpf_div(*result, *result, sqrt5);
  
      /* Dispose of the temporary variables */
      mpf_clear(sqrt5);
      mpf_clear(phi);
  }
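Roughly the same idea in Python with the standard decimal module, for anyone who wants to play without GMP (my sketch, not from the linked repo):

    from decimal import Decimal, getcontext, ROUND_HALF_EVEN

    def fib_float(n):
        # need about n*log10(phi) ~ 0.209*n decimal digits, plus slack
        getcontext().prec = n * 21 // 100 + 10
        sqrt5 = Decimal(5).sqrt()
        phi = (1 + sqrt5) / 2
        # phi**n / sqrt5 rounds to exactly fib(n)
        return int((phi**n / sqrt5).to_integral_value(ROUND_HALF_EVEN))

    print(fib_float(100))   # 354224848179261915075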


You can do it with just integers (i.e., in ℤ[φ]) by noting that φ^n = φFib(n) + Fib(n-1). Equivalently, compute X^n in ℤ[X] / (X^2 - X - 1). I have an article on my blog, but it's unfortunately down right now because of some GitHub issue with changing my account to a free one.
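A minimal sketch of that, with integer pairs (a, b) standing for a + b·x and the reduction x^2 = x + 1:

    def fib(n):
        def mul(p, q):                  # multiply mod x^2 - x - 1
            a, b = p
            c, d = q
            return (a*c + b*d, a*d + b*c + b*d)
        result, base = (1, 0), (0, 1)   # 1 and x
        while n:                        # square-and-multiply
            if n & 1:
                result = mul(result, base)
            base = mul(base, base)
            n >>= 1
        return result[1]                # x^n = Fib(n-1) + Fib(n)*x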


Since ℚ(φ) = ℚ(√5), an alternative implementation is to use numbers of the form a + b√5, where a and b are rationals. This can be a nice stepping stone, since people are more familiar with manipulating square roots than they are with symbols which happen to obey an equation. It's also a nice exercise to try to match up these two implementations of the same number field.
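In that representation the only change is the reduction rule (√5)^2 = 5. A tiny sketch of the correspondence:

    from fractions import Fraction

    # numbers a + b*sqrt(5) as (a, b) pairs of rationals
    def mul(p, q):
        a, b = p
        c, d = q
        return (a*c + 5*b*d, a*d + b*c)

    half = Fraction(1, 2)
    phi = (half, half)                 # phi = 1/2 + (1/2)*sqrt(5)
    assert mul(phi, phi) == (phi[0] + 1, phi[1])   # phi^2 == phi + 1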


Also true. I think I did ℚ(φ) originally because Binet's formula is less obvious in the a + b√5 basis.


In practice, a + b√5 is a lot less pleasant to work with, especially by hand.


With doubles in Python, it's correct for the first 72 values.

    fib = lambda n: int(((((1+5**.5)/2)**n)-(((1-5**.5)/2)**n))/5**.5)


If you only need the first 72 values, the speed doesn't really matter. You can just build the table once and do lookups.


How are you going to obtain that many digits of the golden ratio to be able to compute a final answer?


You aren't, that was the point. The calculation is purely algebraic. Fib(n) = (φ^n - (1-φ)^n) / √5, and since √5 = -1 + 2φ, we can multiply the numerator and denominator by -1 + 2φ to get (φ^n - (1-φ)^n)(-1 + 2φ) / 5.

Now expand this as a polynomial and use the identity φ^2 = φ + 1 to rewrite all the terms of degree > 1. We're left with an expression A + Bφ, and since the Fibonacci numbers are integers, B will be zero, so the answer is A.


Ah, sorry, my question was rhetorical and I didn't expect an answer. I was just highlighting the fact that this simple-looking formula is not a good way to calculate the Fibonacci numbers in practice, because you are pushing the complexity of the calculation into computing the golden ratio to very high precision.


It seems rather weird to use an explicit cache for the dynamic programming exponentiation.

When computing x^n, n is either even or odd. If it's even, then the result is (x^(n/2))^2, else it's odd and the result is x * (x^(n/2))^2. i.e.

  def square(y):
    return y * y

  def pow(x, n):
    if n == 0:
      return 1
    if (n & 1) == 1:
      return x * square(pow(x, n//2))
    else:
      return square(pow(x, n//2))
Does ~ the minimum number of multiplies, no LRU cache required.


For those wondering about that ~: https://en.wikipedia.org/wiki/Addition_chain:

”There is no known algorithm which can calculate a minimal addition chain for a given number with any guarantees of reasonable timing or small memory usage. However, several techniques to calculate relatively short chains exist. One very well known technique to calculate relatively short addition chains is the binary method, similar to exponentiation by squaring.”


That's why I said "~ the minimum" :)

In particular, it does at most the same number of multiplies as the LRU cache version in the OP. (The "~" matters: binary square-and-multiply computes x^15 in 6 multiplications, while the addition chain 1, 2, 3, 6, 12, 15 gets there in 5.)


Here's a fairly slow obfuscated Fibonacci function in Python I wrote a long time ago:

    f=lambda n:(4<<n*(3+n))//((4<<2*n)-(2<<n)-1)&~-(2<<n)
As you can see it's a strictly integer arithmetic closed form. Bonus points to those who can figure out how it works.


Amazing!! I had fun figuring it out. My analysis: https://pastebin.com/8hzSmxBu


You pretty much got it, although the analysis becomes a lot simpler if you consider evaluating the generating function in base b by substituting x = b. I'll use multiples of 10 here for visual understanding:

    >>> import decimal
    >>> decimal.getcontext().prec = 50
    >>> b = decimal.Decimal(10**3)
    >>> b/(b**2 - b - 1)
    Decimal('0.0010010020030050080130210340550891442333776109885996')
For a large enough b we don't have to worry about overflow from the next terms. So then we can shift by k terms, take mod b to get rid of the earlier terms, and floor to get rid of the (infinitely many) later terms:

    >>> k = 6
    >>> b**k * b/(b**2 - b - 1)
    Decimal('1001002003005008.0130210340550891442333776109885996')
             ^^^^^^^^^^^^^    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                mod b                       floor
So for any sufficiently large b we have fib(k) = floor(b^(k+1)/(b^2 - b - 1)) % b. If we let b = 2^(k+c) for a sufficiently large constant c, asymptotics show that this b is large enough for large k, and checking the first couple of numbers shows that b = 2^(k+1) already works.
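That checks out numerically (a quick sketch, 0-indexed so fib(0) = 0):

    def fib(k):
        b = 2 << k                     # b = 2^(k+1)
        return b**(k + 1) // (b*b - b - 1) % b

    assert [fib(k) for k in range(10)] == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]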


There's a more fundamental way to derive the exponential form: linear difference equations are analogous to linear differential equations, so we should look at them in terms of actions on some vector space of functions. Similarly, we should also look at eigenfunctions, which end up being of the form a_i*C_i^n for constants C_i and a_i, with i ranging up to the dimension of the null space of the overall difference operator. You can then substitute that form into the equation to find the C_i, and use the initial conditions to find the a_i.

It's exactly like solving a differential equation.

Another path is to look at formal power series, and a good book is Generatingfunctionology.

Graham, Knuth, and Patashnik's Concrete Mathematics covers the basics very well and goes into much more detail on finite difference calculus.
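The recipe mechanized with sympy (my sketch; r1 and r2 are the two eigenvalues, φ and 1-φ in some order):

    from sympy import symbols, solve, simplify

    n, A, B, x = symbols('n A B x')
    r1, r2 = solve(x**2 - x - 1, x)    # roots of the characteristic equation
    # general solution F(n) = A*r1**n + B*r2**n; fit F(0) = 0, F(1) = 1
    consts = solve([A + B, A*r1 + B*r2 - 1], [A, B])
    binet = consts[A]*r1**n + consts[B]*r2**n
    assert simplify(binet.subs(n, 10)) == 55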


>There's a more fundamental way...

I'd wager to say there's nothing more fundamental about it. You are still finding eigenvalues and eigenvectors of a linear operator. The matrix is exactly the same.


It's more fundamental since you're working with the underlying vector space and linear operator vs representing it as a matrix over R^n, running a decomposition, then converting back. No need to change domains, but yes they are isomorphic (which is the point).



doesn't even mention the closed form O(1) solution..


Closed form O(1) solution? Using Binet's formula, if that's what you mean, is not O(1), for a number of reasons. Firstly, exponentiation is not constant-time. Secondly, how are you going to do it to the necessary precision? Are you really going to use some sort of floating-point number for this? Of course, you could use an exact representation, using a pair of integers (or rational numbers, depending on what basis you use)... but that will be essentially the same as the usual matrix solution.

Remember, the n'th Fibonacci number has a number of bits linear in n, and you need at least the time to write all those bits out. That can't be done in O(1), no matter how you do the computation!


It’s cheating (or at best misleading) to allow arbitrarily expensive operations to be considered “constant time”.

Raising a very-high-precision approximation of an irrational number to a large integer power will get slower and slower as you handle larger numbers.

You might just as well call the matrix version O(1).


Not sure if you noticed, but that's literally what they did at the end of the "Eigenvalue Solution" section, which reads "So there you have it – a O(1) algorithm for any Fibonacci number."

Although they do note that eigen_fib() completely breaks down once the numbers involved are larger than fit in a 64 bit integer.


The previous commenter was criticizing https://www.nayuki.io/page/fast-fibonacci-algorithms (for not mentioning the supposedly O(1) solution) not http://www.oranlooney.com/post/fibonacci/



Eh, it basically derives it, though in matrix form.


>In our case, the problem is no longer to calculate Fibonacci numbers – the problem is now to find a way to multiply large integers together efficiently. As far as I can tell, GMP is already state-of-the-art when it comes to that, and tends to come out ahead on most benchmarks.

And actually, the best known practical method of multiplying large integers is an FFT over the integers mod 2^n + 1 (Schönhage–Strassen, which is what GMP uses for its largest operands). So then your task is changed yet again to optimizing a modular FFT algorithm...


Multiplication on the usual positional (digit) representation is inefficient. FFT multiplication works by converting the numbers to and from a representation where multiplication is O(n) and embarrassingly parallel (convolution in one domain becomes pointwise multiplication in the other).

If I'm not mistaken addition is also trivial in this representation, and since no other operations are required, the entire computation can be done in the FFT space.

This definitely works in a residue number system, where a Chinese remainder transform is used instead of an FFT. (The CRT and the FFT are algebraically related.)

In short, you can create a massive parallel cluster of computers computing parts of the result without interaction, and then in the end combine the results using a single huge pass of FFT.
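The convolution-vs-pointwise idea in miniature (a toy sketch with numpy's floating-point FFT over base-10 digits; real bignum libraries use exact modular transforms instead):

    import numpy as np

    def fft_mul(a, b):
        da = [int(d) for d in str(a)][::-1]   # little-endian digits
        db = [int(d) for d in str(b)][::-1]
        size = 1
        while size < len(da) + len(db):
            size <<= 1
        fa, fb = np.fft.rfft(da, size), np.fft.rfft(db, size)
        conv = np.rint(np.fft.irfft(fa * fb, size)).astype(int)
        out, carry = 0, 0
        for i, c in enumerate(conv):          # propagate carries
            carry += int(c)
            out += (carry % 10) * 10**i
            carry //= 10
        return out + carry * 10**len(conv)

    assert fft_mul(12345, 6789) == 12345 * 6789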


This article was a lot more involved than I was expecting. It was quite refreshing to see it go through pretty much every trick method I have ever heard of, and then keep going for a long time. Well done!


Something that people often seem to miss when talking about this sort of thing is that any correct exact algorithm to compute Fibonacci numbers must use exponential time and space — simply because the size of the correct exact output is exponential in the size of the input.


OK, I see this response is getting downvoted. Here's a PSA on arithmetic.

The function n -> 2^n takes input of size O(log(n)) and returns output of size O(n).

Of course, n = 2^(log n), so our function takes exponential time/space in input size to simply write the output.

The Fibonacci function is like that.

Of course, discussions of complexity become cumbersome if you don't specify variables; that's why we talk about complexity in n of computing Fib(n).

Potato, potahto anyway.


No, the size of the output is linear in the size of the input. (The Fibonacci numbers themselves grow exponentially, but the size of the output goes like the log of the number itself.)


* linear in input, not size of the input :)


You're right. I stand corrected.


You need O(log n) bits to write down the input, so the output size is indeed exponential in the input size.

Size of input: O(log n)

Size of output: O(log(c^n)) = O(n)


This is one of my favorite forms of Fibonacci, because it unwinds the recurrence relation without having to apply some kind of master theorem to it. Rather, it describes the recurrence in a way that directly allows square-and-multiply.


The article claims the scalar exponential solution is O(1). This is incorrect, because exponentiation of a scalar is still O(log N).


What N?

Floating point operations might be limited in precision (and range, to some extent), but pretty much any you'll normally encounter will be O(1). Unless you require arbitrary precision.


By exactly the same reasoning, every algorithm is O(1) because your computer is a finite object and so the size of the data is bounded.


For students learning introductory algorithms, I have implementations of 12 simple Fibonacci number computation algorithms at https://github.com/alidasdan/fibonacci-number-algorithms . A related paper is at https://arxiv.org/abs/1803.07199 . Hope they can be useful.


It may not seem like it, but you can generalise this to any matrix. Compute the minimal polynomial of the matrix, that is, the lowest-degree polynomial P such that P(A) = 0 and the leading coefficient is 1. Then, if you want to compute A^n, first compute the polynomial x^n mod P, then plug in A.

In this case P(x) = x^2 - x - 1, and any polynomial Q looks like ax + b mod P, so we only have to keep track of two numbers (a,b), instead of the four numbers in the matrix A^n.

In general this allows you to reduce the k^2 numbers to k numbers.
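A sketch of the general recipe with sympy (my code; it uses the characteristic polynomial, which the minimal polynomial divides, so the same reduction applies):

    from sympy import Matrix, Poly, symbols, eye, zeros

    def matrix_power(M, n):
        x = symbols('x')
        P = Poly(M.charpoly(x).as_expr(), x)
        r, base = Poly(1, x), Poly(x, x)
        while n:                              # square-and-multiply mod P
            if n & 1:
                r = (r * base).rem(P)
            base = (base * base).rem(P)
            n >>= 1
        k = M.rows                            # evaluate r at M by Horner
        result = zeros(k, k)
        for c in r.all_coeffs():
            result = result * M + c * eye(k)
        return result

    A = Matrix([[1, 1], [1, 0]])              # the Fibonacci matrix
    assert matrix_power(A, 10)[0, 1] == 55    # F(10)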


The fastest way to compute a Fibonacci number is to simply look it up on the Internet -- not kidding. There was a coding challenge that I did a couple of months back which required the cumulative sum of powers of two and the Fibonacci sequence, and I just did that. We can talk about faster, general algorithms, but most of the time we are bounded by resources. For example, for the nth Fibonacci number, n will be bounded by some constant. So why not look it up? It's good for playing with theory, though.

Edit: Fibonacci may be a specific example, but I wondered how much wasted computation has been spent on calculating the same problem on the same input.


If you're wondering about wasted computation, consider how much energy is expended when you look up a Fibonacci number on the internet.


I believe Binet's formula is the fastest: http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci...


The article mentions it.

> There exist several closed-form solutions to Fibonacci sequence which gives us the false hope that there might be an O(1) solution. Unfortunately they all turn out to be non-optimal if you want an exact solution for a large n.



