An integer formula for Fibonacci numbers | 297 points by 0xmohit on Apr 24, 2016 | 56 comments

 Nice. When I was a teenage self-taught programmer, an article like this one, but about the superiority of quicksort to insertion sort, was the trigger that got me to understand that thinking mathematically about algorithms could lead to a kind of understanding that transcended mere coding (although I couldn't have put it in those words at the time). It was quite an epiphany.

 I can imagine, today, a young enthusiast who has learned Python and maybe done a few Project Euler exercises stumbling upon this post, being amazed that you could generate Fibonacci numbers that way, and thinking that maybe there is something interesting about that "generating functions" thing.

 If that's you reading this: it's dangerous to go alone. Take this: https://www.math.upenn.edu/~wilf/DownldGF.html
 And then after Wilf, read Flajolet and Sedgewick: http://algo.inria.fr/flajolet/Publications/book.pdf
 Here is another method I wrote where you compute in the Q[√5] field. It can be modified to work over Z[√5] as well:
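The commenter's code isn't shown here, but the idea can be sketched as follows (an illustrative sketch, not the commenter's actual implementation): represent elements of Q[√5] exactly as a + b·√5 with rational a, b, then raise ϕ = (1 + √5)/2 to the nth power by repeated squaring. Since ϕ^n and ψ^n are conjugates, Binet's formula reduces to reading off the √5 coefficient.

```python
from fractions import Fraction

class QSqrt5:
    """Exact element a + b*sqrt(5) of Q[sqrt(5)] (a sketch, not the
    commenter's actual code)."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, o):
        return QSqrt5(self.a + o.a, self.b + o.b)
    def __sub__(self, o):
        return QSqrt5(self.a - o.a, self.b - o.b)
    def __mul__(self, o):
        # (a + b r)(c + d r) = (ac + 5bd) + (ad + bc) r, since r^2 = 5
        return QSqrt5(self.a * o.a + 5 * self.b * o.b,
                      self.a * o.b + self.b * o.a)
    def __pow__(self, n):
        # square-and-multiply: O(log n) multiplications
        result, base = QSqrt5(1), self
        while n:
            if n & 1:
                result = result * base
            base = base * base
            n >>= 1
        return result

def fib(n):
    # Binet: Fib(n) = (phi^n - psi^n) / sqrt(5). Since psi^n is the
    # conjugate of phi^n = a + b*sqrt(5), the difference is 2b*sqrt(5),
    # so Fib(n) = 2b exactly (Fib(0) = 0 convention).
    phi = QSqrt5(Fraction(1, 2), Fraction(1, 2))
    return int(2 * (phi ** n).b)
```

The whole computation stays in exact rational arithmetic, so there is no rounding to analyze.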
 I implemented the same in Haskell a while ago: https://github.com/xyzzyz/FibBinet/blob/master/FibBinet.hs Together with yours, the two make an interesting comparison of how a similar problem is solved in different languages.
 In Haskell you can use Data.Real.Constructible (http://hackage.haskell.org/package/constructible) to compute exactly with arbitrary square roots.
 Why did you define both negate and (-)?
 I don't know. I probably shouldn't have. I accept pull requests.
 def fib(n): return (4 << n*(3+n)) // ((4 << 2*n) - (2 << n) - 1) & ((2 << n) - 1)

 Looks like there's a small bug in this code:

 fib(0) = 0
 fib(1) = 1
 fib(2) = 2
 fib(3) = 3
 fib(4) = 5

 etc. But there should be two values at the beginning that evaluate to 1. (Whether it's fib(0) and fib(1), vs. fib(1) and fib(2), is just a matter of convention. But there should definitely be two of them.)
 A couple of commenters have mentioned this, but with his definition of Fib that's given right below in the 'overview' section, the function works:

 Fib(0) = 1
 Fib(1) = 1
 Fib(n) = Fib(n−1) + Fib(n−2)

 So n has a constraint of n > 1, for which the function works.
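This is easy to check directly: compare the article's one-liner against a plain iterative computation of the same Fib(0) = Fib(1) = 1 recurrence (a quick sanity-check sketch, not from the article):

```python
def fib_formula(n):
    # the one-liner from the article
    return (4 << n*(3+n)) // ((4 << 2*n) - (2 << n) - 1) & ((2 << n) - 1)

def fib_rec(n):
    # the article's recurrence: Fib(0) = 1, Fib(1) = 1
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# the two agree for n > 1 (and also n = 1); only fib_formula(0) differs
assert all(fib_formula(n) == fib_rec(n) for n in range(1, 20))
```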
 Another obscure way to generate the Fibonacci sequence: compute it on a hand-cranked mechanical computer: http://www.chrisfenton.com/the-turbo-entabulator/
 Nice. A cut-down version using base 10:

 X = 10**3
 print(X**16 // (X*X - X - 1))  # 1001002003005008013021034055089144233377610

 It's like a sideways-addition version of the standard Haskell lazy list: fib = 0 : 1 : zipWith (+) fib (tail fib)
 With n*(3+n) on the right side of a left shift, you're only moving the iteration into whatever bignum library is executing that monster division.
 Yep, he explains that already in the post.
 Nice post. I never really thought about evaluating a z-transform at a particular value, so I tied together some of the various solutions with derivations and greater generality here:
 He says that this is an inefficient way to generate the nth Fibonacci number, but since it actually gives you the first n Fibonacci numbers, that's not the fairest comparison. (Especially since the fastest methods for finding the nth Fibonacci number don't generate all the intermediates.) Just for curiosity's sake, how does using this formula to generate all of the first n Fibonacci numbers compare to other ways of doing that? (You get all n with the same formula by replacing the mod 2^(n+1) (or reading the last n+1 bits) with splitting the whole generated integer into (n+1)-bit chunks.)
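A sketch of that chunk-splitting idea (hypothetical helper `first_fibs`, not from the article; this uses the standard Fib(0) = 0, Fib(1) = 1 convention): evaluating the generating function x/(1 - x - x^2) at x = 2^-(n+1) means one big division packs Fib(1)..Fib(n) into consecutive (n+1)-bit chunks.

```python
def first_fibs(n):
    # Fib(k) < 2^(n+1) for k <= n, so (n+1)-bit chunks never overflow,
    # and the truncated tail of the series contributes less than 1.
    b = n + 1
    B = 1 << b
    q = B ** (n + 1) // (B * B - B - 1)
    # q = Fib(1)*B^(n-1) + Fib(2)*B^(n-2) + ... + Fib(n)*B^0
    out = []
    for _ in range(n):
        out.append(q & (B - 1))   # lowest chunk is the largest remaining Fib
        q >>= b
    return out[::-1]              # [Fib(1), ..., Fib(n)]
```

Timing this against n iterated bignum additions would answer the commenter's question empirically.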
 The difference-equation approach (computationally fastest) can be used to derive an elegant closed-form solution that relates to the golden ratio:

 F(n) = ((1+√5)^n − (1−√5)^n) / (2^n √5)

 The derivation is pretty accessible, and handy if you want to show off in coding tests :)
 That's given early in the article, but requires arbitrary-precision arithmetic:

 Another method is to find a closed form for the solution of the recurrence relation. This leads to the real-valued formula: Fib(n) = (ϕ^(n+1) − ψ^(n+1)) / √5, where ϕ = (1+√5)/2 and ψ = (1−√5)/2. The practical flaw in this method is that it requires arbitrary-precision real-valued arithmetic, but it works for small n.
 If you add a "round to integer" step at the end, it doesn't require arbitrary-precision arithmetic. Careful numerical analysis can tell you how many digits of precision you need to compute fib(n). I would guess the end result would be similar to that of this method (2n bits being more than sufficient), the difference being that this method puts all the digits before the decimal point.

 Also, for large n, you may be able to assume ψ^n = 0 to speed up computations (given |ψ| < 1, its nth power gets vanishingly small).

 But the matrix method is the easiest fast method one can write and prove robust. (It is not fastest because the trivial binary approach doesn't produce optimal addition chains. See https://en.wikipedia.org/wiki/Addition_chain or (dense writing, as is normal in HAKMEM) http://www.inwap.com/pdp10/hbaker/hakmem/recurrence.html)

 Also, there is a very simple method for implementing "isFib": isFib(n) := isSquare(5n^2+4) or isSquare(5n^2−4)
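That "isFib" test is a few lines in code (a minimal sketch of the perfect-square test quoted above):

```python
import math

def is_fib(n):
    # n is a Fibonacci number iff 5n^2 + 4 or 5n^2 - 4 is a perfect square
    def is_square(m):
        if m < 0:
            return False
        r = math.isqrt(m)
        return r * r == m
    return is_square(5 * n * n + 4) or is_square(5 * n * n - 4)
```

Note `math.isqrt` works on arbitrary-size integers, so the test stays exact even for huge n, unlike a floating-point square root.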
 Most fast methods for computing fib(n) come down to some variation of repeated squaring, so the time is dominated by the last multiplication (since the number of digits is doubling each time).
 At first glance, the given integer formula is worse regarding arbitrary precision. The first bracketed term is

 (4 << n*(3+n))

 That is shifting 4 left by roughly n squared bits, requiring more than n^2 binary digits. I think calling this an integer formula is misleading, since in computing, by "integers" we normally mean the machine integers supported by an ALU. This formula, while dealing with integers in the mathematical sense, deals with arbitrary-precision integers (bignums) in the computing sense.
 I'm the author of the blog post. You're right, it requires n^2 binary digits (and I mention this at the end of the article). The formula isn't, and isn't meant to be, a good way to find Fibonacci numbers -- it's more of a fun curiosity.
 Regardless, when I learned this method in my linear algebra class, it inspired me to write a blog post [1] showing the derivation! This is a classic programming question, and it's refreshing to see a closed-form solution.
 That it's the fastest is pretty surprising to me. Both Binet and matrix multiplication require exponentiation to n, which can be done in O(log n) multiplications (not n−1) using a clever recursion (and indeed numpy uses it). 2x2 matrices aren't exactly the hardest to multiply either, but I figured that the floating-point operations in the Binet formula would add enough baggage to make it slower than the matrix method. Edit: I just saw Someone's comment - I guess that explains it.
 No, it doesn't, if I understand the OP's claim correctly.

 I agree with you that, for large enough n, using O(n) steps in the recursion will be slower than doing O(log n) matrix-multiplication steps. The addition chain improves on the binary approach for some values of n, but I don't think it can go below O(log n), because the binary method is fastest for all n that are powers of two (disclaimer: I know almost nothing of addition chains besides what I learned from TAOCP). And of course, finding the fastest addition chain doesn't come cheaply either, if P != NP.

 Hm, maybe I understand the OP's claim: if you want to generate the first n items in the series, the recursive formula will be hard to beat. Even then, it may not be fastest on modern CPUs. https://oeis.org/A000045 gives

 F(n + 12) = 18 F(n + 6) − F(n)

 So six completely independent threads can each compute a sixth of the sequence. Doing that may be faster than trying to parallelize the bignum additions.
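The identity is easy to verify numerically (a quick sketch, with a hypothetical helper `fib_list`):

```python
def fib_list(m):
    # plain iterative Fibonacci, F(0) = 0, F(1) = 1
    fs = [0, 1]
    while len(fs) < m:
        fs.append(fs[-1] + fs[-2])
    return fs

fs = fib_list(50)
# the OEIS identity quoted above: F(n+12) = 18*F(n+6) - F(n)
assert all(fs[n + 12] == 18 * fs[n + 6] - fs[n] for n in range(38))
```

Each of the six threads would iterate this stride-6 recurrence from its own pair of starting values, e.g. thread k computing F(k), F(k+6), F(k+12), ...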
 Or you can do exact arithmetic in Q[sqrt(5)].
 You only need to calculate round(ϕ^n / √5), with ϕ = (1+√5)/2. The other term tends to 0, and thus can be ignored.
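A minimal floating-point sketch of that rounding trick (assuming the standard Fib(0) = 0 convention; double precision holds up to roughly n = 70 before rounding errors bite):

```python
import math

def fib_round(n):
    # Fib(n) = round(phi^n / sqrt(5)); the psi^n term is always < 0.5
    # in magnitude, so rounding absorbs it.
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    return round(phi ** n / sqrt5)
```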
 Nice! Next time I have a job interview, I'll look this up the night before and confound the interviewer when it works!
 I like the matrix-math one mentioned in the post, although I implemented the matrix arithmetic by hand. That provides an opportunity to discuss the optimization involved in performing integer exponentiation, so you aren't depriving an interviewer of an opportunity to have a productive conversation about programming.
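A by-hand version might look like this (a sketch, not the commenter's code): raise [[1,1],[1,0]] to the nth power by square-and-multiply, which is exactly the integer-exponentiation optimization worth discussing in an interview.

```python
def fib_matrix(n):
    # [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]], F(0) = 0 convention
    def mat_mul(a, b):
        return (
            (a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
            (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]),
        )
    result = ((1, 0), (0, 1))   # identity
    base = ((1, 1), (1, 0))
    while n:                    # square-and-multiply: O(log n) matrix products
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]
```

All arithmetic is on Python's bignums, so this stays exact for arbitrarily large n.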
 I like this one as well: Fibonacci(n) = (Φ^n - (1-Φ)^n) / √5 (Where Φ = (1 + √5) / 2)
 This formula is attributed to Binet (1843).See https://math.temple.edu/~reich/Fib/fibo.html for more on the Fibonacci sequence and the golden mean.
 Mathematically they might be the same, but they reduce to different problems in computer science. The OP uses algorithmic approaches, but yours requires finite-precision root solvers (for the calculation of Φ) and some numerical analysis to determine how many bits of √5 are needed for the calculation of Fibonacci(n) to come out correct to the nearest integer.
 You don't need to use a root solver. If you represent numbers as "a + b sqrt(5)" you can do algebra directly in that space.
 I take it you are making a joke? Heh. Otherwise, I'd point out that for any result where you ultimately need to use Fibonacci(n) as an actual integer, you can't represent it as an algebraic combination involving "sqrt(5)".
 No joke! See this thread: https://news.ycombinator.com/item?id=11561336 for some real-life implementations of this technique. Since the Fibonacci numbers are integers, at the end of your computation you'll get a number like "a + 0 sqrt(5)". Then you can just take the "a" part.
 Ah! I see what you're saying. Thanks for the link, that is a clever approach.
 When computing, you can ignore the second term and just round the result to the nearest integer. At least for n larger than 2 or so, that should simplify calculations.
 Then they will not hire you for writing spaghetti code. You can't win! :)
 I assume you're using the phrase "spaghetti code" to mean "hard to understand code." This is not what spaghetti code means.
 Don't be shy then, what does spaghetti code mean exactly?
 Too many gotos, originally: http://www.u.arizona.edu/~rubinson/copyright_violations/Go_T.... More broadly, code lacking structure, hard to compose, or not properly composed of smaller components.
 A tangled control structure. Think lots of jump/goto statements, or mutual recursion through multiple functions: a > b > c > a > c > b... The idea was that a map of the control flow looked like a tangled mess.
 Edit: I forgot to check if there were other responses already. Oops! Never mind. Also, in addition to being late, I was also wrong (or, at best, partially wrong).

 Doesn't it usually at least connote having large amounts of code, often repetitive code? Especially code that should be split into smaller functions? I thought that was the case, but I would not be surprised if it is also used more broadly.
 It's generally the other end, where there is 'not enough' code. Early systems needed to be like this because there was very little memory for your code, and people would often do things like calculated jumps based on some register value. Not to mention the joys of self-modifying code. This all sounds cool until someone actually hands you "Mel's" code: http://www.pbm.com/~lindahl/mel.html

 PS: Beginners, or evil people, would also get into horrid messes by trying to turn something that sort of worked into something that almost worked.
 My recollection is that the closed formula (with phi) is derived from the generating function as well, though the derivation was nigh miraculous when I saw it done.
 It's not miraculous at all, and doesn't need to involve generating functions :-)

 Consider all sequences f(n) that fit the constraint f(n) + f(n+1) = f(n+2), with arbitrary initial values. Such sequences form a vector space, because adding them together or multiplying by a constant doesn't break the constraint. That vector space is two-dimensional, because the two initial values determine the rest of the sequence. Now we just need to find two different sequences that form a basis of that space. All other sequences can be expressed as linear combinations of these two.

 How do we find the two basis sequences? Let's make a lucky guess that they are geometric progressions with some constant c. Then we have c^n + c^(n+1) = c^(n+2), or equivalently 1 + c = c^2. That equation has two solutions, phi (the golden ratio, about 1.618) and psi = -1/phi.

 Going back to the original problem, we need to express the usual Fibonacci sequence as a linear combination of the sequences phi^n and psi^n. We just need to match the two initial values by picking the right coefficients; with the convention that starts 0, 1, they turn out to be 1/sqrt(5) and -1/sqrt(5), giving you the desired closed-form formula.

 The above approach generalizes to any recurrence of the form a * f(n) + b * f(n+1) + c * f(n+2) + ... = f(n+k). All such sequences form a k-dimensional vector space, and you can find k different geometric progressions by solving a polynomial equation of degree k, which has k roots. All other solutions will be linear combinations of these geometric progressions. (There will be complications if some roots coincide, but it can be worked out.)

 To me the whole chain of reasoning feels very natural and almost inevitable. If I didn't know anything about Fibonacci numbers but had a good grasp of popular math, that's how I would approach the problem right away. Does that make sense?
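The derivation above can be checked numerically in a few lines (a sketch): solve 1 + c = c^2, match the initial values 0, 1, and confirm the linear combination reproduces the sequence.

```python
import math

sqrt5 = math.sqrt(5)
phi, psi = (1 + sqrt5) / 2, (1 - sqrt5) / 2   # roots of c^2 = c + 1

# Match initial values: a + b = 0 and a*phi + b*psi = 1
# gives a = 1/sqrt(5), b = -1/sqrt(5).
a, b = 1 / sqrt5, -1 / sqrt5

seq = [round(a * phi**n + b * psi**n) for n in range(10)]
assert seq == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```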
 It does - I remember enough from linear algebra that vector spaces aren't completely foreign, which helps a lot. I think the generating-function method [1] is a case of pulling out a power tool (generating functions) to prove a simple thing in order to demonstrate the power of the tool, whereas your explanation takes the direct route and connects more of the threads about why it's true.

 [1] Found an instance here: http://austinrochford.com/posts/2013-11-01-generating-functi...
 Can anyone explain what's meant by this?

 > Multiplying by x^(n+2) and summing over all n, we get

 Is n the iterating variable, or the domain? What are the possible values of n? Wouldn't "all" values (whichever set that may be) give an infinite sum? I thought that infinite expressions did not obey the same laws as finite ones, and couldn't generally be used with algebraic transformations. I guess I don't know as much math as I thought.
 Disclaimer: I studied these functions a long, long time ago. I barely remember anything.

 At that particular point in the article, n is both. It is a natural number in the domain of Fib, and the iterating variable of the generating function. As you've noticed, the sum is indeed infinite - this does not matter, however, as at no point is the actual sum of a generating-function series important; the whole series mechanism is just a nice conduit for reasoning about the coefficients (subsequent values of Fib, in this case) using the more familiar and developed mathematics of infinite series (divergent or not).

 I remember being fascinated by generating functions in college. I viewed them as a sort of beautiful hack that ties various combinatorial structures to ordinary algebraic operations. This mini-field is called enumerative combinatorics - the only part of discrete mathematics that I've actually enjoyed :)
 Great! Somehow reminds me of the Bailey-Borwein-Plouffe formula for computing the nth digit of pi: https://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%9...
 Is anyone able to show how you get from this line:

 Σ_n Fib(n+2) x^(n+2) = Σ_n Fib(n+1) x^(n+2) + Σ_n Fib(n) x^(n+2)

 to this one:

 F(x) − x − 1 = x(F(x) − 1) + x^2 F(x)

 I haven't managed to figure it out, and a couple of colleagues haven't been able to work it out either.
 Σ_n Fib(n+2) x^(n+2) is F(x) − x − 1 because it's an infinite series of powers of x, except that since the powers are x^(n+2), the terms x^1 (that is, x) and x^0 (that is, 1) never appear, so you need to subtract them from F(x), which is Σ_n Fib(n) x^n. I hope that's clear and explains the rest.
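The identity rearranges to F(x)(1 − x − x^2) = 1, which can be sanity-checked with truncated power series (a sketch using the article's Fib(0) = Fib(1) = 1 convention):

```python
N = 12
fib = [1, 1]
for _ in range(2, N):
    fib.append(fib[-1] + fib[-2])

# multiply the truncated series F(x) by (1 - x - x^2)
prod = [0] * N
for i, c in enumerate(fib):
    for j, d in ((0, 1), (1, -1), (2, -1)):
        if i + j < N:
            prod[i + j] += c * d

# constant term is 1; every other (non-truncated) coefficient cancels,
# because prod[k] = fib[k] - fib[k-1] - fib[k-2] = 0 by the recurrence
assert prod[0] == 1 and all(c == 0 for c in prod[1:N-2])
```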
 Google "fibonacci generating functions" for a more careful derivation. The first hit is: http://austinrochford.com/posts/2013-11-01-generating-functi...
 Hmm, I think there's an edge-condition bug. Starting with fib(0), its values are (0,1,2,3,5,8...) instead of (1,1,2,3,5,8...).
 1 must appear twice in the sequence. Otherwise I agree: it ought to begin at 0.
 The Lucas numbers are more interesting. Instead of starting with 1, 1, which is kind of arbitrary, start with 2, 1. The resulting sequence is exactly round(phi^n) (for n ≥ 2), which is a much stronger link to the golden ratio. And it is found just as often in nature.
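That property is easy to check (a quick sketch; note the n ≥ 2 caveat, since L(0) = 2 and L(1) = 1 while round(phi^0) = 1 and round(phi^1) = 2):

```python
import math

phi = (1 + math.sqrt(5)) / 2
lucas = [2, 1]
for _ in range(2, 30):
    lucas.append(lucas[-1] + lucas[-2])

# L(n) = phi^n + psi^n, and |psi^n| < 0.5 for n >= 2, so rounding works
assert all(round(phi ** n) == lucas[n] for n in range(2, 30))
```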
 It's nice to see something about the Fibonacci numbers that has some real math in it, as opposed to the articles that make it big on proggit every day - the ones functional-programming fanboys think are cool but that really discredit FP in the eyes of everybody else.
