φ is the golden ratio and ℚ(φ) is the set of all numbers of the form a + bφ where a,b are rational numbers.
So if you create a class like PhiRational, where PhiRational(a,b) represents a + bφ, then Binet's formula

    Fib(n) = (φ**n - (1-φ)**n) / √5

becomes

    Fib(n) = (PhiRational(0,1)**n - PhiRational(1,-1)**n) / PhiRational(-1,2)
Remember φ = (1 + √5)/2, so that the √5 in the denominator can be written as -1 + 2φ (i.e., PhiRational(-1,2)).
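A minimal sketch of such a class (names and details are my own; it reduces products with φ² = φ + 1, divides via the conjugate, and stays in exact rational arithmetic throughout):

```python
from fractions import Fraction

class PhiRational:
    """a + b*phi with rational a, b; products reduced via phi**2 == phi + 1."""
    def __init__(self, a, b):
        self.a, self.b = Fraction(a), Fraction(b)

    def __sub__(self, other):
        return PhiRational(self.a - other.a, self.b - other.b)

    def __mul__(self, other):
        # (a + b*phi)(c + d*phi) = ac + (ad + bc)*phi + bd*phi**2
        #                        = (ac + bd) + (ad + bc + bd)*phi
        a, b, c, d = self.a, self.b, other.a, other.b
        return PhiRational(a*c + b*d, a*d + b*c + b*d)

    def __truediv__(self, other):
        # the conjugate of c + d*phi is (c + d) - d*phi; their product
        # is the purely rational norm c**2 + c*d - d**2
        c, d = other.a, other.b
        norm = c*c + c*d - d*d
        return self * PhiRational((c + d) / norm, -d / norm)

    def __pow__(self, n):
        result, base = PhiRational(1, 0), self
        while n:                      # exponentiation by squaring
            if n & 1:
                result = result * base
            base = base * base
            n >>= 1
        return result

def fib(n):
    phi, psi = PhiRational(0, 1), PhiRational(1, -1)   # phi and 1 - phi
    x = (phi**n - psi**n) / PhiRational(-1, 2)         # divide by sqrt(5)
    assert x.b == 0        # the phi component cancels for integer results
    return int(x.a)
```

e.g. fib(10) returns 55, with no floating point anywhere.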
When teaching I like showing all these solutions because it throws a wrench in (beginning) students' ideas about how shallow/deep simple exercises can be, and the relationship between math, recursion, efficiency, etc.
A lot of beginning students see the naïve recursive solution as mathematical-but-inefficient and draw the conclusion that the math is nice, but ultimately isn't practical. Then you show them an even-faster implementation using more math and they're pretty surprised.
For anyone who wants to play with “golden integer” or “golden rational” numbers, try https://beta.observablehq.com/@jrus/zome-arithmetic, or see a bunch of concrete uses in notebooks at https://beta.observablehq.com/@vorth/
The (pedagogical) advantages of ℚ(φ) are that it seems like less of a trick to the student and Binet's formula is more apparent. It also gives them more surface area to explore.
A lot of students believe that mathematical know-how and practical problem-solving are at odds (especially since recursion is inherently "mathematical" to most beginners). IME exercises like this help prevent that false dichotomy from forming.
So, algorithmically equivalent, pedagogically distinct.
(BTW, not saying that the matrix implementation is bad or anything, it's the contrast in appearance and equivalence in computation together that makes for the learning.)
That seems very counter-intuitive to me, as the matrix form is a direct expression of the Fibonacci function's definition, and Binet's formula follows from the eigenvalues and eigenvectors of the matrix. I guess this ties in to the students you're teaching not liking math?
> A lot of students believe that mathematical know-how and practical problem-solving are at odds (especially since recursion is inherently "mathematical" to most beginners).
This is also counter-intuitive to me. I was under the impression that beginners tend to overestimate the importance of mathematical skill to programming. Do you have a more concrete example, or an explanation of what you mean by recursion not being seen as practical for problem solving?
If you start with, say, empirically discovered relationships in the regular pentagon and pentagram, then you can figure out φ (ratio of the diagonal to the side of a pentagon) must satisfy φ^2 = φ + 1. Proving that this relationship must hold for the regular pentagon is not too hard.
Then start taking powers of φ based on that definition, and you can if you like define the Fibonacci numbers as the coefficient F[i] in φ^i = F[i]φ + F[i-1]. Or define the Fibonacci numbers in the conventional way and simply demonstrate that they show up in the above expression.
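That reduction is a couple of lines of Python, if you want to watch the coefficients appear (a throwaway sketch):

```python
# phi**2 = phi + 1, so if phi**i = F*phi + G then
# phi**(i+1) = F*phi**2 + G*phi = (F + G)*phi + F
F, G = 1, 0                       # phi**1 = 1*phi + 0
for i in range(1, 11):
    print(f"phi^{i} = {F}*phi + {G}")
    F, G = F + G, F
```

The printed coefficients walk up the Fibonacci sequence, ending with phi^10 = 55*phi + 34.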
There may be a lack of clarity here though. I am referring to the actual technical skill of writing computer programs. If by "programming" you mean "the activities you need to do to have a job as a software engineer" then I can attest that no actual ability to write programs is required whatsoever, or at least the walking proof by construction who showed me this provided no evidence thereof and still maintains their position. Which is sad, since even people with no real idea of how to write programs can be somewhat productive via copy-paste cargo cult programming.
What does it mean to use mathematical reasoning to derive a loop, and what error does this derivation prevent?
I think a lot more students should spend significant time with Zometool construction toys while in school, and should work out a lot of the arithmetic of golden integers for themselves. (If you are a math teacher, I recommend getting some of these.)
If you use Wildberger’s “rational trigonometry”, then it is possible to answer essentially every metrical geometry question about configurations of Zometool using golden rational arithmetic without ever resorting to approximations. The squared length between every pair of Zome-constructible points is a golden integer and the squared sine of every Zome-constructible angle is a golden rational number.
I’m just pointing out that “It's possible to use Binet's formula, too [using golden integers]” is not actually any different in practice than the matrix computation.
It’s not the fastest possible algorithm, but it’s fun. I implemented it here: https://github.com/robinhouston/fibonacci-float-calc
The implementation is pretty simple, using GMP. This is the whole function:

    /*
     * Compute the nth Fibonacci number with arbitrary-precision
     * floating-point arithmetic, using the formula fib(n) ≈ phi ^ n / sqrt(5)
     * (which is exact if you round the rhs to the nearest whole number).
     */
    void fib_float(mpf_t *result, long n)
    {
        mpf_t sqrt5, phi;

        /* We need about n lg(phi) bits of precision */
        mp_bitcnt_t bitcnt = n * 7 / 10;
        mpf_init2(sqrt5, bitcnt);
        mpf_init2(phi, bitcnt);
        mpf_set_prec(*result, bitcnt);

        /* Initialise sqrt5 to the square root of 5 */
        mpf_sqrt_ui(sqrt5, 5);

        /* Initialise phi to the Golden Ratio, (1 + sqrt 5) / 2 */
        mpf_set(phi, sqrt5);
        mpf_add_ui(phi, phi, 1);
        mpf_div_2exp(phi, phi, 1);

        /* Compute phi ^ n / sqrt5 */
        mpf_pow_ui(*result, phi, n);
        mpf_div(*result, *result, sqrt5);

        /* Dispose of the temporary variables */
        mpf_clear(sqrt5);
        mpf_clear(phi);
    }
fib = lambda n: int(((((1+5**.5)/2)**n)-(((1-5**.5)/2)**n))/5**.5)
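That one-liner is exact only while everything fits in a double; it has to drift somewhere below n = 80, since F(79) already exceeds 2**53. A quick check (fib_exact is my reference implementation, and the exact crossover depends on the platform's libm):

```python
fib = lambda n: int(((((1+5**.5)/2)**n)-(((1-5**.5)/2)**n))/5**.5)

def fib_exact(n):
    # plain iterative Fibonacci as the exact-integer reference
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# walk forward until the float formula first disagrees
n = 0
while fib(n) == fib_exact(n):
    n += 1
print("float formula first disagrees at n =", n)
```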
Now expand this as a polynomial, use the identity φ^2 = φ+1 to rewrite all the terms of degree > 1, and we have an expression A+Bφ. Since the Fibonacci numbers are integers, B will be zero, so the answer is A.
When computing x^n, n is either even or odd. If it's even, then the result is (x^(n/2))^2, else it's odd and the result is x * (x^(n/2))^2. i.e.
def square(x):
    return x * x

def pow(x, n):
    if n == 0:
        return 1
    if (n & 1) == 1:
        return x * square(pow(x, n // 2))
    return square(pow(x, n // 2))
“There is no known algorithm which can calculate a minimal addition chain for a given number with any guarantees of reasonable timing or small memory usage. However, several techniques to calculate relatively short chains exist. One very well known technique to calculate relatively short addition chains is the binary method, similar to exponentiation by squaring.”
In particular, it does at most the same number of multiplies as the LRU-cache version in the OP.
>>> import decimal
>>> decimal.getcontext().prec = 50
>>> b = decimal.Decimal(10**3)
>>> b/(b**2 - b - 1)
>>> k = 6
>>> b**k * b/(b**2 - b - 1)
Take the floor of that and reduce it mod b to read off Fib(k). This works because b/(b**2 - b - 1) is the generating function x/(1 - x - x**2) evaluated at x = 1/b, so its base-b digits come in groups Fib(1), Fib(2), Fib(3), ... (until the Fibonacci numbers reach b and the digits start to carry).
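A runnable version of that session (fib_from_digits is my name for the trick; with b = 1000 it is good for k up to about 15, after which Fib(17) = 1597 starts carrying between digit groups):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50
b = Decimal(10**3)

def fib_from_digits(k):
    # b/(b**2 - b - 1) is the generating function x/(1 - x - x**2)
    # at x = 1/b, i.e. 0.001 001 002 003 005 008 013 ... in base 1000.
    # Shift k digit-groups left, truncate, and keep the last group.
    return int(b**k * b / (b**2 - b - 1)) % 1000

print(fib_from_digits(6))   # → 8
```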
It's exactly like solving a differential equation.
Another path is to look at formal power series; a good book is Wilf's Generatingfunctionology.
Graham, Knuth, and Patashnik's Concrete Mathematics covers the basics very well and goes into much more detail on finite difference calculus.
I'd wager to say there's nothing more fundamental about it. You are still finding eigenvalues and eigenvectors of a linear operator. The matrix is exactly the same.
Remember, the n'th Fibonacci number has a number of bits linear in n, and you need at least the time to write all those bits out. That can't be done in O(1), no matter how you do the computation!
Raising a very-high-precision approximation of an irrational number to a large integer power will get slower and slower as you handle larger numbers.
You might just as well call the matrix version O(1)
Although they do note that eigen_fib() completely breaks down once the numbers involved are larger than fit in a 64 bit integer.
And actually, the best known method of multiplying large integers is achieved with an FFT over the integers mod 2^n (which is what GMP does). So then your task is changed yet again to optimizing a modular FFT algorithm...
If I'm not mistaken addition is also trivial in this representation, and since no other operations are required, the entire computation can be done in the FFT space.
This definitely works in a residue number system, where a Chinese remainder transform is used instead of an FFT. (CRT and FFT are algebraically related.)
In short, you can create a massive parallel cluster of computers computing parts of the result without interaction, and then in the end combine the results using a single huge pass of FFT.
The function n -> 2^n takes input of size O(log(n)) and returns output of size O(n).
Of course, n = 2^(log n), so our function takes exponential time/space in input size to simply write the output.
The Fibonacci function is like that.
Of course, discussions of complexity become cumbersome if you don't specify variables; that's why we talk about complexity in n of computing Fib(n).
Potato, potahto, anyway.
Size of input: O(log n)
Size of output: O(log(c^n)) = O(n)
Floating point operations might be limited in precision (and range, to some extent), but pretty much any you'll normally encounter will be O(1). Unless you require arbitrary precision.
In this case P(x) = x^2 - x - 1, and any polynomial Q looks like ax + b mod P, so we only have to keep track of two numbers (a,b), instead of the four numbers in the matrix A^n.
In general this allows you to reduce the k^2 numbers to k numbers.
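For the Fibonacci case that looks like the following sketch (my names): pairs (a, b) stand for a + b·x modulo x**2 - x - 1, and x**n reduces to F(n-1) + F(n)·x.

```python
def polymul(p, q):
    # (a + b*x)(c + d*x) = ac + (ad + bc)*x + bd*x**2, and
    # x**2 = x + 1 mod P, so collect: (ac + bd) + (ad + bc + bd)*x
    a, b = p
    c, d = q
    return (a * c + b * d, a * d + b * c + b * d)

def fib(n):
    result, base = (1, 0), (0, 1)     # the polynomials 1 and x
    while n:                          # square-and-multiply on x**n
        if n & 1:
            result = polymul(result, base)
        base = polymul(base, base)
        n >>= 1
    return result[1]                  # x**n = F(n-1) + F(n)*x
```

Two numbers per step instead of the four entries of A^n, which is exactly the k^2-to-k saving described above.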
Edit: Fibonacci may be a specific example, but I wondered how much wasted computation has been spent on calculating the same problem on the same input.
> There exist several closed-form solutions to the Fibonacci sequence, which give us the
> false hope that there might be an O(1) solution. Unfortunately they all turn out to
> be non-optimal if you want an exact solution for a large n.