
"The worst algorithm in the world?" - mccutchen
http://bosker.wordpress.com/2011/04/29/the-worst-algorithm-in-the-world/
======
mynegation
Very good demonstration of successive improvements to a naive algorithm. To
me that was somewhat undermined by the fact that you can actually calculate
the n-th Fibonacci number using Binet's closed-form formula
([http://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_ex...](http://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_expression)).
You will need arbitrary precision arithmetic starting at a certain 'n'
though, as IEEE 754 will not give you the correct result.
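
For illustration, a minimal arbitrary-precision sketch in Python (the
Decimal approach and names are mine, not from the article; the precision
bound uses the fact that F(n) has about 0.209*n decimal digits):

      from decimal import Decimal, getcontext

      def fib_binet(n):
          # enough digits for F(n), plus some slack
          getcontext().prec = int(0.209 * n) + 10
          sqrt5 = Decimal(5).sqrt()
          phi = (1 + sqrt5) / 2
          # F(n) is the nearest integer to phi**n / sqrt(5)
          return int((phi ** n / sqrt5).to_integral_value())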

~~~
stralep
Actually, you just need the ring Z enriched (if that is the right word) with
sqrt(5). So instead of one float, you use a pair of arbitrary-precision
integers, where (a,b) stands for a + b*sqrt(5). So

(a,b)+(c,d) = (a+c,b+d)

(a,b)/sqrt(5) = (b,a/5)

(a,b)*(c,d) = (a*c+5*b*d, a*d+b*c)

2phi = (1,1)

F(n) = ((2phi)^n - (2-2phi)^n)/(2^n*sqrt(5))

[edit] As ot said, the right word is "extended"
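
A quick sketch of that arithmetic in Python (a naive O(n) loop, names mine;
the point is that everything stays in exact integer pairs, and the same
operations support fast exponentiation, as in the code further down):

      def ring_mul(p, q):
          # (a + b*sqrt(5)) * (c + d*sqrt(5)) in Z extended with sqrt(5)
          a, b = p
          c, d = q
          return (a*c + 5*b*d, a*d + b*c)

      def fib_via_ring(n):
          x = (1, 0)                   # multiplicative identity
          for _ in range(n):
              x = ring_mul(x, (1, 1))  # multiply by 2phi = 1 + sqrt(5)
          b = x[1]                     # x = (1 + sqrt(5))**n = a + b*sqrt(5)
          # (2phi)^n - (2-2phi)^n = 2*b*sqrt(5), so F(n) = 2*b / 2^n
          return (2 * b) >> n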

~~~
btilly
And when you calculate this out, you internally wind up doing the same
calculation as the matrix method.

~~~
vog
How do you know that? Have you actually done the math?

Although I agree that all of this looks very similar, I'm not sure whether
it really leads to exactly the same calculations.

~~~
btilly
Yes, I have actually done the math. On the surface it looks very different,
but a lot of the same numbers show up in intermediate calculations.

~~~
shasta
Well, both have computed the Fibonacci terms at a given level, so how
different could it be? Here's my implementation:

    
    
        def bits(n):
            # the bits of n, most significant first (helper assumed to
            # match the article's)
            return [int(c) for c in bin(n)[2:]]
    
        def fib_fast2(n):
            assert n >= 0
            a, b = 2, 0  # invariant: 2*phi**n == a + b*sqrt(5)
            for bit in bits(n):
                a, b = (a*a + 5*b*b)>>1, a*b           # square
                if bit: a, b = (a + 5*b)>>1, (a+b)>>1  # multiply by 2*phi
            return b
    

Its runtime is almost identical to the one in the article - a hair slower
(15.32s vs. 16.17s to compute fib 10M). They're probably related by some
well-known identity between Fibonacci numbers.

~~~
robinhouston
That’s very nice! And a little rearrangement will bring it from three to two
“big multiplications”, making it faster than my routine:

    
    
      def fib_ring2(n):
        assert n >= 0
        a, b = 2, 0 # invariant: phi^n = (a + b*sqrt(5)) / 2
        for bit in bits(n):
            ab = a*b
            a, b = (a+b)*((a+5*b)//2) - 3*ab, ab
            if bit: a, b = (a + 5*b)//2, (a+b)//2
        return b

~~~
shasta
Oops, I should have found that. I looked and thought there was some reason it
wouldn't work. Interesting post, thanks for the fun.

------
mccutchen
The title is hyperbole (I'm sure that I've written much, much worse
algorithms, many times), but the breakdown of Fibonacci sequence algorithms is
really enjoyable.

~~~
sevenproxies
A typical way to gain views and make it look like you know what you are
talking about (although the author does seem to know his algorithm
development). Hell, comparing it to Bogosort is a stretch: Bogosort is not
even a naive algorithm.

~~~
nandemo
FWIW, in my Programming Languages Theory class the first sorting algorithm we
learned for Prolog was Permutation sort, which is a better version of
Bogosort. Instead of trying a random permutation each time, Permutation sort
will try each permutation once. In Prolog, this is a (the?) naive sort.

The code below consists of declaring that S is a sorted version of L as long
as S is a permutation of L and S is sorted.

    
    
      permutation_sort(L,S) :- permutation(L,S), sorted(S).
    
      sorted([]).
      sorted([_]).
      sorted([X,Y|ZS]) :- X =< Y, sorted([Y|ZS]).
    
      permutation([],[]).
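      % YS is a permutation of [X|XS] iff removing X from YS (via select/3)
      % leaves ZS, a permutation of XS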
      permutation([X|XS],YS) :- permutation(XS,ZS),select(X,YS,ZS).

~~~
sevenproxies
permutation([X|XS],YS) :- permutation(XS,ZS),select(X,YS,ZS).

Where is ZS defined? Bear in mind that I only took half a module's worth (6
modules in a year at my university) of Prolog and we really just used it for
learning predicate logic.

Nevertheless, using Prolog (especially when my code worked!) blew my mind.
In the assignment I had to write a program that mimicked part of an aircraft
controller, in that each aircraft in its flight plan had to be separated
from all other aircraft. Do you think Prolog will ever become popular
(perhaps with the coming Semantic Web), or is it destined for academic work
and system proofs only?

~~~
sesqu
ZS is defined by select.

I don't believe Prolog will become any more popular than it is. The Japanese
had some sort of a government project a few years ago, but I think that fell
through. Now we have Erlang, Haskell and Java libraries that replicate most
functionality, and then some.

------
perlgeek
> It’s not just bad in the way that Bubble sort is a bad sorting algorithm;
> it’s bad in the way that Bogosort is a bad sorting algorithm.

No no no, Bogosort is way worse than the naive recursive Fibonacci - the
former doesn't even guarantee termination, while recursive Fibonacci still
does.

If you want to calculate Fibonacci numbers not as a misguided exercise in
algorithms but actually efficiently, use an algebraic form:
[http://en.wikipedia.org/wiki/Fibonacci_number#Computation_by...](http://en.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding)
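
A minimal sketch of that computation-by-rounding approach in Python (names
mine; with IEEE 754 doubles it typically stops being exact around n = 71,
as noted elsewhere in the thread):

      import math

      def fib_rounding(n):
          # F(n) is the nearest integer to phi**n / sqrt(5)
          sqrt5 = math.sqrt(5)
          phi = (1 + sqrt5) / 2
          return int(round(phi ** n / sqrt5))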

~~~
robinhouston
Bogosort terminates with probability 1. Is it really reasonable to say it
doesn’t guarantee termination? People often do say that about randomised
algorithms, which confuses me a little. Probability 1 is as guaranteed as
anything probabilistic can reasonably hope to be, isn’t it?

[ _Edited to add_ : thanks for the replies. I think I expressed myself poorly
here. It’s not that I don’t understand the difference between “with
probability 1” and “always”. What I mean is that I don’t understand why people
sometimes make a big deal out of it, and say “that algorithm is not even
guaranteed to terminate” as though that somehow means the algorithm is
untrustworthy or useless. The trouble with the game “roll a die until you get
100 sixes in a row” is that it has an astronomical expected running time – not
that it might _never_ end, but that it will almost certainly take a very long
time.]

~~~
vog
_Probability 1 is as guaranteed as anything probabilistic can reasonably hope
to be, isn’t it?_

Unfortunately, it isn't. This is the reason why in mathematics we say that
"probability 1" means that an event is "almost sure", which is different
from a "sure" event.

For instance, you can play a game where you roll a die over and over again.
You win if you get 100 sixes in a row. If you play this game without any
time limit, your probability of winning is exactly 1, which means you can
be "almost sure" you'll win in the end. However, you can't be sure, because
there is still the possibility that you never roll 100 sixes in a row, not
even in an eternity.

Note that this only happens in infinite probability spaces. In finite
spaces, "almost sure" and "sure" are equivalent.

Also note that the same holds for "probability 0", which means "almost
impossible", not to be confused with "impossible".

~~~
nhaehnle
Your example is misleading, because the same can be said about the game
where you roll a die over and over again, and you win if you get a 6.

The probability that the game ends is 1, but it is not guaranteed to end.
Yet no reasonable person will worry about this _in practice_.

Your example with 100 sixes in a row is conceptually exactly the same; it's
just that the expected time until the game terminates is much, much larger.

------
qntm
I always find it incredibly difficult to concentrate on comparisons of
Fibonacci sequence algorithms when I know for a fact that there is a closed-
form expression[1] which gives F(n) in constant time.

[1] [http://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_ex...](http://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_expression)

~~~
Peaker
Raising something to the power of n is not constant time - it takes at
least O(log n) multiplications, on numbers whose size grows with n.

------
_delirium
In a somewhat similar genre, I enjoyed this paper on computing primes, and
some of the very-inefficient algorithms that have been used for such purposes
(often mislabeled as the "sieve of Eratosthenes"):
<http://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf>

------
ubasu
It seems the author hasn't read SICP:

[http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html...](http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html#%_sec_1.2.2)

~~~
TheBoff
I don't think you've read the article, actually.

The key point is that he is trying to calculate arbitrarily large Fibonacci
numbers, where F(n) has O(n) digits, so each addition is itself an O(n)
operation. This means that even after memoization the complexity is still
O(n^2), and he uses a number of tricks to reduce that.

~~~
ubasu
His way around that is to use the matrix recurrence relation, which is also
there in SICP as an exercise.

But it's a nice discussion in any case.

------
TheBoff
Ppffftt, I wrote a recursive algorithm that calculates 2^n in O(2^n) time for
an exam question.

That's what I call a bad algorithm!

------
mwbiz
If you're writing in JavaScript, it's a nice candidate for self-memoizing
functions. Obviously this is only helpful if you're making numerous requests
to the method; a single request still produces a series of recursive calls.
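
The same idea sketched in Python rather than JavaScript (a minimal sketch;
lru_cache plays the role of the self-memoizing wrapper, and deep recursion
is still limited by the interpreter's stack):

      from functools import lru_cache

      @lru_cache(maxsize=None)
      def fib_memo(n):
          # each fib_memo(k) is computed once, so even a single top-level
          # call collapses the exponential call tree to a linear one
          return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)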

~~~
eru
But the single request will still be faster. Try it.

------
michaelcampbell
I always liked the sort where you randomize the elements, check to see if
they're sorted, and if not try again.
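
A minimal Python sketch of that shuffle-until-sorted idea (i.e. Bogosort):

      import random

      def bogosort(xs):
          # shuffle until sorted: expected O(n * n!) time, and
          # termination is only guaranteed with probability 1
          while any(a > b for a, b in zip(xs, xs[1:])):
              random.shuffle(xs)
          return xs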

~~~
cmaggard
Yeah, I recalled the name as random sort, and was pleasantly surprised when
the bogosort link directed to the same algorithm. I particularly enjoyed the
"Quantum Bogosort" algorithm on the Wiki page.

<http://en.wikipedia.org/wiki/Bogosort#Quantum_bogosort>

~~~
arethuza
If you enjoyed that you might like the novel _Quarantine_ by Greg Egan:

[http://en.wikipedia.org/wiki/Quarantine_%28Greg_Egan_novel%2...](http://en.wikipedia.org/wiki/Quarantine_%28Greg_Egan_novel%29)

------
samuel
Dunno. IIRC, every time I have seen the "bad" recursive Fibonacci algorithm,
it has been followed by the "good" recursive one (bottom-up), which is O(1)
in stack space if your language/implementation does tail-call elimination...
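
For illustration, that accumulator shape in Python (names mine; note that
Python itself does not do tail-call elimination, so this still grows the
stack and is only meant to show the shape):

      def fib_tail(n, a=0, b=1):
          # the recursive call is in tail position, so a TCE-capable
          # language can reuse the stack frame
          return a if n == 0 else fib_tail(n - 1, b, a + b)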

------
georgieporgie
This started out taking baby steps, then took a huge leap in complexity right
here:

 _Since the Fibonacci numbers are defined by a linear recurrence, we can
express the recurrence as a matrix, and it’s easy to verify that..._

It's been way too long since my math minor for me to understand that.

~~~
robinhouston
Sorry.

I think it’s only the jargon that’s confusing you. As long as you can remember
(or look up) the definition of matrix multiplication, then it genuinely is
easy to verify.

    
    
      [ fib(n-1) fib(n)   ] x [0 1]
      [ fib(n)   fib(n+1) ]   [1 1]
      
      = [ fib(n)   fib(n-1)+fib(n) ]
        [ fib(n+1) fib(n)+fib(n+1) ]
      
      = [ fib(n)   fib(n+1) ]
        [ fib(n+1) fib(n+2) ]
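
In code, the same idea with repeated squaring looks roughly like this (a
generic square-and-multiply sketch, not the optimised routine from the
post):

      def fib_matrix(n):
          # M = [[0,1],[1,1]]; the top-right entry of M**n is fib(n)
          def mul(X, Y):
              (a, b), (c, d) = X
              (e, f), (g, h) = Y
              return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))
          result = ((1, 0), (0, 1))    # identity matrix
          base = ((0, 1), (1, 1))
          while n:
              if n & 1:
                  result = mul(result, base)
              base = mul(base, base)
              n >>= 1
          return result[0][1]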

------
ignifero
Interesting writeup, thanks. OT: for all its fame, have any of you ever
needed to calculate Fibonacci numbers in real life? I haven't.

~~~
fhars
And neither am I a member of the Fibonacci Association
<http://www.mathstat.dal.ca/fibonacci/> <http://www.fq.math.ca/>

[Edit:] But there are some interesting data structures based on them:
<http://en.wikipedia.org/wiki/Fibonacci_heap>

~~~
arethuza
Thanks - I was struggling to remember where I had heard of a practical
application of the Fibonacci sequence in CS. The Fibonacci Heap was it - I'm
pretty sure it was mentioned on my CS course, and it must have been '87, the
same year it was published!

