

Why you should never, ever, ever use fibonacci for benchmarking - dons
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.4124

======
jrockway
But fib is a great benchmark for blog posts. You can write a naive
implementation in your least-favorite language, write a simple iterative
implementation in your favorite language, and your favorite language will
always win! And that's the most important part of benchmarking: who wins!

Edit: did anyone replying to this post (other than dons) actually read the
linked paper?

~~~
dons
Indeed, no other benchmark:

* is so accessible to the naive -- anyone can understand fib!

* has so many implementations -- each in a different complexity class!

* is so perfect for language pissing contests on the intertubes.

Endless fun!

A much better idea is to contribute programs to the Great Shootout,
[http://shootout.alioth.debian.org/u64/benchmark.php?test=all...](http://shootout.alioth.debian.org/u64/benchmark.php?test=all&lang=all)

~~~
swannodette
fib _does_ present an interesting problem. Any language worth its salt
should be able to express the problem in such a way that it can be computed

    
    
      1) quickly, 
      2) iteratively (via tail recursion), and
      3) succinctly.
    

Not _that_ many languages can do that.
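A minimal sketch of the accumulator-passing form this seems to describe
(Python here purely for illustration -- CPython does not eliminate tail
calls, but Scheme-family languages are required to):

```python
def fib(n, a=0, b=1):
    # Accumulator-passing style: the recursive call is in tail position,
    # so a language with tail-call optimization runs this in constant
    # stack space and linear time.
    return a if n == 0 else fib(n - 1, b, a + b)
```

e.g. `fib(10)` returns 55.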

~~~
jrockway
Tail recursion for two recursive steps per invocation? I don't think so.

(Hint: the "naive" implementation of fib that I refer to above is "fib (x + 1)
= fib x + fib (x - 1)".)
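For concreteness, the two implementations being contrasted might look like
this (Python used as neutral pseudocode; neither snippet is from the thread):

```python
def fib_naive(n):
    # Two recursive calls per invocation: exponential time.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_iter(n):
    # One addition per step: linear time.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

The two are in different complexity classes, so a "benchmark" pitting them
against each other compares algorithms, not languages.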

~~~
swannodette
What are you saying exactly? That fib shouldn't be defined in terms of tail
recursion? Of course it doesn't have to be. But it's also the moment that you
realize that for loops are entirely unnecessary and broken since they aren't
composable the way functions are.

~~~
jrockway
Iteration and tail recursion are orthogonal. Yes, you can implement iteration
in terms of function application. But no, the results of benchmarking "fib"
won't make your favorite version win.

------
mahmud
I came to HN because CiteSeer is down and I couldn't get work done. Then I see
CiteSeer on the front page of HN: my mistress killed my wife :-(

~~~
ableal
Watch it, buddy. Your mistress _is_ your wife (in VR gear), and she's out to
nail your two-timing butt.

(Short-story plot by Jeffrey Archer, in an old paperback I absentmindedly read
a while ago.)

------
wingo
I have read the paper before, and it's certainly right -- but Gabriel never
claimed that the fib benchmark was anything other than a microbenchmark of
certain aspects of a system.

For example, when discussing the TAK microbenchmark, he says:

> When called on the above arguments, TAK makes 63,609 function calls and
> performs 47,706 subtractions by 1. The result of the function is 7. The
> depth of recursion is never greater than 18. No garbage collection takes
> place in the MacLisp version of this function because small fixnums are
> used.

(See <http://www.dreamsongs.com/Files/Timrep.pdf>)

With such a precise description of what the benchmark tests, it's obvious (a)
that it's a microbenchmark, but also (b) that it does indicate the speed of
certain combinations of operations.

So I don't think the title of this article is actually the takeaway message
from that paper. Microbenchmarks like the Gabriel benchmarks have stuck
around for a reason.
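The function Gabriel is describing, transcribed to Python (the figures
quoted above -- 63,609 calls, result 7, recursion depth no greater than
18 -- are for the arguments (18, 12, 6)):

```python
def tak(x, y, z):
    # Gabriel's TAK microbenchmark: stresses function-call overhead and
    # small-integer arithmetic; it allocates nothing, so no GC pressure.
    if y < x:
        return tak(tak(x - 1, y, z),
                   tak(y - 1, z, x),
                   tak(z - 1, x, y))
    return z
```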

------
klodolph
I just made a similar complaint over at Reddit. Maybe interesting, maybe not.
[http://www.reddit.com/r/programming/comments/b5v64/tired_of_...](http://www.reddit.com/r/programming/comments/b5v64/tired_of_people_comparing_languages_by_doing/)

------
InclinedPlane
The real reason it's such a poor example is that it's contrived. If you
honestly wanted to compute arbitrary elements of the Fibonacci sequence you
would use a closed-form expression that runs in O(1) (wiki Fibonacci
sequence for examples).
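The closed form in question is presumably Binet's formula; a sketch using
double-precision floats (exact only while fib(n) fits in a double's 53-bit
mantissa, roughly n <= 70):

```python
import math

def fib_closed(n):
    # Binet's formula: fib(n) = (phi**n - psi**n) / sqrt(5).
    # |psi**n| < 0.5 for all n >= 0, so rounding recovers the exact value
    # while floating-point precision holds out.
    phi = (1 + math.sqrt(5)) / 2
    return round(phi ** n / math.sqrt(5))
```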

Similarly, factorial is often a poor example problem because it's almost
never necessary to tune a factorial routine for real-world inputs. 13! is
larger than an unsigned 32-bit int can hold, and 21! is larger than an
unsigned 64-bit int can hold. The performance gained by shaving down a loop
or recursive call that runs at most 20 times is inconsequential in the vast
majority of cases. Indeed, a simple recursive or iterative method is
probably more efficient than even a lookup table in most cases.

As exercises for seeing how people think through iteration vs. recursion,
efficiency, and performance, these are fine problems; but as real-world
benchmarks of any coding style they are terribly artificial.

~~~
klodolph
The closed-form solution isn't O(1). Since fib(N) is Θ(φ^N), the output has
Θ(N) digits, so any exact computation is Ω(N).
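A quick sanity check of the digit growth (Python sketch; the exact
big-integer fib here is just for illustration):

```python
def fib(n):
    # Exact big-integer Fibonacci; linear number of additions.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The number of decimal digits of fib(n) grows linearly in n,
# because fib(n) ~ phi**n / sqrt(5).
for n in (100, 200, 400):
    print(n, len(str(fib(n))))
```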

~~~
InclinedPlane
On the most prevalent electronic computers of our time, the closed-form
solution is effectively constant-time if the output is bounded to fit
within a 32- or 64-bit integer.

