
Why do people think that computing Fibonacci numbers recursively is a good benchmark of anything?

Doesn't it require O(n^2) function calls for something that can be computed in O(1)?




Actually being fast is not the point; naive fib is used because it's a very well-known algorithm that's easy to implement consistently across multiple languages and that heavily exercises both function calls and numeric addition.

In other words, as much as a 10-line program can be, it's a reasonable proxy for a language's performance on algorithmic code.
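
For concreteness, the benchmark version is usually something along these lines (this particular Julia spelling is mine, not necessarily the exact code any given benchmark suite uses):

  fib(n) = n < 2 ? n : fib(n-1) + fib(n-2)
Every non-base call spawns two more calls and does one addition, so nearly all of the measured time is call overhead plus integer arithmetic.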


Also, it's a reasonable low bar to begin with for a college-educated programmer (not that all programmers need to be college-educated), because the Fibonacci sequence in mathematics (and in everyday language) is commonly defined in its recursive form, so it's likely that people will be familiar with it.


Note to self: If I ever implement a compiler, I'll include a special Fibonacci detection pass in the optimizer.


That kind of thing does happen when the stakes get high enough. There are a lot of rumors (and some evidence) of special-cased "cheating" on commonly used benchmarks, including the 3dMark GPU benchmarks, SPEC CPU benchmarks, even SunSpider JS benchmarks.


Both Nvidia and ATI have been caught cheating at graphics benchmarks. Their drivers would detect when the benchmark programs were being run, and go through lower quality fast paths. You can google "quack.exe" for details on one particularly famous case.


Actually, it takes real effort to make GCC not completely optimize-away this kind of stuff for C:

https://github.com/JuliaLang/julia/issues/1325


There's a simple solution to that: use Ackermann instead.

It measures the same thing and is a function of its own time complexity to boot.
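
For anyone who hasn't seen it, here's a minimal Julia sketch of the two-argument Ackermann function (the name ack is my own):

  function ack(m, n)
    m == 0 && return n + 1             # A(0, n) = n + 1
    n == 0 && return ack(m - 1, 1)     # A(m, 0) = A(m-1, 1)
    return ack(m - 1, ack(m, n - 1))   # A(m, n) = A(m-1, A(m, n-1))
  end
For example, ack(3, 3) == 61, and the call count grows so explosively that you can't push m much past 3.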


It tests how well the compiler deals with function calls, and that's it. Useless for measuring a language's overall speed (as are all micro-benchmarks).


If the compiler deals with arithmetic in a very inefficient way, that will also show up.


Fibonacci is O(log n) or so in the best case, not quite O(1). Of course you can do a simple table lookup if you're limiting yourself to 32 or 64-bit numbers, but you can go far beyond that.
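
For reference, here's a sketch of that O(log n) approach (fast doubling) in Julia; the names fastfib/fibpair and the use of BigInt are my choices. It does O(log n) arithmetic operations, though on ever-larger integers, so it is not O(log n) bit operations:

  # Fast doubling: F(2m) = F(m)*(2F(m+1) - F(m)),  F(2m+1) = F(m)^2 + F(m+1)^2
  function fastfib(n)
    function fibpair(k)                 # returns (F(k), F(k+1)) as BigInts
      k == 0 && return (big(0), big(1))
      a, b = fibpair(k >> 1)            # (F(m), F(m+1)) for m = k >> 1
      c = a * (2b - a)                  # F(2m)
      d = a^2 + b^2                     # F(2m+1)
      return isodd(k) ? (d, c + d) : (c, d)
    end
    return fibpair(n)[1]
  end

  julia> fastfib(128)
  251728825683549488150424261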


There's a formula from analysis that gives you the Nth Fibonacci number in closed form. It involves a couple of exponentials, which you can compute to a given accuracy in bounded time.
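
(Specifically, that's Binet's formula: F(n) = (φ^n - ψ^n)/√5, where φ = (1+√5)/2 and ψ = (1-√5)/2 = 1 - φ.)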


You obviously haven't tried doing this or you wouldn't suggest it as a way to actually compute Fibonacci numbers. Here's a definition in Julia (hey, why not) of a closed-form Fibonacci function:

  cfib(n) = ((1+sqrt(5))^n - (1-sqrt(5))^n)/(2^n*sqrt(5))
Let's see how it works:

  julia> cfib(1)
  1.0

  julia> cfib(2)
  1.0

  julia> cfib(3)
  2.0
Great! But let's look at a few more:

  julia> cfib(4)
  3.0000000000000004

  julia> cfib(5)
  5.000000000000001
Oops. Those aren't integers. I guess we could round, but at some point we might be rounding to the wrong integer. In fact, once you get up to 63 this happens, and worse still, at 64 you get an infinite answer (the 2^n in the denominator is integer arithmetic, so it overflows Int64: it wraps negative at 63 and to zero at 64).

Here's a simple, reasonably efficient iterative Fibonacci function in Julia:

  function ifib(n)
    j, k = 1, 1
    for l = 3:n
      j, k = k, j+k
    end
    return k
  end
We know this one works up to the limits of Int64 range:

  julia> for n = 1:10
           println(ifib(n))
         end
  1
  1
  2
  3
  5
  8
  13
  21
  34
  55
Using a doubly recursive algorithm for Fibonacci is indeed silly, but it's a good toy test of recursion performance.


You're right, I hadn't, but it sounds to me like it worked pretty well in your tests. It works better in Python than in Julia, apparently:

    >>> from math import sqrt
    >>> [int(round(((1+sqrt(5))**n - (1-sqrt(5))**n)/(2**n*sqrt(5)))) for n in range(128)]
    [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 63245986, 102334155, 165580141, 267914296, 433494437, 701408733, 1134903170, 1836311903, 2971215073L, 4807526976L, 7778742049L, 12586269025L, 20365011074L, 32951280099L, 53316291173L, 86267571272L, 139583862445L, 225851433717L, 365435296162L, 591286729879L, 956722026041L, 1548008755920L, 2504730781961L, 4052739537881L, 6557470319842L, 10610209857723L, 17167680177565L, 27777890035288L, 44945570212853L, 72723460248141L, 117669030460994L, 190392490709135L, 308061521170130L, 498454011879265L, 806515533049395L, 1304969544928660L, 2111485077978055L, 3416454622906716L, 5527939700884771L, 8944394323791488L, 14472334024676260L, 23416728348467744L, 37889062373144008L, 61305790721611752L, 99194853094755776L, 160500643816367552L, 259695496911123328L, 420196140727490880L, 679891637638614272L, 1100087778366105088L, 1779979416004719360L, 2880067194370824704L, 4660046610375544832L, 7540113804746369024L, 12200160415121913856L, 19740274219868282880L, 31940434634990198784L, 51680708854858489856L, 83621143489848688640L, 135301852344707186688L, 218922995834555891712L, 354224848179263111168L, 573147844013818970112L, 927372692193082081280L, 1500520536206901248000L, 2427893228399983329280L, 3928413764606884839424L, 6356306993006868692992L, 10284720757613753532416L, 16641027750620622225408L, 26925748508234379952128L, 43566776258855008468992L, 70492524767089384226816L, 114059301025944392695808L, 184551825793033785311232L, 298611126818978194784256L, 483162952612011980095488L, 781774079430990309097472L, 1264937032043002356301824L, 2046711111473992665399296L, 3311648143516995021701120L, 5358359254990987687100416L, 8670007398507983245672448L, 14028366653498970932772864L, 22698374052006954178445312L, 36726740705505935848636416L, 59425114757512894322049024L, 96151855463018834465652736L, 155576970220531703017897984L]
But although it gives close answers up to cfib(604) (after which it throws an exception), it still gives slightly inexact answers starting at 308061521170129 (cfib(71)), which is between 2⁴⁸ and 2⁴⁹. I'd expect it to make it a few items further before going astray, since it's presumably doing floating-point calculations with 53 bits of mantissa that are supposed to be rounded to within half an ulp.

I'm comparing with

    >>> fibs = range(2)
    >>> while len(fibs) < 128: fibs.append(fibs[-2] + fibs[-1])
    ...
    >>> fibs
    [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 63245986, 102334155, 165580141, 267914296, 433494437, 701408733, 1134903170, 1836311903, 2971215073L, 4807526976L, 7778742049L, 12586269025L, 20365011074L, 32951280099L, 53316291173L, 86267571272L, 139583862445L, 225851433717L, 365435296162L, 591286729879L, 956722026041L, 1548008755920L, 2504730781961L, 4052739537881L, 6557470319842L, 10610209857723L, 17167680177565L, 27777890035288L, 44945570212853L, 72723460248141L, 117669030460994L, 190392490709135L, 308061521170129L, 498454011879264L, 806515533049393L, 1304969544928657L, 2111485077978050L, 3416454622906707L, 5527939700884757L, 8944394323791464L, 14472334024676221L, 23416728348467685L, 37889062373143906L, 61305790721611591L, 99194853094755497L, 160500643816367088L, 259695496911122585L, 420196140727489673L, 679891637638612258L, 1100087778366101931L, 1779979416004714189L, 2880067194370816120L, 4660046610375530309L, 7540113804746346429L, 12200160415121876738L, 19740274219868223167L, 31940434634990099905L, 51680708854858323072L, 83621143489848422977L, 135301852344706746049L, 218922995834555169026L, 354224848179261915075L, 573147844013817084101L, 927372692193078999176L, 1500520536206896083277L, 2427893228399975082453L, 3928413764606871165730L, 6356306993006846248183L, 10284720757613717413913L, 16641027750620563662096L, 26925748508234281076009L, 43566776258854844738105L, 70492524767089125814114L, 114059301025943970552219L, 184551825793033096366333L, 298611126818977066918552L, 483162952612010163284885L, 781774079430987230203437L, 1264937032042997393488322L, 2046711111473984623691759L, 3311648143516982017180081L, 5358359254990966640871840L, 8670007398507948658051921L, 14028366653498915298923761L, 22698374052006863956975682L, 36726740705505779255899443L, 59425114757512643212875125L, 96151855463018422468774568L, 155576970220531065681649693L]

For exact answers, you clearly can't do better than O(N) (where N is the magnitude of the input parameter, not its length in bits) because the output is of size O(N). But for approximate answers, you can.
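
(Concretely, F(N) ≈ φ^N/√5 with φ ≈ 1.618, so the exact answer is about N*log2(φ) ≈ 0.69*N bits long; just writing it out already takes linear time.)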

I agree about the uses for doubly recursive Fibonacci!


Yes. You can get the same approximate closed-form Fibonacci behavior in Julia with this definition:

  cfib(n) = ((1+sqrt(5))^n - (1-sqrt(5))^n)/(2.0^n*sqrt(5))

  julia> cfib(128)
  2.5172882568355055e26
Unfortunately, this approximate value is off by 1066512995643 (~1 trillion), so it's hard to see how it would be useful. Actually, it's hard to think of good uses for computing Fibonacci numbers in general :-)
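
(For scale, that's a relative error of roughly 4e-15, i.e. a few dozen ulps of accumulated Float64 rounding error: perfectly respectable as an approximation, just useless if what you wanted was the exact integer.)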


Yes, but if you want to compute exact results, then you're stuck with log n time.


Can't you do it in O(1) if you limit yourself to floats? Arbitrary n but not arbitrary precision?


I imagine the table would still be of reasonable size then. Of course, this is still technically restricting the output to 32 or 64-bit numbers.


No need for a table lookup. There is a closed-form formula. Of course this is only as accurate as your float precision. You can get out pretty far, though, with some clever rounding.

http://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_ex...
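
A sketch of that rounding trick in Julia (rfib is my name for it): the ψ^n/√5 term of the closed form is always smaller than 1/2 in magnitude, so you can drop it and round the remaining term to the nearest integer. This stays correct only while the floating-point error is itself under half, which with Float64 gives out somewhere around n in the 70s; BigFloat pushes it further.

  # F(n) = round(φ^n / √5): exact in exact arithmetic, since |ψ^n| / √5 < 1/2
  rfib(n) = round(Int64, ((1 + sqrt(5)) / 2)^n / sqrt(5))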


It actually requires O(fib(n)) function calls, or equivalently about O(1.62^n) function calls.
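
(The call count C(n) satisfies C(n) = C(n-1) + C(n-2) + 1, the fib recurrence plus one, and works out to exactly 2*fib(n) - 1 when fib(1) and fib(2) are the base cases; either way it grows by a factor of about 1.618 per increment of n.)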


Fibonacci numbers cannot be computed in O(1), only estimated. All closed-form algorithms start giving imprecise answers after some value of n.



