
Compilers Love Messing With Benchmarks - helper
http://www.brendangregg.com/blog/2014-05-02/compilers-love-messing-with-benchmarks.html
======
habosa
Measurement bias can have a huge effect, even when you think you are doing
'good' benchmarking.

See the paper 'How To Produce Wrong Data Without Doing Anything Obviously
Wrong!' [1]. They just change the size of environment variables and the link
order of files, and get wildly different performance results. Most people just
assume if they close all other running programs they'll have a fair benchmark,
but there is so much more to consider.

[1] - http://www-plan.cs.colorado.edu/diwan/asplos09.pdf

------
pdknsk
The only conclusion here is not to use old and/or synthetic benchmarks.

~~~
helper
Nope. The conclusion is that it is really easy to benchmark the wrong thing.
If you are trying to evaluate AWS vs DO vs Google Compute Engine vs Joyent,
you'd better make sure you are actually comparing apples to apples. Your
comparison will probably be invalid if you are using a different kernel
version, compiler version, or possibly even different shared libraries.
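One way to keep that comparison honest is to record a fingerprint of each host before benchmarking. A rough sketch (the field names and the use of `cc` as the compiler are assumptions, not anyone's standard):

```python
import platform
import subprocess

def host_fingerprint():
    """Collect the versions that should match across hosts being compared."""
    info = {
        "kernel": platform.release(),    # kernel version
        "machine": platform.machine(),   # CPU architecture
    }
    try:
        # First line of `cc --version` usually identifies the compiler.
        out = subprocess.run(["cc", "--version"],
                             capture_output=True, text=True)
        info["compiler"] = out.stdout.splitlines()[0] if out.stdout else "unknown"
    except FileNotFoundError:
        info["compiler"] = "unknown"
    return info

for key, value in host_fingerprint().items():
    print(f"{key}: {value}")
```

If the fingerprints differ between two cloud instances, you are benchmarking the toolchain as much as the hardware.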

