
ThresholdTest - joeyespo
http://martinfowler.com/bliki/ThresholdTest.html
======
osivertsson
My experience with setting thresholds for e.g. performance measurements is
that it goes like this:

1. Works fine.

2. Fails once every 25 runs. You're very close to the threshold, but it is
not clear what broke it. Any of the previous N commits is suspect. Or has
there been some change in the environment (overall network traffic, security
updates, etc.) that could have performance implications?

3. Now you must take immediate action to figure out what degraded
performance. Too often, no one thinks their changes could possibly have
hurt performance. And if environment changes occasionally cause the
ThresholdTest to fail, it is very easy to blame the environment too often
instead of spending the time necessary to really understand what changed.

4. The dev team doesn't have the time right now => bump the ThresholdTest in
the wrong direction "temporarily", which all too often turns into permanently.

Because of this, I think it is very important to keep ThresholdTests highly
repeatable and not let them depend on anything that could fluctuate over time.
And you should of course log every test result to a database and graph it;
there are interesting findings to be made in both directions!
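A minimal sketch of that last suggestion, assuming Python and SQLite (all names here are hypothetical): every run records its measurement before applying the threshold, so you have a history to graph and to consult when the test starts flirting with the limit.

```python
import sqlite3
import time

# Hypothetical time budget for the operation under test.
THRESHOLD_SECONDS = 0.5


def record_and_check(test_name, elapsed, db_path="perf_results.db"):
    """Log the measurement first, so the trend survives even when the
    threshold assertion fails, then apply the threshold."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS results "
        "(test TEXT, elapsed REAL, recorded_at REAL)"
    )
    conn.execute(
        "INSERT INTO results VALUES (?, ?, ?)",
        (test_name, elapsed, time.time()),
    )
    conn.commit()
    conn.close()
    assert elapsed <= THRESHOLD_SECONDS, (
        f"{test_name}: {elapsed:.3f}s exceeded {THRESHOLD_SECONDS}s budget"
    )


def workload():
    # Stand-in for the real operation under test.
    sorted(range(100_000), reverse=True)


def test_workload_performance():
    start = time.perf_counter()
    workload()
    record_and_check("workload_performance", time.perf_counter() - start)
```

Measuring a deterministic proxy (e.g. query counts or comparisons) instead of wall-clock time, where possible, is one way to get the repeatability argued for above.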

