
100 Counterintuitive Things About Testing by the Author of Falsehoods About Time - Textarcana
http://infiniteundo.com/post/158170334513/108-counterintiuitive-things-qa-people-know-about
======
Textarcana
I am the author of this post! If you notice inaccuracies or have thoughts,
please comment here. I will be reading and responding to all comments as
always.

------
nerdy
> #2: Tight coupling is good.

Only between a test and the specific code for the behavior under test. It's
not good between test suites, abstraction layers, APIs, etc.

> #3: It’s good when something is known to be broken.

When compared to it being broken unknowingly, but not really otherwise.

> #15: The same bug almost never “reoccurs” so most “regression testing” is
> pure waste.

> #48: Writing tests for bugs that are fixed does NOT correlate with catching
> new bugs when they arise.

What about a situation where you have untested behavior (perhaps incomplete
test coverage to begin with), you find a problem in that code, and you write a
test to prevent regression? When you later refactor the code under test,
there's a real possibility of a "regression" of sorts, especially if the
refactoring is comprehensive.
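
To make that concrete, here's a minimal sketch (the function, the bug, and the
names are all hypothetical) of pinning a fixed bug so a later refactor can't
silently reintroduce it:

```python
# Hypothetical: parse_price("") used to raise ValueError; the fix
# treats empty input as 0.0, and this test pins that behavior so a
# comprehensive refactor of parse_price can't quietly undo it.
def parse_price(text: str) -> float:
    return float(text) if text.strip() else 0.0

def test_empty_price_stays_fixed():
    assert parse_price("") == 0.0
```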

> #20: “Black box” is a political designation and has no technical meaning.

A black box is a system where you can observe inputs and outputs but not the
system performing the transformation.
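
In code terms (a toy sketch): a black-box test drives only the public
interface and asserts on outputs, never peeking at internal state:

```python
def test_black_box():
    # We observe input -> output only; how sorted() works internally
    # is deliberately out of scope for the test.
    assert sorted([3, 1, 2]) == [1, 2, 3]
```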

> #31: On a Web site with 1 million visits per week, a one-in-a-million event
> can occur once a week.

Quoted for thought-fodder, this one is awesome.
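
Worth spelling out the arithmetic (back-of-the-envelope, assuming independent
visits): the expected number of hits per week is exactly 1, and the chance of
at least one is about 63%:

```python
n, p = 1_000_000, 1e-6
expected = n * p                  # 1.0 expected occurrences per week
at_least_one = 1 - (1 - p) ** n   # ~0.632, i.e. 1 - 1/e
print(expected, at_least_one)
```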

> #51: Granular and/or comprehensive documentation of a nontrivial Web site is
> demonstrably impossible.

It certainly feels this way, but what's the solution? Have no docs, write docs
knowing they won't be comprehensive, or expect to chase an impossible goal?

> #66: If no one ever finds out about the bug then the bug never existed in
> the first place.

_wink_ _wink_ ... this simply isn't true!

> #83: Bugs cluster at the interfaces, bugs are not distributed randomly.

Isn't this only true if there's more unit-level testing than
integration/functional testing?

> #84: There’s more time grains in 1 year of compute time than there are
> seconds in the age of the universe.

1 year of compute time for what? All of the computers in the world? A
datacenter? A computer with a 1 μs grain size has roughly 3.15E+13 grains per
year, but there are over 4E+17 seconds in the age of the universe.
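
A quick sanity check of those figures (assuming a 1 μs grain):

```python
SECONDS_PER_YEAR = 3.15e7                     # ~365.25 * 24 * 3600
grains_per_year = SECONDS_PER_YEAR / 1e-6     # ~3.15e13 (1 μs grains)
age_of_universe = 13.8e9 * SECONDS_PER_YEAR   # ~4.35e17 seconds
print(grains_per_year < age_of_universe)      # True
# Even at 1 ns grains (~3.15e16/year) or one 3 GHz clock cycle
# (~9.5e16/year), a single machine-year still falls short of ~4.35e17.
```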

> #93: More testing doesn’t correlate with better quality, less testing
> doesn’t correlate with worse quality.

I don't have any specific data but my experience has been that testing (manual
or automated) does correlate with quality. Testing might be the developer
pressing F5 a lot, writing automated tests, or a stakeholder manually end-user
testing. Think about the state of software which was developed but never run.
Software rarely executes flawlessly first-run, and every time a developer runs
it, it's testing.

~~~
Textarcana
Wow. Thanks for the detailed response. I will try to answer point by point.

#2 That tight coupling can ever be good comes as a shock to the learner.

#3 Any breakage is an opportunity to learn from and about failure. Also, when
it's broken and you know it, that is better than when it's broken and you
don't.

Regarding regression testing: writing tests is a _design_ activity so if it is
"driven" by whatever happened to break last, that sounds like treating the
symptom not the underlying cause. That is why I have never been a fan of "bug-
driven development" nor of large-scale regression testing.

#20 "who draws the lines" is an incredibly important political question. So
who makes the declaration that the so-called black box can not or is not to be
opened? This was my point. I understand what a black box means in software
testing parlance.

#31 check out this great article then:
[https://blogs.msdn.microsoft.com/larryosterman/2004/03/30/one-in-a-million-is-next-tuesday/](https://blogs.msdn.microsoft.com/larryosterman/2004/03/30/one-in-a-million-is-next-tuesday/)

#51 Testing is runnable documentation, and sufficiently advanced monitoring
(particularly ubiquitous StatsD usage) is indistinguishable in practice from
testing; see
[https://www.youtube.com/watch?v=uSo8i1N18oc](https://www.youtube.com/watch?v=uSo8i1N18oc)
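
For anyone unfamiliar, the StatsD wire format is just a UDP datagram like
`name:value|c`. A minimal sketch (host, port, and metric name are
hypothetical) of the kind of ubiquitous counter I mean:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def incr(metric: str, value: int = 1) -> None:
    # StatsD counter format: "name:value|c", fire-and-forget over UDP.
    sock.sendto(f"{metric}:{value}|c".encode(), ("localhost", 8125))

incr("checkout.success")  # an alert on this counter flatlining is,
                          # in effect, an assertion that runs in prod
```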

#66 I truly do not understand the objection you are making here. If an event
goes unobserved and is without impact it is the same as if the event never
occurred.

#83 No: bugs always cluster at the interfaces between components.

#84 Uh oh. I need to check my math and will respond further in another comment
once I do so. Thanks!

#93 It's not that testing doesn't help. It's that if you have X amount of
testing and you add Y amount of additional testing, THAT is not correlated
with better quality. Likewise if for Reasons you do less testing in the future
than you do right now, that is not guaranteed to degrade your quality.

~~~
nerdy
> #66 I truly do not understand the objection you are making here. If an event
> goes unobserved and is without impact it is the same as if the event never
> occurred.

#66 says: "If no one ever finds out about the bug then the bug never existed
in the first place."

While the outcome is the same, it doesn't _literally_ mean the bug never
existed. The existence of a bug is orthogonal to its discovery. Its discovery
does not bring about its existence.

Do you have any data for #93? I'd expect a power-law distribution.

~~~
Textarcana
> discovery does not bring about its existence.

I would argue that it does but this is a matter for phenomenologists. The
_practical_ result is that it's _as if_ the bug never existed. Beyond that
let's agree to disagree.

#93 No hard data. How would you even begin to measure such a thing? No two
software shops are the same; hell, no two projects within the same _team_ are
anywhere near similar. How do you baseline? What about the impossibility of a
control group for a software team?

I find it interesting that a power-law distribution would produce exactly the
kind of behavior I am describing: relatively small impact for even fairly
large variations in the number of tests applied to a project.
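
To illustrate (a made-up model, not data): if detection grew as, say,
tests^0.3, then quadrupling a suite would barely move the needle:

```python
# Purely illustrative power-law assumption: defects found ~ tests ** alpha
alpha = 0.3
for n_tests in (500, 1000, 2000):
    print(n_tests, round(n_tests ** alpha, 1))
# 500 -> 6.5, 1000 -> 7.9, 2000 -> 9.8: a 4x testing effort yields only
# ~50% more detection, i.e. small quality impact for large test variation.
```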

~~~
nerdy
If a customer discovers a bug, who created it?

~~~
Textarcana
"who created it" implies a causal chain of events whereby someone is
responsible for the bug. This is Safety-I thinking. John Allspaw addresses
this when he states that "there is no root cause" for an incident or bug. So
in a very practical sense YES the bug is "born" at the moment the user
"discovers" it. Note that "discovery" here is in the imperial sense where the
the user has drawn a line on the software map (so to speak) an labeled it:
beyond this here be bugs. Devops very directly implies skepticism for
causality as a primary/default phenomenology for understanding bugs.

