
The Code Coverage Paradox - yawz
https://blog.decaresystems.ie/2016/09/22/the-code-coverage-paradox/
======
GrumpyYoungMan
Good article and it's saying something that I've also been saying for years.
tl;dr: A code coverage measurement only tells you where you did -not- check
for bugs (code that was never exercised can't possibly have been checked);
it does -not- tell you whether the code that was covered (i.e. exercised)
was tested to any meaningful degree. It's perfectly possible to achieve a
high coverage number without asserting on any results at all.
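To make that last point concrete, here is a minimal sketch (function and
test names are hypothetical) of a "test" that exercises every line of a
buggy function, so a coverage tool would report 100%, yet it never inspects
a single return value:

```python
def absolute_value(x):
    if x < 0:
        return x      # BUG: should be -x, but coverage can't see that
    return x

def test_absolute_value():
    # Both branches execute, so line and branch coverage hit 100%,
    # but nothing checks the results -- the bug passes unnoticed.
    absolute_value(-5)
    absolute_value(5)

test_absolute_value()
```

The test passes, the coverage report looks perfect, and the defect ships.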

I do disagree with one statement in the article: " _In fact I would argue,
it’s hard to imagine a code coverage value of over 70% to be of any value
whatsoever._ ". Having seen it done for a mission-critical system: if one
uses extremely strict TDD (i.e. no line of code is written without a
corresponding unit test that verifies its effects), then a high coverage
number does provide useful, direct guidance, because the small uncovered
remainder points at exactly the areas of the code that haven't been fully
scrutinized. In our case the effort paid off: no significant bugs were
found after release.

