

Taking Automated Tests Off The Pedestal - lucisferre
https://plus.google.com/u/0/104920553571646483561/posts/fmyZi1MxMgo

======
TimGebhardt
For those not aware why this guy's rant might be important, he's the author of
"Working Effectively with Legacy Code". That book is pretty much a how-to (and
a great one at that) on getting your legacy code base into a testable state so
you can write automated tests.

------
jerf
Are there really _that_ many teams who have just too darned many otherwise-
perfect automated tests? I freely concede it's theoretically possible (any
non-trivial cost/benefit analysis that assigns infinity or zero to one side is
always wrong), but I'd be more concerned about people using this sort of thing
as an internal excuse to ignore tests than hopeful it will actually save
someone.

~~~
lucisferre
As I read it, it wasn't that there were simply _too_ many but that they were
not _otherwise-perfect_. The implication is that we probably carry a lot of
automated-testing cruft but keep it because it's a passing test. This is
something I've seen frequently. Even when I strive to keep tests around as
long as I can, I end up deleting them sooner or later because they are just
too brittle and hard to work with.

Tests themselves are a form of code coupling so they almost always come with a
cost.
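The coupling point can be made concrete. Here's a minimal sketch (the `Cart` class and test names are my own illustration, not anything from the thread): one test couples itself to the class's internal representation, the other only to its public behavior. The first breaks under any refactor of the storage, even when the behavior is unchanged.

```python
# Illustrative example of test-to-code coupling. The Cart class and
# both tests are hypothetical, invented for this sketch.

class Cart:
    def __init__(self):
        self._items = []  # internal storage: a list of (name, price) tuples

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


def test_total_brittle():
    # Brittle: reaches into the private list, so renaming _items or
    # switching to a dict breaks this test with no behavior change.
    cart = Cart()
    cart.add("book", 10)
    assert cart._items == [("book", 10)]


def test_total_behavioral():
    # Sturdier: exercises only the public contract.
    cart = Cart()
    cart.add("book", 10)
    cart.add("pen", 2)
    assert cart.total() == 12
```

The coupling cost never goes to zero, since even the behavioral test pins down the public API, but it is far lower than in the brittle version.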

I do agree with you (and said as much in my comment on G+) that there is some
danger of this discussion becoming an excuse for laziness and ignorance, but
that in itself isn't a good reason not to have the discussion.

~~~
jerf
I added the "otherwise perfect" clause because of his third paragraph, which
to my mind basically meant that for the rest of his post he wasn't talking
about tests that could be fixed to run faster (with reasonable effort), but
the remaining tests that are left even after you've polished the setup to a
reasonable degree.

Certainly I've broken tests with new code, examined broken tests, and simply
nuked the broken tests before. Usually it's because they're either better
covered by some other test written since then, or covering use cases I thought
I'd have that never emerged. Sometimes I fix them to be less brittle over
time.

It is a good discussion in theory, I just question whether there's that many
teams for which this really applies.

I may be suffering from excessive reliance on my own personal experience. As I
said yesterday in another comment, I tend not to worry about whether something
is "unit", "integration", or "acceptance" testing, and most of my testing is
actually at least one abstraction level above what would properly be
considered "unit testing". I personally don't have much difficulty with test
stability as a result. People who write lower-level unit tests may encounter
this far more often.

------
InclinedPlane
Automated testing is neither good nor evil; it is a tool like many others,
with certain advantages and certain shortcomings. Unfortunately, we've been
living in an era where, frankly, uneducated heathens dominated the industry
with their belief that automated tests were not worth the effort.

Finally we've started to move beyond that. But unfortunately we've more or
less lurched into a general mindset of unfettered adulation for testing. On
balance that's probably better than the unwashed heathenism of earlier, but
it's still a dangerous situation to be in. Automated tests have their own
pitfalls. Test code is often lower quality than regular code, for example,
and requires special care to keep it up to par. It's also all too easy to
write passing tests which hinder rather than enhance your ability to modify
the code base (which, by the way, is the primary reason to have automated
tests in the first place).
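A common form of such a passing-but-obstructive test is one that pins the exact sequence of internal collaborator calls. A minimal sketch (the `send_welcome` function and its mailer are hypothetical, invented for this example; only `unittest.mock` from the standard library is assumed):

```python
# Illustrative over-specified test: it passes today, but it asserts on
# *how* send_welcome works internally, so almost any refactor (inlining
# render, batching deliveries) fails it even when the observable result
# is identical.
from unittest.mock import MagicMock


def send_welcome(user, mailer):
    # Hypothetical function under test.
    mailer.render("welcome", user)
    mailer.deliver(user["email"])


def test_send_welcome_overspecified():
    mailer = MagicMock()
    send_welcome({"email": "a@example.com"}, mailer)
    # These assertions couple the test to the current call sequence,
    # not to any behavior a user of the code could observe.
    mailer.render.assert_called_once_with("welcome", {"email": "a@example.com"})
    mailer.deliver.assert_called_once_with("a@example.com")
```

A test like this adds friction to exactly the activity it is supposed to support: changing the code base with confidence.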

The proper way to approach testing is openly and directly, by having an honest
debate about the level of testing necessary, the cost of testing, and the
processes needed to ensure that tests are high quality and provide value.
Sometimes this can result in a decision not to test some aspects of a
product. And that's OK, as long as the decision is made rationally.

Moreover, as Fowler says, automated testing is just one tool for increasing
the quality of a product, and in many ways it is not the most effective one.
Formal code reviews and beta testing, for example, have been shown to be among
the most effective techniques for finding defects. Additionally, they can find
defects that testing cannot: defects at the level of design.

