
The maintainability of unit tests - j_baker
http://blog.jayfields.com/2010/02/maintainability-of-unit-tests.html
======
aero142
I'm finding that the same things that make good regular code make good
testing code as well. Regardless of which layer you are testing (unit,
integration, functional), it is important to write a testing API. If you have
a bunch of copy, paste, change-one-line unit tests, it is going to be very
difficult to change an API. However, if you have one API call, "go and do this
high-level thing", then when the underlying API changes, you just change the
implementation of that one API-level function. The tests end up more readable
as well because they read like English. "Set up a basic environment that
looks like this." "Change the behavior of the one thing I am testing."
"Validate the response."

If you are writing functional tests, the same thing applies. For a UI, you
might have an API that reads like: "Go to the home page." "Create a user."
"Log in." "Go to the page I want." "Validate this one behavior."

Since all tests are using an API, you just change the implementation of the
API call. For example, if I change the number of fields required to log in, I
only have to change the login API call. The tests that used it don't care how
you log in, because that wasn't the thing they were trying to test. It was
just a pre-req.

------
jrockway
Refactoring and unit tests conspire to play against every inexperienced
programmer's greatest fear -- deleting code. If you are dramatically changing
the internal details of some code, you will have to throw away the tests that
touch that internal code. You're deleting that code, after all.

Your refactor is not unguided, though, because you still have tests for the
thing that consumes the unit that you are refactoring. That's how you know you
didn't break anything -- the tests for the rest of your program stay the same
during the refactor. (And you write new tests for the details of the code
that you are changing, so that when you later change something _that_ code
depends on, you still have tests.)

A technique I find helpful is for classes generally not to call methods
directly on objects that they hold references to. Instead of calling
$self->foo->bar, delegate $foo->bar to $self->bar. Then when you get rid of
foo, either by refactoring or by lack of need in a subclass, you still have a
bar method. Something else can provide it, or the class can provide it for
itself. If you follow this rule throughout your application, classes will only
be coupled to themselves, and that means more tests survive refactoring.
(Use of this technique is pretty tedious if you don't have a language feature
to do the delegation automatically. I use Moose, and it makes delegation to
has_a members trivial. Other languages are not so lucky.)
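For readers outside Perl, the same delegation rule can be sketched in Python by hand (Moose does this automatically; the class names below are made up for illustration):

```python
# Delegation sketch: callers use report.render(), never
# report._formatter.render(). If the formatter attribute later disappears,
# Report can still provide render() itself, and callers (and their tests)
# don't change. Names are hypothetical.

class Formatter:
    def render(self, text):
        return text.upper()

class Report:
    def __init__(self):
        self._formatter = Formatter()

    def render(self, text):
        # hand-written delegation, in the style of Moose's automatic
        # delegation to attributes
        return self._formatter.render(text)

r = Report()
print(r.render("done"))  # callers are coupled only to Report
```

The point is that tests exercise `Report.render`, so refactoring away the `Formatter` member leaves those tests untouched.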

------
jerf
This discussion boggles my mind, though I'm willing to assume there's
something to it. The reason I love my unit tests so much is that they _free_
me to refactor endlessly. In fact, given that I'm working in an old code base
that has no testing, the only thing letting me refactor to the extent I need
to accomplish my current task is the unit tests I'm adding.

And I'm not spouting propaganda here; I know this is a talking point, but
this is my _personal experience_. Take away this benefit and my desire to
write the tests would be much less.

Before unit tests, I would do design one, build a bit, discover I need design
two, build a bit more, discover I need design three, build a bit more,
discover I need design four but now design three is really embedded too deeply
into the system to know whether I can safely change things, and then the hacks
start going in because otherwise the subtle bugs start popping up faster than
I can catch or fix them. Unit tests don't eliminate the bugs popping up
unexpectedly after a design change, but in my experience they bring them down
far enough that the cost/benefit of the design change is positive again, which
in the long term pays off when we actually end up with the rather well-tuned,
powerful-yet-simple design thirteen. (Someday I hope to learn enough to get
there sooner, like, design six or something.)

Am I just lucky, or are they doing something wrong?

On the front of what I do differently than some, I actually reject the "unit
testing must test as small a unit as possible" dogma and the vast bulk of my
unit tests are actually what some people would call integration tests,
though usually rather low-level ones; they don't take 20 minutes to
run, either. I also avoid mock objects like the plague, aggressively
preferring to either build the interface to be testable in the first place, or
failing that, adding some hooks in the tested code itself, rather than trying
to push it onto the code from the outside. Perhaps that's the difference...
basically I tend to run my testing one level above the bottom layer on the
real-as-possible code, rather than right on the bottom layer of the code with
various fake bits attached. Despite what intuition (and dogma) might be
telling you, this usually means I can change that bottom layer quite
significantly with little to no test code changes.

(I don't follow this religiously, there is some bottom-layer code that gets
tested on-the-spot, like raw validators or the output of a parser, but
typically the code I test on the bottom layer never subsequently changes much
because it is so simple that there's simply no reason for it to change.)
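One way to read the "hooks in the tested code" approach, with names invented purely for illustration: the bottom layer exposes a small seam the test can override, and the layer above is tested against the otherwise-real code path rather than a mock:

```python
# Sketch: test one level above the bottom layer, on real code, using a
# small injectable hook instead of a mock framework. PriceFeed, Portfolio,
# and the clock seam are all hypothetical.

import time

class PriceFeed:
    # the "bottom layer": real implementation, with one injectable hook
    def __init__(self, clock=time.time):
        self._clock = clock          # seam: tests pass a fake clock

    def quote(self, symbol):
        return {"symbol": symbol, "ts": self._clock(), "price": 42.0}

class Portfolio:
    # the layer actually under test, running against the real PriceFeed
    def __init__(self, feed):
        self._feed = feed

    def value(self, holdings):
        return sum(qty * self._feed.quote(sym)["price"]
                   for sym, qty in holdings.items())

# In a test, only the hook is faked; everything else is the real code path:
feed = PriceFeed(clock=lambda: 0.0)
assert Portfolio(feed).value({"ACME": 2}) == 84.0
```

Because the test couples only to `Portfolio` plus one seam, `PriceFeed`'s internals can change significantly without touching the test.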

~~~
silentbicycle
Dr. Richard Hipp (the main author of SQLite) once said in a tech talk that one
of the big advantages of having thorough automated testing is that you can
freely optimize or replace individual components without being held back by
worrying about unknowingly breaking something. (I think he said they've
replaced the query optimizer several times.) This fits my experience, and I
think it's what Michael Feathers is getting at when he defines legacy code as
"code without tests".

I'm not convinced that letting testing _drive_ the design process is always a
good idea (though the benefits of test- _informed_ development are harder to
argue), but having thorough tests is usually a huge win. Having thorough tests
isn't free, particularly with good coverage, but it usually pays for itself in
the long run.

> I also avoid mock objects like the plague, aggressively preferring to either
> build the interface to be testable in the first place, or failing that,
> adding some hooks in the tested code itself, rather than trying to push it
> onto the code from the outside.

Same here. Mock objects can be useful when trying to retrofit tests onto
existing code, but that's just a concession to practicality. Reworking a huge,
untested code base to make it testable can be a great way to introduce bugs
unless done with extreme caution. (That's the main topic of Feathers's book,
FWIW.)

~~~
jerf
"I'm not convinced that letting testing drive the design process is always a
good idea"

Yeah, I gave full-on TDD the college try and did not like it. IMHO it might
not be a bad crutch for those who are still new to code design issues, but it
just involved too much closing my eyes to issues that I knew would be a
problem in the future and willfully blundering into design traps I knew about
in advance. Using it to learn about those issues is probably a good thing, but
it must eventually be discarded.

I still do sometimes write the tests in advance when it makes sense (usually
those same low-level things I referenced in my first post) but as a general
principle I don't find it gives _me_ the best APIs in the end.

~~~
hello_moto
I'm 50-50 on TDD. But I suppose that's because of the examples from the
people around me that I mentioned somewhere in this thread.

On one side, TDD makes sure that your code is designed to be as testable as
possible. That means developers have no excuse not to design their code to be
testable (whether via DI, looser coupling, or whatnot).

One thing about developers is that they're too confident that their code is
not buggy. When bugs are found, they just fix them and don't bother to
analyze and learn from their mistakes.

So what happens when they write untestable code and suddenly, two weeks
later, bugs manifest for whatever reason? If the untestable code is easy to
refactor, management probably won't mind a bit of a hiccup in the schedule.
But what if it takes more than a wee bit of extra time?

I think TDD is good when the developers are inexperienced. But at the same
time, it's human nature to be lazy and to make mistakes (or underestimate
things).

I don't know the best answer whether TDD is good or not.

~~~
silentbicycle
Right. The tests come from the same mind(s) that wrote the code. Some kinds of
bugs won't occur to them at the time, and may also be missed by the
corresponding tests.

Testing upfront is just another tool, whether by unit tests, static
typechecking, or a REPL. Often useful, but not a panacea.

------
johnwatson11218
To me this kind of article really illustrates that we are just at the
beginning of understanding automated testing. One idea that I think is very
interesting is having the test case abstracted into a meta model, with the
different kinds of tests derived from that. Think about the difference
between registering a new user using mock objects to mock everything from the
db to the ui, versus the same test scenario using something like htmlUnit.
Some things have to change, but there is something common to both test
scenarios, so how can that common part be refactored out?
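One way to picture that common part, with entirely invented names: express the scenario once as data, and let interchangeable drivers replay the same steps against mocks or against a browser-level tool:

```python
# Sketch of a scenario abstracted from its driver. The scenario list is
# the common "meta model"; each driver (mock-level, UI-level) interprets
# the same steps. All names here are hypothetical.

REGISTER_USER = [
    ("visit", "/signup"),
    ("fill", {"email": "a@example.com", "password": "pw"}),
    ("submit", None),
    ("expect", "Welcome"),
]

class RecordingDriver:
    # Stand-in for a mock-level driver; a UI-level driver would implement
    # the same four step types against a real browser session instead.
    def __init__(self):
        self.log = []

    def run(self, scenario):
        for step, arg in scenario:
            self.log.append(step)
        return "expect" in self.log

driver = RecordingDriver()
assert driver.run(REGISTER_USER)
```

The scenario data survives unchanged when you swap drivers; only the interpretation of each step type differs.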

Secondly, I think we may see a world where test suites are written by the
higher-level engineers and the implementation is farmed out to the cheapest
bidder. What if there comes a day when we don't even read the actual code any
more? Just look at the test report to see that the code is correct and
performs well? In that situation I can imagine the test suite becoming the
best design document that software engineering has produced.

