

Testing: Why Bother? - The Frequent Refactoring Excuse - yonix
http://codesheriff.blogspot.com/2011/11/excuse-5-frequent-refactoring-excuse.html

======
barrkel
When I'm not writing a library (which will have a well-defined API), for
perhaps the first third of a project, my "refactoring steps are too big" -
except they don't "turn out to be an adventure [I] regret every moment of
getting into". I'm usually writing or rewriting something that I haven't
written before, so the approach may change quite drastically as I find out the
previous approach was wrong. Hence the rewrite, and hence why it doesn't turn
out to be a misadventure.

But if I'm closing in on the home straight, I definitely want lots of unit
tests so that I can fix bugs and make minor course corrections without
breaking the rest of the functionality.

Writing tests first, in advance of any functionality, has not been profitable
for the kinds of programs I write. Unless I already know how the whole thing
will be composed at an architectural level, I don't know what to test, because
I don't have any unit boundaries in mind.

The old Ron Jeffries TDD Sudoku attempt, contrasted with Peter Norvig's
approach, comes to mind, only on a larger-scale problem.

For example, one project I have in mind was adjusting the Delphi compiler to
support anonymous methods, including supporting variable capture. At a facile
level, the unit boundary is simple: an expression tree before, an expression
tree after. But that's usually not a good way to test a compiler, because
creating the trees correctly is a challenge in itself, and one that is already
solved by the parser; and similarly, figuring out if the end result represents
the semantics correctly is more easily done by actually executing it. So you
tend to test in a black-box fashion, but with a unit-testing idea in mind,
where the syntax is just a way of describing the parameters to a function. So
you write some minimal anonymous method syntax, with expected output, and of
course the compiler barfs. It's going to be some time, and quite a bit of
code, before that minimal test passes, which violates the core tenets of TDD
(doing the _very_ simplest thing that passes the test, i.e. skipping over the
syntax and avoiding a syntax error, would be asinine). And the first approach
chosen
probably isn't right, for one reason or another that only becomes clear after
a few days or a week. What might work well on paper or in a Lisp prototype
falls over when faced with the limitations of how the existing codebase
represents things. So you throw it out and start again. Testing isn't buying
you much here.
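A minimal, runnable sketch of that black-box pattern, with Python's own compile/exec standing in for the toy "compiler" (the real setup would shell out to the actual Delphi compiler and run the resulting binary; `compile_and_run` is a hypothetical helper name, not anything from the real project):

```python
import io
import contextlib

def compile_and_run(source: str) -> str:
    """Stand-in for "compile this snippet, run it, capture its output".
    Python's compile/exec plays the compiler here purely to show the
    test's shape; a real harness would invoke the actual compiler."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(compile(source, "<snippet>", "exec"), {})
    return buf.getvalue()

# The "syntax is just a way of describing the parameters": a minimal
# closure capturing a local variable, paired with its expected output.
source = (
    "def make_adder(n):\n"
    "    return lambda x: x + n\n"
    "print(make_adder(2)(40))\n"
)
assert compile_and_run(source) == "42\n"
```

The test knows nothing about expression trees or capture records; it only pins source text to observable output, which is the unit-testing idea applied through a black-box interface.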

Ironing the bugs out, making sure all the corner cases are covered: tests
galore! No problem there. But the early stages, that's a lot more problematic.

------
adam-a
I used to be a test engineer and was weaned on the dogma of people like Martin
Fowler and Misko Hevery. Tests are very useful and can become indispensable on
large, complex projects. As I was evangelising tests to the devs on my team
for a year I had all these kinds of clever answers for people who didn't want
to write them.

Now that I'm a 'proper' developer, working at a startup building software from
the ground up, I hardly write any tests. My main beef is this thing about
refactoring vs rewriting: I like to have the freedom to change class
responsibilities and relationships in my code as the project develops and the
responsibilities and design solidify. Problems come up and get worked around,
and you can't always foresee these things. I find that sitting down and
writing a very well-intentioned set of classes, in a pretty hierarchy, with
defined interactions is not a good way to build new software. But unless you
write your software like this, you will inevitably spend lots of time
rewriting tests as you rewrite your application code.

Tests are good when you have an architecture and API sent from on high, for
which only the implementation needs to be worked out. Tests are not worth
their weight when your architecture is code-driven and you design as you go.
The best way to design software - that's another, very lengthy discussion.

------
nahname
If you are writing tests and can switch from iteration to recursion without
tests failing, you are missing the point.

Code quality is derived primarily from unit tests. Refactoring relies more
on integration tests. Refactoring SHOULD break some of your unit tests.
Refactoring should not break your integration tests. If you are refactoring
and breaking integration tests, you are either not refactoring (try
redesigning) or you are writing poor integration tests.

Also remember, you don't just refactor code; you should refactor your tests
too. Why would one piece of code be write-once-and-forget while another is
not?

~~~
adam-a
There are a lot of ideas about what unit tests should be and what they
shouldn't be. If the purpose is to ensure my code works with your code and to
check that my new changes haven't broken something in another part of the
project then the public methods on a class are the right thing to be writing
tests for, IMO, and refactoring the private methods or changing from a loop to
recursion shouldn't break your tests.

If you want to use tests to prove your clever algorithm is giving the right
output half-way through, then you're right. Certainly I wouldn't call those
unit tests though. And that's too low-level if you want to do TDD, where your
tests should define the public methods you want to implement. The
implementation shouldn't matter so long as the output is what you want.
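A small runnable illustration of that point (`factorial` is just a stand-in example, not from the thread): the assertions target only the public contract, so swapping the loop for recursion underneath changes nothing they can see.

```python
def factorial(n):
    """Public API under test: iterative implementation."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n):
    """The same contract after a loop-to-recursion refactoring."""
    return 1 if n < 2 else n * factorial_recursive(n - 1)

# Tests written against the public contract pass for both versions,
# so the refactoring leaves the suite green.
for impl in (factorial, factorial_recursive):
    assert impl(0) == 1
    assert impl(5) == 120
```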

------
cbs
_3) Are your refactoring steps too big? (Is it actually a "rewrite"?)_

Or maybe it's a sign that unit tests are your favorite hammer.

When, to support your testing methodology, you have to say that shuffling a
bit of code or structure around is probably a bad idea anyway because it's a
big change, you've gone off the deep end.

For better or worse, unit tests shackle your code in place. Maybe the
"refactor" or "rewrite" is a good idea, maybe it isn't. But it's not up to the
unit tests to decide that.

I've heard way too many times that the headaches introduced by unit testing
are actually problems with my XYZ rather than a negative consequence of
testing. If the tech you're trying to get me on board with comes with a long
chain of "you don't understand"s and "you're doing [other thing] wrong"s after
the fact, you've done a horrible job of explaining what you're selling in the
first place.

If I take this advice, and also practice TDD, I wind up in a situation where
iteration is impossible. What were once nice tools to check that my code does
what I expect, have suddenly locked me into a waterfall methodology. I
suddenly have to know what I'm going to do before I do it, and when I change
my mind changing the code significantly isn't an option either.

Just admit that tests are not a panacea. They add significant cost to a
codebase, and most of the time we accept that cost because it is more than
made up for in improvements to code quality. Please don't pretend that all
those costs are a good thing.

