

Is it a Good Idea to Write Tests for Legacy Code? - stevehaunts
http://stephenhaunts.com/2013/11/19/is-it-a-good-idea-to-write-tests-for-legacy-code/

======
pmiller2
No. It's a _great_ idea.

The author mentions "characterization tests," which document what the code
actually does as opposed to what it "should" do, or what the documentation
says it does. These kinds of tests are _gold,_ especially if you go back
through the bug tracker and create tests for the major bugs that have been
fixed. Doing this gives you a good framework for constructing a regression
test suite, which is what you really want when working with legacy code. It's
like the opposite of TDD, because you want a green light to start, rather than
RED -> GREEN -> REFACTOR.
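A sketch of what such a characterization test might look like, using a hypothetical legacy function (all names here are illustrative, not from the article):

```python
# Characterization tests record what the code ACTUALLY returns today,
# not what the spec or the documentation says it should return.

def calculate_shipping(weight_kg, country):
    # Stand-in for the legacy code under test.
    if country == "US":
        return 5.0 + weight_kg * 1.2
    return 12.0 + weight_kg * 2.5

def test_characterizes_us_shipping():
    # We observed this value by running the code, then pinned it down.
    assert calculate_shipping(10, "US") == 17.0

def test_characterizes_international_shipping():
    assert calculate_shipping(10, "FR") == 37.0

test_characterizes_us_shipping()
test_characterizes_international_shipping()
```

Both tests start green by construction; if a later refactoring turns one red, behavior has changed and you can decide whether that change was intended.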

OTOH, some code just isn't all that testable by its very nature. I'm thinking
of stuff that requires expensive, custom hardware that would be difficult to
mock out, for instance. Or GUI code. Or, if you're unlucky, like I was in a
previous job... GUI code that requires specialized hardware.

Also keep in mind the amount of work it takes to make the code testable. You
can easily break things in the process if you're not careful, or if the code
just wasn't written with testability in mind.

~~~
ronaldx
Red -> Green -> Refactor is important because it indicates that the test
genuinely behaves as required.

(RED shows you that the test fails when it should: there is good reason why
this step should not normally be skipped; then GREEN shows you that the test
passes when it should)

~~~
prawks
Parent was suggesting that an existing system's tests should already pass
(green) before you monkey with it (red).

Although if you're fixing bugs, red-green-refactor can work well.

~~~
AYBABTME
It's very possible that without a red test, your green test stays green even
for inputs you intended to make it red.

It's not enough to know that the test passes; you need to make sure the test
can fail. Otherwise it's like having a cybernetic canary that doesn't need
air to survive (great example, isn't it?).
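The failure mode described here can be sketched with hypothetical names: a test whose assertion is vacuous stays green no matter what the code does, and only seeing it fail first (the RED step) would have exposed that:

```python
def is_adult(age):
    # Deliberately wrong implementation: should be age >= 18.
    return age >= 0

def test_rejects_minors():
    result = is_adult(17)
    # Bug in the test itself: this asserts a tautology instead of the
    # result, so the test can never go red. Running it against the
    # known-bad code above would have shown it passing when it shouldn't.
    assert result == result

test_rejects_minors()  # passes, despite bugs in both the code and the test
```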

~~~
ollysb
If you were expecting the test to fail for some inputs you might as well just
add another test case for that scenario.

------
petercooper
Often yes, although obviously your mileage may vary, particularly with the
importance of the project (Your old blog? No. A busy e-commerce site?
Yes!)

I had an e-commerce system from a long time ago that was developed pre-TDD
mania. My post-TDD approach was to develop a suite of acceptance tests for it
testing a wide variety of typical use cases (add to cart, use discount codes,
check out, tax treatments for various countries, etc.) and this has since
saved my ass in a variety of ways without needing to go right down to the unit
testing level.

(As an aside, if you're working on a project, need to move super fast and
simply feel you haven't the time to do "proper" testing, be sure to do some
high level acceptance tests at the least. They _will_ save you time because
when the inevitable problems occur, you can just code the process to run
automatically rather than be clicking 101 times in the 23rd hour of that 24
hour hackathon ;-))
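Roughly the shape such a high-level acceptance test might take, against a hypothetical cart API (all names here are made up for illustration):

```python
# Minimal stand-in for the real e-commerce system; a real acceptance
# test would drive the actual checkout flow instead.
class Cart:
    def __init__(self):
        self.items = []
        self.discount = 0.0

    def add(self, name, price, qty=1):
        self.items.append((name, price, qty))

    def apply_discount_code(self, code):
        # Assume "SAVE10" is a known 10% discount code.
        if code == "SAVE10":
            self.discount = 0.10

    def total(self):
        subtotal = sum(price * qty for _, price, qty in self.items)
        return round(subtotal * (1 - self.discount), 2)

def test_checkout_with_discount_code():
    # One typical use case, end to end: add to cart, apply a code, total.
    cart = Cart()
    cart.add("T-shirt", 20.00, qty=2)
    cart.apply_discount_code("SAVE10")
    assert cart.total() == 36.00  # 40.00 minus 10%

test_checkout_with_discount_code()
```

Each test covers one whole user-visible flow, which is why a small number of them can catch so much without touching the unit level.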

~~~
frownie
I definitely concur with this. Acceptance tests are really powerful because
they somewhat "imply" unit tests. Using your gut feeling to write them makes
the process less brute-force than "proper" unit testing.

~~~
ollysb
As projects get older I find that the acceptance tests are the part that
change the least. Lower level there's always a degree of churn as you refactor
out duplication, consolidate interfaces etc. and the tests have to be
rewritten as you go. A good solid set of acceptance tests means you can
refactor away without having to worry about breaking the site.

------
wmij
My approach to situations when there are no unit tests for existing legacy
code is to start by writing a few coarse grained, or basic tests. Just enough
to get initial coverage on critical functionality. Then you can add a few more
tests over time.

What also works well is to give someone new to the project some initial tasks
writing tests to fill in gaps in unit test coverage. Just enough to get them
productive from the start, but don't task them with writing all the missing
tests. I've found that for most developers this helps them come up to speed
on getting their environment configured, looking at the code, and learning
your build and version control process from the start.

Ultimately it's going to depend on the project and how hard it is to go back
and add the tests.

------
jaggederest
Honestly it depends entirely on the expected lifetime of the changed code and
the complexity of the changes required.

If a particular application is just milking out another year or two until it's
replaced, it's probably not worth erecting unit test scaffolding around the
entire app. Sometimes it's safer to just change a couple constants and do a
smoke test than to really get into it.

That said, my experience was that it was immeasurably helpful in breaking
apart and working with older code - just taking a portion of my time devoted
solely to writing unit tests helped my understanding beyond what you'd expect.

------
moomin
This violates the law of headlines with questions in them: the answer is YES.
In particular, it's possible to turn a legacy system into a non-legacy system
this way.

~~~
davidgerard
That feels like a misuse of the word "legacy", but I know what you mean - the
sense of "legacy" as "an unknown quantity of technical debt just landed on my
desk."

~~~
moomin
Michael Feathers defines "legacy" as "something that doesn't have satisfactory
tests". Dan North adds "does something useful".

Without the tests, refactoring is dangerous. This means that the systems are
encouraged to remain static while the world around them changes. Systems that
do the job, but for the wrong reasons, are legacy.

That last one was me. :)

------
edem
Well, it depends on the code quality. If you have a system which is actually
testable (with some mocking) then it is fine. You are in trouble, however, if
that system is full of tightly coupled components, static calls, inlined
cache layers and so on. So you can't say "Yes" or "No" because it depends on
the context.
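For illustration, one common way to get such code under test is to introduce a seam: replace the inlined or static dependency with an injected one, so a test can substitute a fake. All names here are hypothetical:

```python
class PriceService:
    def __init__(self, cache):
        # Instead of reaching into a static/global cache layer directly,
        # the dependency is passed in, which is the seam a test needs.
        self.cache = cache

    def price_for(self, sku):
        return self.cache.get(sku, 0.0)

# In a test, a plain dict stands in for the inlined cache layer.
fake_cache = {"SKU-1": 9.99}
service = PriceService(fake_cache)
assert service.price_for("SKU-1") == 9.99
assert service.price_for("SKU-404") == 0.0
```

The tightly coupled version can't be exercised in isolation at all; this refactoring is often the prerequisite the parent comment is pointing at.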

------
mtkd
Take a look at Michael Feathers - Working Effectively With Legacy Code

~~~
jsymolon
2nd the book recommendation.

It's relatively up to date with regard to technique and UML use, but doesn't
overpower the reader with too much.

Written to a decent level (it's not a Knuth book).

One main critique: it covers a lot of common issues but no complex corner
cases, e.g. what to do about networked architectures such as CORBA.

Should be required reading at the University level.

------
leerodgers
It could be quite a large task to unit test everything. Another option could
be to unit test portions of the code that are impacted by the changes going
forward. This can be handled by padding the development time for work done on
the legacy codebase with some unit testing time.

------
taylodl
I've termed this type of development "Maintenance Driven Development"
(http://taylodl.wordpress.com/2012/07/21/maintenance-driven-development/)

Yes, it's a good idea to write unit tests for the existing code base, though
be careful about setting unrealistic expectations for test coverage. The goal
is to better understand the existing code base and to have tests around the
most critical functionality, providing evidence it hasn't been negatively
impacted by the code changes.

------
gregors
Any code you modify now is much more likely to be modified in the future
compared to any code you haven't yet touched. Test the code you touch.

