
Adding tests when you don't have time to - fagnerbrack
https://understandlegacycode.com/blog/3-steps-to-add-tests-on-existing-code-when-you-have-short-deadlines/
======
clarry
Meh. I keep running into this thing.. I want to test a (legacy) system but it
wasn't designed with testability in mind. Every time I look for advice, I run
into a stumbling block where the advice seems to apply to systems that were
designed to be testable.

In this case, right at step 1. Capturing the output and making sense of it
while simultaneously allowing for normal variation e.g. due to timing is one
heck of a big job and requires a lot of mocking. It's not a job that can be
done in hours; probably weeks, if you want something that doesn't get thrown
away the next time you have time to spend on testing.
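
One way around the timing variation, at least for text output, is to scrub the nondeterministic bits before comparing; a rough sketch (the patterns and log format here are invented for illustration):

```python
import re

def scrub(output: str) -> str:
    """Replace values that legitimately vary between runs
    (timestamps, durations, request ids) with stable placeholders."""
    output = re.sub(r"\d{4}-\d{2}-\d{2}T[\d:.]+Z?", "<TIMESTAMP>", output)
    output = re.sub(r"took \d+(\.\d+)? ?ms", "took <DURATION>", output)
    output = re.sub(r"request-id: [0-9a-f-]+", "request-id: <ID>", output)
    return output

# Two runs that differ only in timing and ids now compare equal:
run1 = "2024-01-05T10:00:01.123Z job done, took 42ms, request-id: ab12-cd34"
run2 = "2024-01-05T10:07:59.456Z job done, took 97ms, request-id: ef56-0a1b"
assert scrub(run1) == scrub(run2)
```

It doesn't remove the need for mocking, but it shrinks the "normal variation" problem to a list of scrub rules you can grow incrementally.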

~~~
room271
Small changes. If possible, pick a bit of logic and extract it into a (unit)
testable function. Release it, see if anyone shouts. Repeat.
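
A sketch of what one such extraction might look like (the pricing logic and names are invented for illustration):

```python
# Before: this calculation was buried inside a big legacy request handler.
# After: it's a pure function that a unit test can exercise in isolation.

def discounted_total(prices, discount_rate):
    """Extracted from a (hypothetical) legacy order handler."""
    subtotal = sum(prices)
    return round(subtotal * (1 - discount_rate), 2)

# The handler now calls discounted_total(...); the tests pin its behavior:
assert discounted_total([10.0, 20.0], 0.1) == 27.0
assert discounted_total([], 0.5) == 0.0
```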

Even then, it is often hard to justify this work/find the time. It depends on
how long you think you'll be working with the codebase really.

Edit: the above assumes you have something like CD/fast release cycle. If not,
then it's not really viable unfortunately.

~~~
Retric
That approach is almost guaranteed to introduce bugs into complex systems.
It’s fine if you’re doing something directly useful at the same time, but just
be aware of the risks.

Though that depends on what you mean by legacy. If nobody’s touched it in 5+
years it may well stay unchanged until the system is completely replaced with
something else. It’s well worth adding tests if something is, or will be, in
active development.

------
diggan
I don't really understand the obsession with snapshot testing. It seems like a
lazy way of testing that doesn't add any confidence that you're doing the
right thing. It only adds confidence that the initial value you took the
snapshot from stays the same going forward, which is not super valuable.

Snapshot testing is basically what used to be referred to as "master-knows
testing" (I don't remember if that's the exact term), where only the person
who initially created the test knows if it's correct, because the intent is
not exposed at all. A properly written unit test, by contrast, exposes what
you want that unit to do and focuses on only that part, so you can change
things in the unit without breaking those specific behaviors. Snapshot
testing would ruin this.
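
A toy contrast of what I mean (function and names invented):

```python
def format_name(first, last):
    return f"{last}, {first}".strip(", ")

# Snapshot-style: only asserts "same as last time"; the intent is invisible.
SNAPSHOT = "Doe, Jane"
assert format_name("Jane", "Doe") == SNAPSHOT

# Intent-revealing unit tests: each assertion names one behavior you want,
# so a newcomer can tell which change is a bug and which is intentional.
def test_last_name_comes_first():
    assert format_name("Jane", "Doe").startswith("Doe")

def test_missing_first_name_has_no_dangling_comma():
    assert format_name("", "Doe") == "Doe"

test_last_name_comes_first()
test_missing_first_name_has_no_dangling_comma()
```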

> You want to add tests, but you don’t have time to!

This also seems like a weird thing. The idea behind testing is to save you
time. If you're taking longer because you're adding tests, you're doing
something wrong! The whole idea is to make the feedback on whether something
is working faster, not slower.

~~~
true_religion
Snapshot testing is useful for catching regressions and nothing else. It’s
better than nothing, and in some cases is the only thing you can do if you
don’t properly understand what all the inputs and outputs of a system mean.

For example if working with a large legacy codebase, it’s important to
maintain previous functionality and only change it if you can rationalize why.
People and other systems might be relying on something that’s a bug, or a side
effect that’s not obvious.

If you don’t do this, then you can either choose to have no tests, or have to
tear down the whole system to understand it.

~~~
pydry
>Snapshot testing is useful for catching regressions and nothing else.

Visualizing and eyeballing snapshots in behavioral tests is a highly effective
way of catching bugs and defining behavior (edit code - run test - verify
output is correct, if it is, lock it down).

I find them to be vastly more effective _and_ cheaper to build than unit tests
even on greenfield code bases.

~~~
diggan
The painful part is "verify output is correct": only the person who
originally wrote the test can actually say whether it's correct or not.

If a new person joins the codebase and sees that a snapshot is now different,
how do they know what's correct? What I've seen in the wild is 1) talk
with the person who authored the test or 2) just say yes and move on.

~~~
Tyr42
I mean, you should have the context of your change, right? If you're
submitting a change to reduce the margins on everything, you should expect to
need a new snapshot.

If you're doing a pure refactor of your CSS, you shouldn't see a change.
Unless your CSS rules are order-dependent, and now you've caught that.

------
darepublic
Enough with the memes. I enjoyed the Boromir meme the first thousand times,
but I'm only clicking this article because I want to see what you have to
say, not to browse r/memes lite. It's also a poor deflection of the fact that
a deep topic such as this is getting such light treatment. If the article
doesn't amount to much in plain text, the gifs, colours, and box shadows
don't make it good.

------
ricardobeat
This is a shortcut to better coverage: brittle, possibly wrong, and not
maintainable; and it doesn't really give you any additional confidence in the
code. If you're pressed for time, why waste it on something with close to
zero value? Coverage by itself is an empty goal; you're better off with no
tests than with this.

~~~
dasil003
Suppose you're the first person to touch this code in a while. You don't
really know how solid it is, but given there's no tests you would be justified
in being a bit nervous.

Now you need to change it.

Any future bug discovered is probably going to come back to you, given you are
now the last person to touch it and the perceived expert on this legacy
monstrosity. Having proof of how the code worked before you touched it not
only prevents regressions while you're working on it, but also prevents having
to do this sleuthing on the fly with someone wagging a finger in your face.
Not only that, but depending on dependency drift you may not be able to easily
go back and prove the old functionality after the fact, so having this in a CI
log will be valuable for your sanity and reputation.

~~~
ricardobeat
I don't think that scenario is realistic. If you're making a change that
doesn't alter a function's output, why is the code for that function being
changed at all? That would only be possible if the snapshot didn't capture all
the relevant outputs to begin with.

Most likely the output _will_ change and you'll be left with a random blob of
data to figure out what went wrong instead of clear specifications.

------
rileytg
This is some of the best testing advice I've read. I follow the first two
points with enormous success. The third idea seems like it would solve a lot
of the cases where I've stumbled. Much appreciated!

reminds me a lot of how github's scientist works.
[https://github.com/github/scientist](https://github.com/github/scientist)

------
hinkley
Every time I forget that there is always enough time to write tests, I
quickly find I’ve been grinding gears trying to sort out something that a
test would have helped me visualize much faster.

~~~
WrtCdEvrydy
The less time you have for tests, the more useful having tests will be. -
'Testing Axiom 203'

------
bluedino
>> You can’t afford spending days to write the tests that should be here in
the first place.

We had an outage because of some code that got pushed to production with no
tests written for it. It was a very simple bug: basically, a list was being
appended to instead of replaced. Nobody noticed for a while because the right
data appeared to 'be there', which made things worse, because a lot of things
had to be reworked after the bug was fixed, as they had been going off the
bad data.

I mentioned that any new code should have tests. And the manager replied with
"We don't have time to write tests", and I had to bite my tongue to not reply,
"because we're spending all our time fixing simple bugs and reworking data"

~~~
bcrosby95
I don't understand the line of thought that automated tests are too time
consuming to add. You know what's more time consuming? _Manual_ testing. It
takes forever. And if you have to fix a bug somewhere, have fun manually
testing everything again, if you can even remember all the manual tests you
should be running.

For example, a project I work on is available in Asia, where people put their
last name first. So anything related to names needs 2 tests to make sure it
works right. Oh, and there is legacy data, so customers might be missing
first, last, or both names. Add another 3 tests to the pile. You're looking
at 5 tests right there.

Let's say the feature you're working on related to names has 3 dimensions.
Well, now you potentially have to run 5*3=15 tests. If one of those fails and
you bugfix, you have to re-run those tests. Have fun with that if your tests
are manual.
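
A sketch of that 5-case name matrix as a table-driven test (the display rule and names are invented for illustration):

```python
# Hypothetical display-name rule: family name first where present,
# falling back gracefully when legacy data left fields empty.
def display_name(first, last):
    return " ".join(p for p in (last, first) if p) or "(no name)"

cases = [
    (("Tarou", "Yamada"), "Yamada Tarou"),  # last name first
    (("Jane", "Doe"), "Doe Jane"),          # same rule, Western data
    (("Jane", ""), "Jane"),                 # legacy: missing last name
    (("", "Doe"), "Doe"),                   # legacy: missing first name
    (("", ""), "(no name)"),                # legacy: both missing
]

for (first, last), expected in cases:
    assert display_name(first, last) == expected
```

Re-running all 5 (or 15) cases after a bugfix is then one command instead of an afternoon of clicking.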

------
rooam-dev
When I hear "add tests" it's always a red flag to me. Imho tests are not
optional, they are implied, they are a tool to build your stuff (like
scaffolding). If tests are added at the end, how would one know the code
written before that was correct?

The later tests are added, the less of their value is leveraged.

~~~
janpot
IMO, where tests are adding most productivity is when they enable rapid
refactoring. When you test with that goal in mind it doesn't really matter
much where in the cycle tests are added.

~~~
rooam-dev
Yes, and it shouldn't matter whether it's day 3 or day 30: once you start
testing at the higher level/API, some classes don't need a separate test,
because they are exercised when invoked by other tested code.

------
ravenstine
I'd argue that if you feel like you _have_ to use snapshot testing to meet a
deadline, you probably don't even need the test in the first place.

It's one thing if you're using snapshot testing like your average test, to
make sure things don't break when you make changes. But if you're in a bind,
is a snapshot test really better than your own eyes?

Maybe it's better to release the thing in that case and add better tests
asynchronously, after the deadline.

I suppose the snapshot test in a pinch is not so bad if it's just changes
being made to legacy code. But I'm not sure I'd do such a thing when building
new features.

------
aero142
It seems the author is using legacy code to mean "bad code I didn't write". My
experience with legacy code is that it exists on one server set up by someone
who left the company 10 years ago, only runs on a particular version of a
runtime that isn't available to download any more, and if you run it locally,
it makes hard coded calls to the live environment, breaking it. Having a unit
test framework is the dream.

------
davidjnelson
You can also skim chapters in this book. Best book I’ve read on the topic,
highly recommended ( Working Effectively With Legacy Code )
[https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052](https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052)

------
rgoulter
A "snapshot test" like this would save me time if I wanted to check that the
same input yields the same output for some module. -- I can see wanting that
if I'm adding new code which doesn't change the old part of the module much,
or if I'm changing the implementation (but not the interface) of the module.

But, yes, in many cases it's not a good automated check.
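
A bare-bones version of such a check might look like this (a sketch, not a real snapshot framework; the module name is invented):

```python
import json
import pathlib
import tempfile

def check_snapshot(name, value, snapshot_dir, update=False):
    """Compare value against a stored snapshot; record it on first run
    (or when explicitly updating), return False on any difference."""
    path = pathlib.Path(snapshot_dir) / f"{name}.json"
    serialized = json.dumps(value, indent=2, sort_keys=True)
    if update or not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(serialized)
        return True
    return path.read_text() == serialized

# First run records the module's output; later runs detect regressions.
tmp = tempfile.mkdtemp()
out = {"total": 27.0, "items": 2}
assert check_snapshot("order_module", out, tmp)                # recorded
assert check_snapshot("order_module", out, tmp)                # unchanged
assert not check_snapshot("order_module", {"total": 99}, tmp)  # drift caught
```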

~~~
jpochtar
I like to "make the change easy, then make the easy change". This looks like 2
commits: 1st a refactor to make code more straightforward around the area I'm
about to change, and 2nd to actually make the change. Ideally, the 2nd should
be a small, local, easy to reason about change, enabled by code cleanup in the
1st.

The 1st commit in this sequence is a pure refactor, and definitionally should
change no behavior. The "snapshot test" described sounds perfect for this,
especially in unfamiliar code. Ideally, I'd go even further, and have a
compiler prove for me that my refactor produced a perfectly equivalent program
from a black-box perspective. Snapshot testing is great because it gets pretty
close, very cheaply, whereas the full program equivalence problem is
impossible.

~~~
staticassertion
Very simple - I really like that, thanks for sharing.

------
heydenberk
At a previous job, we had a law called "Peeler's Law", named after an engineer
who was a top front-line bug-fixer: if you don't have time to add a test,
that's when you need to add a test the most.

~~~
the_af
I find this to be a double-edged sword. It's undoubtedly true that the
_system_ needs tests, and if they can't be easily added, it _needs_
refactoring. And it often saves time in the mid/long run.

All of this is true. But what's best for the system isn't necessarily best for
the programmer. If you're under pressure to show results in order to keep your
job or get a good performance review, buggy untestable code is better than no
code because you spent time writing tests. Remember, _if_ the goal is to keep
your job and/or get that raise.

"But," you may argue, "in that case your job is terrible and you should quit."

I often see this advice in the bubble that is HN. The reality is that a lot of
programmers can't change their jobs that easily because of a multitude of
problems (age, difficulty interviewing, social anxiety, bad economy,
disabilities, pressing financial problems, etc).

~~~
Jtsummers
Well, even if you decide not to quit the decision will be made for you if you
choose to continue to develop a system without proper testing. Eventually your
customers will fire you.

~~~
the_af
You would think so, but this is very often not true.

You'd be surprised by how many businesses let you simply plod on with no
repercussions, while if you try to test (and reduce your apparent immediate
output) you can get a bad review or a PIP. In some, automated testing is not
even something they are aware of. But what happens to the software, you ask?
Well, all software eventually ends up either working/used or not
working/abandoned, and tests often have very little impact on that outcome...

This was for example the case in the "shared services" unit of a MAJOR oil &
energy company. Probably the first you will think of.

------
yegle
Isn't this considered "Change Detector Test"?

Basically what your test does is to ensure there's no change to the result
(compare the new serialized output to the old one). Google's ToTT had a good
write-up on this topic: [https://testing.googleblog.com/2015/01/testing-on-toilet-change-detector-tests.html](https://testing.googleblog.com/2015/01/testing-on-toilet-change-detector-tests.html)

~~~
peteradio
X considered harmful, barf. What is with these pretentious pricks who feel
the need to hold "strong opinions" (TM) to be considered "senior" (TM)?
Sometimes testing the black box is all you've got, that's it. A change
detector test is an absolutely valid way to prevent code from changing
behavior; that seems to be a pretty basic need and a very common form of
regression. If you haven't seen the case where that might be necessary, then
you live in unit-test heaven, or maybe you don't have a whole lot of
experience dealing with other people's legacy code or with real business
deadlines. Good fuckin luck!

~~~
ravenstine
I think that even the most "pretentious" opinions are worth learning about,
but people need to have the confidence to go their own way and do their own
thing when an opinion doesn't make sense to them. In programming, there are
very few things that are objectively bad, and even then what's "bad" is often
limited to who will be working on a given project.

The best example at the top of my mind is semicolons in JavaScript.

Opinions on semicolons seem to have changed a lot since I began to seriously
learn JavaScript. I remember most people saying to "always use semicolons"
without really explaining why. Some of the more seasoned developers would
point out a few reasons why not using semicolons could lead to issues, yet
even though one with a fair amount of knowledge about how JavaScript is
interpreted could simply avoid those issues in the first place, people would
still use said reasons to label semicolonless JavaScript code as "objectively
bad". It didn't even matter to these people whether the code actually worked
perfectly well, or if the code was just as readable, or if it was the
preference of the authors of the code. The zealotry towards semicolons
reached the point of obvious seniority signaling; most of these people just
came off as pretentious and unhelpful.

If someone says you shouldn't do something, but what you want to do makes
perfect sense to you, go ahead and do it. You might run into issues, or you
might have no problems at all. I use semicolons in JavaScript today, but I
never had problems when I used to write it without semicolons despite how I
was told by everyone that it was "dangerous".

