

What, more tests are always the best way to improve my product? - moconnor
http://yieldthought.com/post/1430830776/what-more-tests-are-always-the-best-way-to-improve-my

======
michaelfeathers
As much as I advocate automated testing, I've also seen people waste time
with it by chasing coverage goals.

If you are writing a new class or a new method, by all means, write tests for
it. Getting tests around all of your currently existing code, however, is
often a waste. You need to test the areas you are changing and the areas which
are impacted by the change. In any code base, there are hotspots, places where
both change frequency and complexity are high. Tackling those first with
automated testing often gives you the best ROI.
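
As a sketch of how you might rank those hotspots (the file names, churn
counts, and complexity numbers here are invented; in a real repo you'd pull
churn from `git log --name-only` and complexity from a tool like radon):

```python
# Hypothetical sketch: rank files as test-investment "hotspots" by combining
# how often they change (churn) with a rough complexity measure. High churn
# AND high complexity together signal the best ROI for automated tests.

def hotspot_score(churn, complexity):
    """Score a file: product of change frequency and complexity."""
    return churn * complexity

def rank_hotspots(stats):
    """stats: {filename: (churn, complexity)} -> filenames, riskiest first."""
    return sorted(stats, key=lambda f: hotspot_score(*stats[f]), reverse=True)

stats = {
    "billing.py":   (40, 900),  # changes constantly, large and tangled
    "models.py":    (25, 400),
    "utils.py":     (60, 120),  # churns a lot but is simple
    "constants.py": (2, 50),    # rarely touched, trivial
}

print(rank_hotspots(stats))  # billing.py ranks first
```

The point of the product is that a file needs both factors to rank high:
`utils.py` churns the most here but scores below the complex, often-changed
`billing.py`.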

Remember, you aren't writing automated tests for existing code to find bugs,
you're writing them to characterize the current behavior and get a behavioral
invariant so that you can refactor and also so that you can add features with
the knowledge that you haven't changed the old behavior in unexpected ways.
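
A minimal sketch of such a characterization test (the legacy function and
its quirks are invented for illustration): you assert what the code does
today, observed by running it, so a refactor that changes behavior fails
immediately.

```python
# Hypothetical legacy function: we pin down its *current* behavior,
# quirks included, before daring to refactor it.

def legacy_format_name(first, last):
    # Old code: upper-cases the last name and strips stray whitespace.
    return f"{last.strip().upper()}, {first.strip()}"

def test_characterizes_current_behavior():
    # Asserts what the code does (observed by running it once),
    # not what a spec says it should do.
    assert legacy_format_name(" ada ", "lovelace ") == "LOVELACE, ada"

test_characterizes_current_behavior()
```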

------
wkornewald
Test coverage is a fake feel-good metric: it makes you want to increase the
number, and that gives you the impression of progress. Real progress is
measured differently. Test as much as necessary, but not everything.

~~~
bphogan
I disagree. Test coverage is a wonderful metric.

100% coverage is a feel-good metric. I can easily achieve 100% coverage with
bad tests. But I routinely use coverage as my guide to find things that might
benefit from further testing, as in "oooh this path is never tested and stuff
is more likely to break here".
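
A toy sketch of that first point (the function and test are invented): this
test executes every line of `apply_discount`, so a coverage tool would report
100%, yet it asserts nothing about the results.

```python
# Invented example of a "bad test" that still yields full line coverage.

def apply_discount(price, percent):
    if percent < 0 or percent > 100:
        raise ValueError("percent out of range")
    return price * (1 - percent / 100)

def test_full_coverage_no_confidence():
    apply_discount(100, 10)        # happy path runs, result ignored
    try:
        apply_discount(100, 200)   # error path runs too
    except ValueError:
        pass
    # Every line executed, so coverage is 100% -- but a bug like
    # `1 + percent / 100` would still pass this test.

test_full_coverage_no_confidence()
```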

I try really really hard not to write useless tests -- I use tests to drive
the
development of my app. I need feature x, so I write a test for it. I end up
with relatively high coverage because of that. And because I've gotten more
disciplined over the years, I'm only implementing the stuff that passes my
tests, so I'm less likely now to have things that aren't covered.

So, I view coverage as a guide, not a rule; it's not a useless metric at all
unless you fixate on an arbitrary percentage.

------
davidsiems
This whole debate is getting wildly out of hand.

Just because someone says 'I don't unit test' doesn't mean they're not testing
their code.

Most programs spend 80% of their execution time in 20% of the code. Test that
code and fix the outlying bugs as they come up.

Most of the hard bugs crop up where subsystems interact. The bugs inside the
subsystems themselves are usually pretty easy to spot and fix.

The goal isn't to produce a 100% tested product that you can have supreme
confidence in when you deploy. The goal is to provide a reliable service that
people are going to pay you for. It turns out people pay for things that
aren't perfect all the time.

Every line of code you write comes with a maintenance tax. This includes your
test code. This is a fact.

Unit testing is a tool. Use it wisely and it will help you. Using it for
everything is overkill.

Come to terms with the fact that there will be bugs, no matter how much you
test.

------
jtbigwoo
I agree that arbitrary code coverage goals are foolish, but there's a huge
benefit that we seem to be glossing over. A good suite of automated tests can
double or triple the productivity of a new team member. Whenever I join a
project, I spend the first few months working at half speed or slower because
I investigate every dependency before I make a fix or build a new feature.
Even working on an unfamiliar part of a big system slows me down as I ask
questions and double-check dependencies.

Every large team (or team that plans to get large) should consider unit tests
to help keep their velocity constant.

~~~
dkarl
_Whenever I join a project, I spend the first few months working at half speed
or slower because I investigate every dependency before I make a fix or build
a new feature. Even working on an unfamiliar part of a big system slows me
down as I ask questions and double-check dependencies._

Carelessly changing things you don't understand and expecting the tests to
save you does not count as "productivity." Good design decreases the amount
you need to know to safely and productively hack on a system. Tests don't.
Tests can save an experienced and knowledgeable person from their inevitable
mistakes -- they can tell you something is wrong -- but they can't direct you
toward the right solution. Someone who doesn't know what they're doing can
hack by trial and error until the tests pass, but at that point their code is
still likely to be wrong.

------
espinchi
The key is to write high-quality tests: ones that provide high 'bang for
buck' and are unlikely to break unless the requirements change.

What the article says is certainly true: there are trade-offs to make, but
one needs to take into account how effective and maintainable those tests
are.

~~~
moconnor
High bang for buck is definitely key. I wonder if the differing points of view
really come from different kinds of development - consulting vs product
development.

~~~
mjw
I too have noticed that some testing practises seem particularly well suited
to (and preferred by) consultants and agencies working on short-to-mid-term
web development projects.

When they go on to over-generalise TDD as the one true 'professional' approach
to all development on all projects, it can become a little grating though.
Like all things, the right level of test coverage and the methodology used to
achieve it, are a matter of trade-offs and the right trade-off can vary a lot
depending on the type and lifespan of project, the way requirements are
gathered and the project is governed, the business goals, the people involved,
etc etc etc.

------
joshcrews
The tests today are a way of documenting the developer's expectations (in an
executable test) so that you can extend, refactor and delete code in the
future without the 'edit and pray' method. And bug hunting time is massively
reduced because the bug broke the test suite 10 minutes after being introduced
as opposed to 2 weeks later in production.
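
As a sketch of that fast feedback loop (the parser and its bug are invented
for illustration): once a bug is fixed, a test pinned to the failing input
catches any reintroduction minutes later, not weeks later in production.

```python
# Hypothetical regression test pinned to a fixed bug.

def parse_port(value, default=80):
    """Parse a port number from config text, falling back to a default."""
    value = value.strip()
    if not value:        # the original bug: empty input crashed int("")
        return default
    return int(value)

def test_regression_empty_port():
    # Pins the exact input from the original bug report.
    assert parse_port("   ") == 80

def test_regression_explicit_port():
    assert parse_port(" 8080 ") == 8080

test_regression_empty_port()
test_regression_explicit_port()
```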

I can't document this, but many software projects fail or are forced into
rewrites and rescues. That never happens when you have high code coverage.

Has any rescue team ever come into a project and gone 'Well, the test suite is
in excellent shape, so this shouldn't be that hard'?

That's what I believe, but many products won't have a huge payoff for unit-
testing every method. Most of my code coverage is more like automated QA tests
through Cucumber / Capybara / Selenium in Ruby.

~~~
jbarciauskas
You prefaced this with "I can't document this", but the statement that
"[software project failures] never happen when you have high code coverage"
seems outlandish. I'm certain there are thousands of projects failing right
now with 80%+ code coverage, due to requirements documentation that doesn't
match the users' expectations and developers writing trivial rather than
intelligent unit tests.

~~~
Tamerlin
I've witnessed that a number of times -- at one company I worked with, a team
of 30 people was blocked even though all of the unit tests passed fine.
Someone had inserted something into some shared code (which, with even
marginally competent engineering, would not have been shared) that rendered
the entire application stack inoperable. I think it was a monkeypatch (Ruby
on Rails), so it was VERY hard to detect.

------
AngeloAnolin
Whilst I believe that good TDD practices should be the norm in any software
development, there is still a balance to strike: your tests should cover the
operations that are vital to your software. I think it is overkill to attach
tests to even the most mundane parts. There is a trade-off between writing
test scripts for all of the code and limiting testing to what is essential.

Having said that, I would still prefer some level of testing in my code,
especially for the crucial processes; for the rest, I leave it to my users to
give me feedback that the tests simply won't be able to cover.

------
hartror
In a perfect world with perfect coders we wouldn't need tests, things would
just work. So tests are a trade off, insurance against our human fallibility.
As with other types of insurance you weigh the probability and costs involved
in something failing and invest accordingly in insuring it.

Covering every edge case isn't necessary. Tests should cover core
functionality, with regression tests added as bugs turn up. We hover in the
mid-to-high eighties with our test code coverage and find that getting more
than that is overly time consuming (Python/Django, btw).

80/20 rule coming into play?

------
Roboprog
Is this a call for balance, or a call not to waste any time on testing at all?
Perhaps it's not the latter, but will be perceived that way by some.

Disclosure: I am biased towards test driven development. Of course, I also
worked, years ago, at a place selling (niche) program development tools. Some
value was placed on correctness. YMMV.

~~~
moconnor
This is a call for balance. Not testing at all is not a productive use of
resources either.

~~~
michaelfeathers
I agree with both of you. The thing is, we don't need to appeal to a sense of
balance. It's good practice to write tests for the things we change. But, in
every code base under version control, we have data. We can make this
empirical. Start aggressively writing tests for the hotspots.

------
anirudh
The main reason unit tests are useful: catching regression bugs.

------
rue
Tests are the best way to ensure you actually _are_ improving your product
(rather than breaking it or introducing bugs).

