
Why Most Unit Testing is Waste [pdf] - henrik_w
http://www.rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf
======
peterclary
From the paper: "Throw away tests that haven’t failed in a year."

Hell, no.

Our unit tests provide reassurance that when somebody revisits a component N
years from now, and makes a change, they are significantly less likely to
break the existing behaviour, even subtly.

Throwing away unit tests is like saying "This ship has been sailing for years
and has never sunk - let's throw away the lifeboats!"

Now, yes - somebody could erroneously change the test condition to make it
pass, although hopefully that kind of change would be spotted by even a
cursory code review. You can say the same thing about developers who
carelessly suppress compiler warnings without understanding what they're
telling them. However, the support is there in case I, or another developer,
make a mistake.

FWIW, catching future mistakes is definitely not the only thing for which we
use Unit Tests. The problem domain is hard, and for every test fixture we've
written we've found at least one bug. Catching and debugging these bugs from
the unit test runner is a lot easier than spotting and debugging these kinds
of issues at runtime.

~~~
e28eta
I think it'd be impossible to know which tests haven't failed. What if they
failed on a developer machine because he introduced a regression, but it was
fixed prior to checkin?

Your CI build thinks the test hasn't failed, but instead it was performing an
important job.

Those tests probably are a good place to start if you think you have useless
tests, but it's simply a heuristic.

------
programminggeek
A good design is going to provide more value over time than a good test suite.
A lot of developers spend a lot of time writing tests for terrible design on
prototype codebases. Those prototypes eventually are thrown into production
full of bugs and the answer tends towards "more tests!" instead of "better
design!"

Most developers just keep building on top of the same codebase until it is a
total mess, long after the team has learned a bunch and should refactor or
throw away the code and build something designed for production use.

There is really not enough discussion about when the right time is to do
thorough testing, or about when a project transitions from prototype to
production quality.

I realize things like software design, planning, and thoughtfulness are heresy
in the agile world of 2 week sprints, but most of these things are our own
fault due to a lack of planning and craftsmanship. Complain all you want, but
in general it's our own fault.

~~~
tigroferoce
> Most developers just keep building on top of the same codebase until it is a
> total mess, long after the team has learned a bunch and should refactor or
> throw away the code and build something designed for production use.

That's exactly where a good test suite comes to save you. If you have tests
you can refactor as much as you want until you get the code you want. If you
don't you will never have enough time to refactor the code base because the
effort of manual testing will be too high. And as a result your code will rot
over time.

~~~
programminggeek
Yes, but if you are leaning on your test suite to verify a huge refactor, you
are unlikely to make the kind of sweeping design changes that are sometimes
needed.

For example, say your ORM is causing major trouble and you need to replace
everything from the controller back to the database. Heck, maybe you decouple
the whole backend out into an API. The only tests that help you are the
end-to-end tests. The rest of your test suite for your backend needs to be
thrown away as much as your backend itself does.

------
michaelfeathers
"When I was programming on a daily basis, I did make code for testability
purposes but I hardly did write any unit tests. However I was renowned for my
code quality and my nearly bug free software. I like to investigate WHY did
this work for me?"

Here's why: anything that focuses attention during development improves
quality. Unit tests can be that thing but that means that their value is in a
place that Cope is not describing.

See:
[http://michaelfeathers.typepad.com/michael_feathers_blog/200...](http://michaelfeathers.typepad.com/michael_feathers_blog/2008/06/the-flawed-theo.html)

------
revscat
I'm back in Java-land these days, which is culturally very pro-unit testing.
After getting exposed to it again for a few months I've come to side
with the author here. I've never really been comfortable with the amount of
time certain people dedicate to unit testing, especially the TDD crowd, but in
my hiatus something has arisen in popularity which has made it all the worse:
Mockito.

Prior to Mockito, unit testing was (more or less) limited to testing that your
methods behaved as expected, and would occasionally expose
NullPointerExceptions or other exceptional conditions. Dependent objects were
either simplified or simply ignored. With the rise of mock object frameworks,
however, your tests specifically say "this method on this mock will be called
X number of times, with this result". Mind you, this is all happening in the
context of another method call. So, for example, if you were testing the
method "calculateDueDate", and that method took a DateTime object, your test
might look like this:

    
    
            // mock(), verify() and times() are static imports from org.mockito.Mockito
            private MyClass myClass;
            private DateTime mockDateTime;

            @Before
            public void setup() {
                // fields rather than locals, so the test method can see them
                myClass = new MyClass();
                mockDateTime = mock(DateTime.class);
            }

            @Test
            public void specificDueDateShouldBeTenDaysFromNow() {
                DateTime result = myClass.calculateDueDate(mockDateTime);
                verify(mockDateTime, times(2)).getHour(); // contrived
            }
    

The problem with this is that the tests become obstacles in the way of
refactoring the code. Should you decide that you don't want to use the
DateTime library any longer you will have to not just change the code which is
using it but the tests as well. Or what if, going back to the example above,
you decided not to use the getHour() method any more? Every test referencing
that will have to be changed. And changing those tests is very likely to be
more involved than changing the code under test, because there frequently are
more tests than code. This has a negative impact on the design of the
application. Because companies rarely dedicate resources to making existing
software better purely for its own sake, you tend to have to do what you can
when you can. This means your time is limited to make that refactor, or
upgrade that library, or make whatever change it is that needs to be done.
Unit tests, especially those that use mocks, can get in the way of this to
such an extent as to make such efforts impossible.

I think testing is important. I do not, however, share the belief that it is
the sole, or even primary, determinant of code quality. In fact, an
over-reliance on unit testing can _easily_ be a net negative. Should unit
tests be thrown
out? No. Baby with the bathwater and all that. But they should not be viewed
as a silver bullet, either. They're not. They can help, but they can hurt.

~~~
PaulHoule
I've had two experiences with unit testing recently that have made me a
believer.

One of them was that I was working on a team where a programmer quit and I had
to get a very complex codebase ready for production. The last programmer was
terrible, the kind of guy who had trouble making primary keys that were
unique, where any code that could possibly have a race condition did, and so
forth. The code had unit tests, however, and that made it salvageable, and
eventually I got the system to a place where it worked correctly and the
customers loved it.

In nine months of effort on this, I ran into one refactoring where it felt
like the unit tests were a burden, and that involved about a day of work
rewriting the tests. Unit tests are more likely to be a problem, however, when
they add to
the time of the build process. For instance, I wrote something in JUnit that
hammered part of the system for race conditions, and this was key to fixing
races in that part of the system. It fired off a thousand threads and took two
minutes to run, and adding two minutes to your build is a BIG PROBLEM,
particularly if anybody who wants to add two minutes to your build can do so
and if anybody who wants to remove two minutes from the build is called "a
complainer" and "not a team player". Overall the CPU time it takes to run is
more likely to be a problem than the developer time it takes to maintain them.

As for Mockito I have found it is a great help for writing Map/Reduce jobs. As
I don't own a big cluster and as I sometimes like to code on the run with my
laptop, an integration test typically takes ten minutes with Amazon Elastic
Map/Reduce. It takes some time to code up tests, but I get it all back with
dividends because often I get jobs running with two or three integration test
cycles instead of ten or twenty. When I find problems in the integration
tests, usually I can reproduce them in the unit tests and solve them there.

Now, it did take considerable investment to get to the point where unit
testing worked so well for me. I used to have problems where "the tests
worked" but the real application didn't, because Hadoop reuses Writable
objects: if you just pass a List of objects to the reducer, you might get
different results in a test than you do in real life. Creating an Iterable
object that behaves more like Hadoop does solved that problem.
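
For readers who haven't hit that Hadoop quirk: the reducer receives the same
Writable instance on every iteration, overwritten in place. A minimal sketch
of that kind of test helper (hypothetical name and shape, not PaulHoule's
actual code; imports from java.util and org.apache.hadoop.io assumed):

    // Hypothetical test helper: reuses one Text instance across the whole
    // iteration, as a real Hadoop reducer does, so code that holds stale
    // references fails in the test instead of only in production.
    // imports: java.util.Iterator, java.util.List, org.apache.hadoop.io.Text
    static Iterable<Text> hadoopLikeIterable(final List<String> values) {
        return () -> new Iterator<Text>() {
            private final Text reused = new Text();       // single shared instance
            private final Iterator<String> inner = values.iterator();

            @Override public boolean hasNext() { return inner.hasNext(); }

            @Override public Text next() {
                reused.set(inner.next());                 // overwrite in place
                return reused;
            }
        };
    }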

Generally if you are feeling that "unit testing sucks" or "mockito sucks",
it's often the case that you're not doing it the right way.

~~~
collyw
> Generally if you are feeling that "unit testing sucks" or "mockito sucks"
> it's often that case that you're not doing it the right way.

Well, explain further. I hate these smart-arse-sounding comments - "you are
doing it wrong" - without any indication of why, or how to do it better.

~~~
jervisfm
My sense is that one should make the unit tests as resilient as possible
to refactoring and changes. This means that as long as the public behavior
of the class does not change, one should not need to make many, if any,
updates to the tests.

Thus any test that is written in such a way that it would present an issue
when refactoring code should be avoided if at all possible. A simple example
is directly constructing the class under test in the test method:

    
    
      @Test
      public void tryToAvoidDoingThis() {
          MyClass myClass = new MyClass(param1, param2);
          // do stuff to myClass
      }
    

If this is done for each test method, then when the constructor parameters
change, e.g. a new one is added, each of the constructor calls in the test
methods will have to be updated.

Instead, have a level of indirection and have a single method that can create
a sample MyClass. Now when the parameters change, only one construction site
has to be updated.
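
A minimal sketch of that indirection, reusing the hypothetical MyClass and
params from the snippet above:

    // The single construction site: when MyClass gains a constructor
    // parameter, only this helper has to change, not every test.
    private MyClass createMyClass() {
        return new MyClass(param1, param2);
    }

    @Test
    public void prefersTheHelper() {
        MyClass myClass = createMyClass();
        // do stuff to myClass
    }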

In general, unit tests should _not_ be testing specific / internal
implementation details of the class. Rather, the tests should verify the
documented public behavior of the class.

~~~
emn13
There's an inconsistency here: unit tests should depend only on the public
behavior of a class; the constructor is public; constructor calls should
nevertheless be avoided where possible.

~~~
fcanas
Factoring out a common constructor in tests is an example of making the tests
resilient against changes in the underlying code. If the constructor changes,
the tests need to be fixed in one place, not in 50.

Other examples may be around a `setup` method. If the method is private, don't
test it. Then you can refactor freely. If it's public, test the pre/post
conditions around the method in as few places as possible (hopefully one).
Even if other tests rely on the object having been "setup", just trust that it
works. If the specification of `setup` changes, you just have the `setup`
tests to update, not the entire object.
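
For instance, something like this (a hypothetical `Widget` with a public
`setup` and `isReady`, purely for illustration):

    // The one place where setup()'s contract is verified; every other
    // test just calls widget.setup() and trusts it.
    @Test
    public void setupLeavesWidgetReady() {
        Widget widget = new Widget();
        widget.setup();
        assertTrue(widget.isReady());   // postcondition, asserted once
    }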

------
morganherlocker
It varies from project to project, but my flow nowadays is something like
this:

1\. Describe the behavior of the function in a readme with a code example.

2\. Create a test that is mostly just a copy/paste from the readme code (see
the sketch after this list).

3\. Write the actual function until it passes the first test.

4\. Write a few more tests for any edge cases I can think of.

5\. Get the function to pass again.

6\. Encounter another edge case, goto 4.
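
As an illustration of steps 1, 2, and 4 (a hypothetical `Slugify.slugify`, in
Java for consistency with the other examples in this thread), the README
snippet becomes the first test almost verbatim, and edge cases get appended:

    // README says: slugify("Hello, World!") returns "hello-world"
    @Test
    public void readmeExample() {
        assertEquals("hello-world", Slugify.slugify("Hello, World!"));
    }

    // step 4: an edge case discovered later
    @Test
    public void emptyInputYieldsEmptySlug() {
        assertEquals("", Slugify.slugify(""));
    }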

It seems to work pretty well for me anyway. I think a lot of the debate really
centers around unit tests vs integration tests (no one serious I know of is
arguing for no testing, or all manual testing).

In my experience, you get the most bang for your buck with mostly unit tests
in library code, and mostly integration tests in applications. Despite the
difference in use cases, I think both sides talk past each other most of the
time.

Unit tests all over the place in application code tend to be a waste,
because the tendency is to test the libraries your application is using.
Integration tests in library code are often terrible, because they are
usually slower and tightly coupled to external libs.

~~~
rhizome
I've been considering for some time writing the documentation for an app
first, in a README restricted to Cucumber grammar and language, and seeing how
everything develops from there.

------
herge
We are looking at updating our version of Django in the next year, and I am
quite happy that we have a battery of unit tests that test how the features of
the current version of the framework work with our code. I will be a lot more
confident about upgrading versions because of what might appear at first
glance as 'wasteful unittests'.

Also, as a rebuttal to the author, how can you expect crappy developers who
write crappy tests to write good code?

~~~
collyw
Integration tests would be more useful, no?

~~~
rhizome31
The more I become experienced with testing, the less I think the difference
between functional, integration and unit tests is important. What's important
is to have a test suite with good coverage that runs fast. I found that a mix
of high level and lower level tests is a good way to achieve this goal and I
don't mind mixing them in the same files. High level tests provide coverage
more quickly, but some stuff, such as verifying that an exception is handled
properly, is easier to test with unit tests.

------
jayvanguard
Pragmatic unit test coverage is the way to go. Most of the summary advice
isn't too bad but this statement is beyond absurd:

> • Throw away tests that haven’t failed in a year.

Apparently the author has never worked on a large software project with many
versions supported over 5+ years. The value of unit tests increases over time
as the software and libraries it depends upon change.

~~~
boobsbr
Heck, I work with software that is 10 years old, and the only thing that keeps
us sane is very meticulous testing.

Seems like the author never heard of testing the tests: mutation testing.

------
geebee
Some of my comment here may be due to a difference in how we view unit tests
vs functional or integration tests, and it may also have a lot to do with my
use of ruby rather than a language that needs to be compiled. But I hate
having less than 100% test coverage; I find it unsettling.

One thing I've often said to sell people on the concept of testing: you're
already testing. Guaranteed. If you're writing a complicated method, you're
almost certainly calling it on the command line and checking output, right?
Well, those things that you're writing to check it? save them in a folder
called unit tests. You've probably written some code to ensure that different
bits of code work properly together? Save those in a folder called integration
tests. You may be opening a browser to make sure that the app is behaving the
way you expect. Save those tests in a folder called integration. All of this
would make testing natural, rather than something you impose onto your project
that feels contrived and vaguely forced.

I will admit that there is some overhead on all of this. Rather than just
typing into a browser and checking the results, you'll need to figure out how
to automate that process. Rails makes this quite easy in my opinion, and rails
does blur the lines between a few of those categories I mentioned above. I
wouldn't worry too much about that - if you're writing tests that cover these
scenarios, I wouldn't bother getting too deep into the semantics of testing.
You're good.

I run a code coverage tool because I like to see what I haven't tested. Often,
it turns out I left some code to rot and forgot about it. My gut feeling is
that code rot is probably the most confusing thing to a new developer, and one
of the hardest things to deal with (can I remove this? should I?) Leaving
around obsolete code is kind of a burn on anyone who comes after you.

I have mixed feelings about TDD. I will say that when things are going
swimmingly for me, I am often doing TDD. But it requires a clarity that I
sometimes just don't have. More often, I'm writing tests and code essentially
in parallel with each other (like, write a line of code, write a test, write a
line of code, write a test). I suppose I could reverse them.

One last thing - I think that tests, especially the integration tests, should
give you a very good sense of what an application does. In short, if you
wanted to give someone a demo of your system, a walk through every case in
your integration tests would probably be a very good way to illustrate what it
does.

~~~
ulisesrmzroche
You're not doing Test-Driven Development if you're writing code and then
writing tests, though. You must write your test first, and only after you see
it failing, write the code to make it pass. You have to stick to this cadence
if you want to put TDD in practice.

~~~
geebee
Agreed... though I guess I gotta ask, what did I write that made you believe
that I think writing code before tests is TDD?

~~~
chris_mahan
I think your comment just made me understand what was nagging me about
testing: writing tests is a form of decomposition. The method/function is
complex/complicated, and therefore hard to reason about. The unit tests limit
the scope of input so that, in particular instances, the method/function
becomes simpler to reason about, enough that a test can be written.

Sadly, I feel there are things for which tests can't be written. Say a
function takes two integers and adds them together (the typical 2+2=4,
therefore n+n=n). Does this work when the integers are near-infinitely large
(trillions of zeros - theoretically possible with Python)? How would a unit
test validate this?

If you wanted to test all combinations possible, you would have to brute-force
until the end of the universe, or until the machine runs out of hard drive
space (or SAN space), whichever comes first. If you wanted to take a
statistically significant sample, you would only have an elevated level of
certainty, not an absolute level of certainty.

I think what the author is pointing at is that, like mathematics (which,
let's face it, the human brain is much better at than computers), programming
is best done in the human brain, and that once the human brain has satisfied
itself of the proper functioning of the program, the coding becomes simple.

And the program no longer needs to be decomposed, because it is understood as
a whole.

~~~
romaniv
_If you wanted to test all combinations possible, you would have to brute-
force until the end of the universe_

You only need to test the combinations of input variables that affect the
execution flow. However, to do that properly, you need to know the flow of the
methods your method calls. (E.g. you would need to know that substring throws
an exception if the string is shorter than expected.) This kind of information
could be captured in some kind of metadata that could be propagated up the
call chain for testing.

~~~
chris_mahan
But then you still wouldn't know that the function could be used for any two
integers, including those in the trillions of zeros.

Also, you would have to anticipate all the possible ways it could be used,
and that's impossible to know with certainty, so how could your tests be
accurate?

~~~
geebee
Well, you wouldn't be able to conclude that your method works for any possible
input. You'd have a verification for how it is supposed to work for a
predetermined set of inputs.

Think about the situation above, where you're dealing with a method to add
two integers. So, you test that 2 and 3 give 5. It passes. Later, you decide
that this method should multiply rather than add, so you change it. Your unit
test breaks.
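
In code, the scenario looks roughly like this (a hypothetical `Calc` class):

    @Test
    public void addsTwoIntegers() {
        // passes while add() adds; breaks once it multiplies (2 * 3 == 6)
        assertEquals(5, new Calc().add(2, 3));
    }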

In my personal experience, about 95% of the time, I want to keep the
modification, and so I need to update the test. But every now and then, I
realize that the test is accurate and I have introduced unintended side
effects into my code.

That's just me developing for me. The tests are also very important when a new
developer is working on the app. If they change something, they need to know
if they've broken anything downstream.

It's not failsafe but I do think it's a huge improvement over no tests.

~~~
chris_mahan
If they've broken anything downstream... Well that would mean that something
downstream is using it.

This, to me, would mean that the app is being used as an API. API rules need
to be applied (don't change existing versions, etc). That's something OOP is
very bad at, I think.

Using the Unix philosophy of not adding to existing programs would also
alleviate the need for regression testing, no?

------
agentultra
Source code is a poor way to specify the behavior of a system. It is complex,
verbose, and difficult to reason about without proper language abstractions at
appropriate levels. Worse still are machine optimizations obscuring intent as
the software evolves, work-around "patterns" for inferior PL designs, and
other savageries inflicted upon us. It is highly unlikely that you could give
a brilliant programmer a few hundred thousand or millions of lines of code and
ask them what the program does in an afternoon, let alone ask them to find the
bug in it.

 _Good_ unit tests provide the specification for a certain low level of the
code to a developer. You will never catch everything, but their purpose is not
to test for absolute correctness. Their purpose is to specify, for a given
range of expected behavior, what the boundaries are. If I write the _test
first_ in earnest I am telling my future maintainers what I intended to write.
We can execute that intention against what I wrote and make assertions about
whether it works as expected. That's all I expect unit tests to do.

Formal correctness proofs for all possible inputs is way out of my league and
beyond the scope of almost every project I've encountered save for automated
proof verification software.

You should write unit tests and you should write them first. They are the
executable specification you will run your program against to ensure your
software behaves in the manner in which you intended it to. With solid, well-
thought out tests guiding your design you should be able to optimize and
refactor your software over time and see that it _still_ behaves as expected
in the face of change. And you should be able to hand a suite of well-written
tests to a new developer and expect them to understand what a given piece of
software does in a day (perhaps in even less time for smaller pieces).

~~~
radicalbyte
Spot on.

Unit tests aren't there to ensure that you get the right answer. They're
there to ensure that you've asked the right question.

Or at the very least to document the thought process of the person
implementing code.

Use static analysis tools + code reviews to improve correctness, and unit
tests to guide design + document.

~~~
googletron
Unit tests aren't for asking the right question. The code is there and
readable (hopefully), so that shouldn't be an issue. Unit tests are there to
ensure that when you need to make a change you don't introduce a regression
(moving code backwards), i.e. miss something that the function was previously
doing.

They aren't useless and have a clear place in testing. Unit tests alone can't
guarantee a fully tested system, but they minimize engineer error when working
on various pieces of a code base - though this is predicated on cleanly
separated and testable code pieces.

~~~
FeloniousHam
Can't you both be right? Another facet: unit tests are the living, useful
documentation of the code. Starting on a team with a large legacy code base,
unit tests are the tutorial on the best practices of the API. This is
magnified if the "unit" tests include full functional tests with an embeddable
container (e.g. OpenEJB).

(apologies for parroting the gp)

------
collyw
I have been trying to add unit tests to my app, but almost all the tutorials
you find are useless. They show you how to test 1 + 1, or how to test the
framework I am using - the parts of the code I am almost certain will work.
None of them show you how to actually test anything useful.

~~~
michaelfeathers
I wrote a book called 'Working Effectively with Legacy Code' that's really all
about testing code that is not easy to test - real apps beyond toy examples.

There are techniques in the book but really you should try to write tests for
code as you write it. It's the best way to make sure that your code is
decoupled enough for testing. If you do that, you don't need the techniques in
the book.

~~~
earlz
You're the author of that!? I love that book, and even though I stick to its
practices less than I probably should, it was incredibly helpful.
------
unoti
The idea of making the software worse to increase some arbitrary measure of
"maturity" is sad and amusing. It makes me think of the saying "what gets
measured gets done." It seems it would make more sense for management to be
serious about things that actually matter, like customer satisfaction, user
experience, productivity, and defects in production.

~~~
notastartup
Unfortunately, the word "testing" somehow equates to the quality of the
software, which it does not - not only for managers but for those who have
been taught to read about it and forced to repeat back its list of advantages.
It's a false sense of security found in a process that has minimal impact on
software performance and quality.

When you write crappy source code that performs poorly, writing all the unit
tests in the world won't save you. You simply increase the technical debt of
future changes to the source code or design of the software.

Simply keeping things as simple as possible, focusing on quality and
incrementally verifying along each step of the way saves far more time.

It always puzzles me how quality can be increased by writing more code around
existing code of questionable quality.

------
chiph
> 1.5 Low-Risk Tests Have Low (even potentially negative) Payoff

That's the key point for me -- does this test add sufficient value to the test
suite to justify its creation and existence? If it's checking to see if a
value got copied over from one object to another, then no. If it checks some
logic, then probably yes. Is the code under test used in many many places?
Then test the hell out of it.
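
Roughly, the contrast is between something like these two (hypothetical
examples, with `dto` and `invoice` standing in for real fixtures):

    // Low payoff: restates an assignment and can only fail tautologically.
    @Test
    public void copiesNameToDto() {
        dto.setName("x");
        assertEquals("x", dto.getName());
    }

    // Worth having: exercises real logic that has room to be wrong.
    @Test
    public void overdueInvoiceAccruesInterest() {
        assertEquals(10500, invoice.balanceCentsAfterDays(30));
    }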

------
moron4hire
I used to be a diehard test fanatic, but then I grew up and realized that,
just like SQL isn't double-plus-good for every task, neither is TDD. Some
things I don't unit test:

That clicking a particular button fires a particular event. Syntax-wise,
setting up the event handler was essentially an assignment, and testing the
physical clicking of the button was the job of whoever wrote the operating
system's GUI system.

Similarly, that setting a field actually persists the value. Assignments are
easy to get right. They go wrong when I'm naming my variables poorly. Unless
that field is actually a property that is really syntactic sugar for a method
call, in which case testing is necessary, so test away.

That values adhere to certain type properties. If I have to write a test to
check type information at runtime, then I'm not using the type system in my
programming language (be it static or dynamic) correctly.

I do not use mocks to be able to test stateful subsystems. The whole mock
system creates a maintenance nightmare and makes changing APIs extremely
brittle. It's far better to have scripting in place to build a base test
system from scratch and walk the program through a set of steps, checking
state transitions after each step. In other words, it all becomes one, big,
stateless test.

The added benefit: such scripts tend to help figure out deployment issues
before they happen.

I try to use stateless designs for everything. Method calls do not mutate in
place; they create a new object and return the results. This is not only
safer, it's infinitely easier to test, as you automatically have both halves
of your state transition, rather than having to clone the initial state before
calling the mutating function.
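
A sketch of what that buys you in a test (a hypothetical immutable `Point`):

    @Test
    public void translateReturnsNewPointAndPreservesOriginal() {
        Point before = new Point(1, 2);
        Point after = before.translate(3, 0);   // returns a new object
        // both halves of the state transition in hand, no cloning needed
        assertEquals(1, before.x());
        assertEquals(4, after.x());
    }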

~~~
gaius
Sometimes people say to me "unit testing!". And I say "great, of course you
are already using a strongly typed language that the compiler can statically
check, that'll give you loads of tests for free!"

Then they stare at me blankly because they're Javascript or Ruby guys who are
only playing at being serious software engineers.

~~~
gregors
probably very close to the way I stare at people who exclaim "Yay! It
compiles! It must work!"

~~~
jksmith
I predate the popularity of unit testing, so I've developed some pretty good
skills understanding the problem domain and carefully coding for it. So yeah
when I did get my code to finally compile in Modula-2, it usually worked
pretty damn well.

------
ilbe
"If you find your testers splitting up functions to support the testing
process, you’re destroying your system architecture and code comprehension
along with it. Test at a coarser level of granularity."

What? Splitting up large functions into smaller, well-named units destroys
code comprehension?

~~~
route66
If your code turns into 30 one-liners with 30! unforeseen possible calling
combinations, thereby exposing an API of irrelevant auxiliary functions to the
outside world? Yes.

~~~
ilbe
I think there's a bigger design problem then, e.g. those functions should
really be classes. I'm just talking about doing a few 'extract method'
refactorings.

~~~
collyw
I sometimes find that code which doesn't involve a lot of nesting is a lot
easier to read and comprehend as one long method / function / subroutine than
when it's chopped up into multiple small methods and I have to jump around
trying to work out what is happening. There is often nothing to be gained by
splitting it up, in my opinion.

Is that not why some languages are known as _scripting_ languages, because you
can read them like a script?

~~~
nthj
I see what you mean. I'd suggest that longer, more descriptive method names
can help here, though. If I'm referring to a method body to understand what it
does when reading another block of code, the method probably doesn't have a
descriptive enough name.

[https://signalvnoise.com/posts/3250-clarity-over-brevity-in-...](https://signalvnoise.com/posts/3250-clarity-over-brevity-in-variable-and-method-names)

------
Roboprog
I liked the part about turning unit tests into assertions. Too bad most of us
(including me) are programming in Java, rather than something like Eiffel.

I really liked the "design by contract" stuff built into Eiffel when I read
about it years ago (before there was such a thing as Java).

[http://en.wikipedia.org/wiki/Eiffel_(programming_language)#D...](http://en.wikipedia.org/wiki/Eiffel_\(programming_language\)#Design_by_Contract)

Contrast the notion of a class invariant with the notion of a Java Bean, which
is constructed as a useless empty piece of crap, and is eventually mutated
enough to represent something useful. Of course, this rant takes us into
immutability in general...
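
The closest cheap approximation in Java is `assert` (enabled with `java -ea`);
a rough sketch only, nowhere near Eiffel's language-level contracts:

    final class Account {
        private long cents;

        // class invariant: balance never goes negative
        private boolean invariant() { return cents >= 0; }

        void withdraw(long amount) {
            assert amount > 0 && amount <= cents : "precondition violated";
            cents -= amount;
            assert invariant() : "class invariant violated";
        }
    }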

------
stcredzero
_Another client of mine also had too many unit tests. I pointed out to them
that this would decrease their velocity, because every change to a function
should require a coordinated change to the test._

This is one thing I have never seen replicated outside of Smalltalk projects:
Automated refactorings of code automatically applied those refactorings to the
tests.

------
xioxox
The tests in my largest code work at a very high level. They are a number of
example documents, chosen to exercise particular features, which are processed
by the program and compared with known output. The tests have been pretty
effective at spotting changes in behaviour and crashes, so I've found them
useful. I'm not convinced that any lower level object or function verification
would be particularly helpful, but of course programs vary in their structure.
Treating the program as a black box with a variety of functions is a helpful
way to test, I believe.

~~~
Roboprog
Agreed. I've found system testing, using things like HtmlUnit, of things the
user sees to be a much better use of time than testing individual method /
function / subroutine calls.

For some types of applications, you might have to make your own testing tools,
though: e.g. - compiler/tokenizer might need a tool to dump & examine the
generated binary; batch job file-watcher & scheduler might need a tool to
supply input files and manipulate apparent system time; ...

------
yeukhon
Some code is very hard to test unless you spend a great deal of time making a
test driver. I once wrote some 1000 unit tests all by myself for a project I
was working on. It was painful to keep updating code, no matter how much time
I put into abstraction. At some point a big feature blows things away and
tests start to fail.

Some systems are hard to test. Unit-testing anything that involves spawning
processes is hell. I can mock/stub, but it takes a lot of effort to do that.
Nowadays I go with functional tests as much as possible. If I can't even pass
my functional tests, how on earth do I ship my code upstream? Sometimes unit
tests don't reflect problems in the real world, and the hours I have to spend
making unit tests pass could be used to make a few more functional tests more
robust.

Does anyone know the unit test coverage for software like
puppet/chef/ansible? A year ago when I checked ansible the number of unit
tests was very small. Most of the time functional tests look more promising
than unit testing in that kind of complex, process/system-interactive
software.

------
daxfohl
"Premature optimization" has become something of an anathema in the coding
world, why isn't "premature testing" so scrutinized? Back in the 90's
everything was over-optimized because, well, it made for more interesting work
than coding to requirements and it showed off how smart you were. It seems
like unit testing is now the "interesting" thing to show off your
intelligence—"how can I abstract this function to allow me to inject a mock
that asynchronously...?"

And the sublime thing is that unit test will always be there to show off the
original writer's intelligence, just like the crazy cool optimization in the
90's. And while it works perfectly with the developer's idea of how things
_should_ work, there's a disparity between that and the real world, and as
soon as that disparity is revealed, it's suddenly a lot more work to wade
through all the cruft than it would have been if the requirement had just been
implemented in the most straightforward fashion.

A few years from now "premature testing" will be a thing.

~~~
daxfohl
Makes a lot of sense really, and the parallels are there: "First, design
without (optimization/testability) in the most straightforward fashion such
that it works and the code can be reviewed in a straightforward manner. Only
then, decide if some increased (optimization/testability) _is worth the extra
complexity_ and implement as necessary." Pragmatic not dogmatic.

------
Aqueous
Yes, poorly written or tautological tests are useless. But I think we already
knew that.

I primarily use unit tests as a way to clarify my own thinking about a
function. It helps to step through the process of what the function should
return given what parameters it is given. Why is that psychological crutch
useless? Is your argument that we all should work the same way? Can't we do
whatever works?

~~~
rubiquity
> Yes poorly written or tautological tests are useless

> It helps to step through the process of what the function should return
> given what parameters it is given.

Congratulations, you just described tautological unit testing. Unit testing
should be about guiding design and describing the behavior of the subject
under test. Focusing on input-output testing is exactly what leads to the
"unit testing is worthless" blog posts.

~~~
Aqueous
Not sure at all what you mean. Can you describe what "guiding design and
describing the behavior of the subject" looks like from the unit test's
perspective?

How else are we to specify the behavior of a function than to specify, for a
given combination of typed parameters, what is the return value?

As the article points out it is impossible to exhaust the space of all
possible combinations - but nobody is trying to do that, so it is a straw man.
For me, I test the branches that are likely to occur in a production setting,
and any obvious edge cases that might occur, but infrequently.

~~~
jasonlotito
> Can you describe what "guiding design and describing the behavior of the
> subject" looks like from the unit test's perspective?

That's the first part of unit testing. You write a test that fails. By doing
that, you are describing what the API should look like. The unit test guides
the design of the interface, and how that object works. People do this, even
if they aren't unit testing. You've done this if you've ever written some code
calling a method, knowing that the method isn't there yet. But you stub it in,
and then go write the method.

~~~
the_af
Just a nitpick: that's TDD, not unit testing in general. When unit testing
guides the design, it's usually called TDD.

It's perfectly acceptable to write unit tests _after_ the production code, and
to not have them guide the design.

~~~
jasonlotito
You are right, of course. And that's what I meant. I just equate unit testing
and TDD together so often that I tend to (incorrectly) equate the two.

------
ejain
Hard to argue with this logic:

1\. Poorly designed unit tests are a waste.

2\. Most code is poorly designed.

3\. Therefore most unit tests are a waste.

To be fair, there's no shortage of high level advice on how to write good
tests (e.g. "each change in the code should break just one test"), but it's
hard to find examples of how to do so in practice (unless you're writing yet
another calculator app)...

~~~
huherto
> Hard to argue with this logic: ...

Agree.

> but it's hard to find examples of how to do so in practice

Yeah, I think that's because every architecture/design/component/layer seems
to need a different testing approach. You really have to look at the specific
case to come up with a good strategy.

------
EGreg
First of all, let me describe my bias. I don't write unit tests. I have
written a giant, well-organized framework which is used by several of my web
apps. I don't even have integration tests at this point. My apps are my
integration tests.

Whenever I make a change, I am very careful to reason about what it affects
and then proceed to test it manually across the apps. Still, bugs crop up an
alarmingly large fraction of the time - probably 0.25%.

For any team, I wish we would do TDD. It's as simple as this: you want an
automated system that will signal an alarm, just like a compiler. This is
especially useful for weakly typed and duck typed languages.

Right now, we do have a system but it's not automated. It's better than
nothing - having many clients of an API who use it heavily allows us to make
sure we didn't break anything significant.

However, at this point, I would go for API Unit Testing first. Meaning - every
function exposed by the API should have unit tests matching the documentation.
You can document internal functions later, but start with the most outward
facing ones.

There is a second consideration: VERSIONING

As for us, we have a system where code that potentially breaks existing API
contracts is strictly kept in a branch. This branch is then imported by all
stakeholders and tested. Once they sign off that their clients are compatible,
it is merged into the mainline with a version number and a notice of breaking
changes. Everyone pulling has to read the breaking changes before updating. If
they aren't ready to deal with those changes, they have to clone their own
repo of the framework and cherry-pick commits until they ARE ready.

On the good side, our installer automates all this. The framework and plugins
keep track of version numbers for the db schemas, apis, everything, and signal
when something is out of date.

Automated systems are better than blaming humans. They are worth up to 50%
extra time on the project.

~~~
EGreg
I should say I meant 25% in the above. I need to improve my patience before
committing. And for that we need to put in place a good process, with a bigger
team and checklists.

------
bphogan
Unit testing is supposed to help you design better code. It's a design process
that helps you find where you might be tightly coupling things. Are you
mocking an external library? Ok, but how? Are you mocking the library
directly? Hmmm, maybe you've lashed your code too tightly to that external
library. Maybe that means you need some intermediate layer that you do
control, so that when Braintree gets bought by Paypal you can change your
payment processor by changing the middleware you built.
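
A sketch of that intermediate layer (hypothetical interface and vendor types;
the Braintree/PayPal names are only carried over from the comment above):

    // The seam you own: unit tests mock PaymentGateway rather than the
    // vendor SDK, and swapping processors means writing one new adapter.
    interface PaymentGateway {
        boolean charge(String customerId, long amountCents);
    }

    final class BraintreeGateway implements PaymentGateway {
        private final BraintreeClient client; // hypothetical vendor SDK type

        BraintreeGateway(BraintreeClient client) { this.client = client; }

        @Override
        public boolean charge(String customerId, long amountCents) {
            return client.sale(customerId, amountCents).isSuccess(); // hypothetical call
        }
    }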

A unit test tends to expose those tight couplings so that you and your team
can decide if you want to keep going down that road or not.

"If it's too hard to test, maybe you should look at the design." Best advice I
ever got on TDD.

It's a tool, not a dogma. Use it like a tool to write better code, and get
regression tests as a side effect.

------
daxfohl
My feeling is that unit testing is so popular among developers because it's
_developers_ that get to decide the requirements. Don't really feel like
working on that mundane piece of new functionality? Then spend a couple hours
doing code golf and figuring out how to test this four-line async piece of
code. Nevermind that the level of abstraction required to do so will add
twelve lines and five new parameters to your method and ripple all the way
through your architecture. But hey you can still charge the 8 hours for that
because "unit tests add value".

IMO unit tests are a business requirement and should be handled as such. Unit
tests that go beyond what is required should be deleted, just YAGNI.

------
neurobro
Regarding the point about a test conveying little information if it never
fails: That's true if it has never failed in the past _and_ will never fail in
the future. E.g. maybe it really is tautological or it's testing legacy code
that has been locked down forever. But if it does fail at some point in the
future after having passed in dozens of iterations, that failure will convey a
lot of information. Every time it passes, the information-theoretic value of
that future failure increases. Unless running the test involves some onerous
cost, there is no reason to delete it. Just hide it in a folder where you
won't have to look at it.

------
weatherlight
"Object orientation slowly took the world by storm.."

That sentence doesn't even make sense.

------
bebop
In my opinion, unit tests/integration tests are a way to refactor with
confidence. They also give the added benefit of allowing one to test your
application with many different versions of dependent software.

My current project example is porting my Django application to multiple
versions of both Django and Python. I can now start to make my application
work with python3 just by running tests. So yes, some of the tests may be
testing things that should always be correct, but it helps in other ways.

------
InclinedPlane
A few things many coders don't worry about enough when it comes to testing:

1: Most test code is of lower quality than production code.

2: Tests often contain assumptions and those assumptions are usually not
checked against any external reference. Whatever is assumed within a test is
de facto law. Consider mocks, what ensures that mocks have the same behavior
as the objects or services they're mocking? Mostly it's just an untested
assumption.

3: By far one of the biggest classes of errors in coding is omission. And
there, tests barely protect you at all. If you forget to implement a
particular feature/aspect of the code, chances are that you're going to forget
to implement a test for it as well. The way to solve these problems is with
thorough code review and integration/beta testing.

Ultimately you get into a "who tests the testers?" problem. Which, most of the
time, is answered with the resounding noise of crickets. Tests need code
review. Tests need owners. Tests need to be challenged. Tests need
justifications. A lot of the critical rigor surrounding testing is eroded by
common practices which encourage a distinct lack of rigor around test writing
(TDD, I'm looking at you). Tests aren't magic, they're just code; they'll
tell you what you ask, but if you're not asking the right questions the result
is just GIGO.

Too many devs think that unit tests are cruise control for quality. They're
not. Doing testing right should be just as rigorous and just as difficult as
implementing features.

~~~
chris_mahan
And since it's impossible to have error-free code, and tests are code, it is
therefore impossible to have error-free test code.

Should there be tests for the tests? And tests for those tests?

(Ohhh I see where you're going...)

~~~
InclinedPlane
Turtles all the way down, right? The point being that ultimately you need to
have processes _other than automated tests_ to ensure code, product, and test
quality. Otherwise you're just shifting the problem around. Bad tests can be
just as hazardous as bad code, if not more so, since they can easily waste
lots of development resources which could have been used more productively.

~~~
chris_mahan
Which is why I would rather spend time designing on paper than on the
computer, and enter into the computer what I know to be logically correct.

------
ollysb
I've been building my first full blown ember app since the start of the year.
I haven't got up to speed with testing yet (as opposed to my usual
cucumber/rspec combo with ruby). I can tell you it is painful! I'm used to
being able to change code on a whim, safe in the knowledge that my test suite
is going to tell me about any regressions. We're using ember-data and there's
some changes (like changing a model's association from sync to async) that can
have consequences far from the code that you're working on so we've been
seeing a lot of regressions. This has meant we're doing a huge amount of
manual testing (I'm the only dev but there's 2 biz guys so this balance works
at the moment). But I'll definitely be spending the rest of the week getting
up to speed with ember testing. Maybe you can get away without tests if you
have a really strong type system (i.e. better than java's) but for dynamic
languages I think you're in for a world of pain without that safety net.

------
fredgrott
The way I found to maximize this is:

1\. TDD is for when I am learning new programming, as it keeps my wrong
assumptions in check when learning new stuff.

2\. BDD is for when you have decreased the learning curve to a low level, and
it helps verify the behavior of the system.

I use a combination of both, but generally the number of my TDD tests
decreases when I get to a certain advanced level of programming in a subject,
while my BDD tests generally increase.

~~~
the_af
I find this puzzling. Now don't get me wrong: I'm no fan of TDD, but if I
understand TDD proponents correctly, it's a _design tool_, not a testing
tool. I assume your software will always need designing. So are you saying
that as you get more familiar with programming you need less design? Or maybe
instead of TDD you meant plain unit testing?

------
lectrick
I disagree with many things in this article.

For example, he says that you should throw away tests that haven't failed in a
year. I say simply _disable_ them and put them in a separate test suite, maybe
called a _validation_ suite. If the code that they test is changed, you can
re-run those tests again to ensure no expected functionality is broken.

------
K0nserv
In my opinion there are two things that make unit tests really useful, and
they are API design and refactoring safety. The article didn't talk about
either of these.

Writing unit tests creates a mindset where the API for a class/method/module
will be designed in a decoupled way to ensure that unit testing is possible.
Writing unit tests forces the programmer into creating software that is
decoupled and modular.

Your unit test suite is also a formal specification and documentation of what
the program is supposed to do. This drastically reduces concerns and issues
when refactoring the code base, because if the tests are still passing the
changed code has, with high probability, not introduced bugs. This of course
relies somewhat on the tests actually being meaningful and not tautological.

As pointed out in the article striving for 100% test coverage is not a good
measurement of quality. Some things simply don't need to and shouldn't be
tested.

------
cbsmith
Okay, I'm not familiar with this paper, and I've only read the history intro
so far, but it was bad enough I had to stop and comment.

It's just wrong on many different levels... For starters, the point about
which function is going to be called being determined at runtime: that's a
function of dynamic binding/function selection, not object orientation.

You can have OO software that doesn't have any runtime polymorphism, and you
can have non-OO software that is entirely based on dynamic dispatch... and
assuming the logic is such that you really can't know how it will be resolved
until runtime, the problem is no better in static code, because the static
code effectively becomes a branch based on a dynamically calculated runtime
value NO MATTER THE CODING STYLE.

I could go on... but I'd have to write as much as the author has.

------
einhverfr
From the article:

> _In most businesses, the only tests that have business value are those that
> are derived from business requirements._

This 10000 times over. I write _lots_ of tests for financial software. My
tests are always longer than my code. However, my tests are _always_ written
to business/legal requirements and never to the code in a financial software
context. Moreover I try very hard to ensure the tests can be run on production
software safely, so we know for sure whether or not things are actually
working as expected in the field.

For application frameworks it is different. The tests are written to the
specification, not to the code.

But if I had a dollar for every time I have seen a test case that shouldn't be
there, because it tested some non-requirement, or worse (some behavior of a
dependency).....

------
magicroundabout
The argument seems confused.

'Few developers admit that they do only random or partial testing and many
will tell you that they do complete testing for some assumed vision of
complete. Such visions include notions such as: "Every line of code has been
reached," which, from the perspective of theory of computation, is pure
nonsense in terms of knowing whether the code does what it should.'

It is as if he had never heard of 'use cases', which in fact are not mentioned
anywhere in the text - a pretty glaring omission. The portion of the code that
is reached is irrelevant as long as you have covered all of the use cases of
the system. Ideally code that is not run for any of the use cases should be
removed but it does not prevent the software from being fit for purpose.

------
daxfohl
Much unit testing paradoxically seems to exist more to justify OOP than to
really test functionality. You'll never need to swap databases (and your DB
abstraction layer probably wouldn't isolate all those problems anyway), but
hey, it lets you substitute mocks that let you verify that your implementation
of the mock operates according to the code you happen to have written. So
let's add that extra layer of abstraction, and break all our "go to
definition" IDE actions, because now we can prove that when we change DBs it's
the DB that is wrong, because it doesn't correspond to the mock we hacked
together.

------
spotman
I think it's important to note that the title says _most_, and doesn't say
100% of the time unit tests are bad. It specifically mentions using them for
critical functions/algorithms, or 'units of computation'.

But, he outlines some of the less useful parts about it, common mistakes, and
my takeaway from the article really hits spot on with some poor experiences I
have had with teams that get lost in unit testing, and it really can lower the
code quality and simultaneously the velocity if not approached carefully.

Some key quotes from the article:

"If you find your testers splitting up functions to support the testing
process, you’re destroying your system architecture and code comprehension
along with it. Test at a coarser level of granularity."

This is all too common, especially with inexperienced developers. Management
requires 40, 60, 80, or 100% test coverage blindly, without thinking about
whether it makes sense to test that particular part of the code, and
furthermore doesn't take into account readability, or in my experience the
pain of over-abstracting something. Little is worse, when trying to debug a
program, than dealing with over-abstraction hell to the point where all you
can read is the tests, and the source code has become entirely
overcomplicated, all in the name of keeping the code in a testable format at
the cost of it being understandable.

Developers that have a lot of experience with system design and software
architecture are in a much better place to write appropriate tests while still
maintaining understandable source code, but if I had my choice between an over
complicated codebase with 80% test coverage, or a more simple codebase with 0%
test coverage, I would choose the simple codebase every time.

"There’s something really sloppy about this ‘fail fast’ culture in that it
encourages throwing a bunch of pasta at the wall without thinking much... in
part due to an over- confidence in the level of risk mitigation that unit
tests are achieving."

In modern development shops that do a lot of TDD, the tests are relied on way
too much. Testing of any sort is not a silver bullet. But you find people,
even in pretty big, mainstream development shops of large internet properties,
relying almost solely on this. Then they pass their 'finished work' over to
operations to be deployed, and when something breaks because there was not a
test that counted how many file descriptors were used, the developers are
always so quick to say 'well, all the tests pass, so it's an operations
problem now'.

"However, many supposedly agile nerds put processes and JUnit ahead of
individuals and interactions."

In the article he comments on how someone told him that debugging isn't what
you do in front of a debugger (obviously debatable) but that it's also when
you're staring at your ceiling, or discussing the inner workings of a program
or algorithm with a counterpart. This is so key, and this is why pair
programming is often helpful if you get into a good rhythm with someone.
Thinking intrinsically about how software works is the take away here, and all
too often we see people rely on tests as a silver bullet and the end result
can be code that is over complicated, over confident, and when deployed is an
operational nightmare. These sorts of things often cause a giant net loss in
revenue, due to the net loss in a team's velocity to ultimately produce
working code rapidly. When developers lean on tests less (but still employ
them
where it counts) you'll find easier to maintain code, written by people that
will step up to the plate and be responsible for that code.

Obviously there are exceptions to this, and there are shops that strike the
right balance, maintaining high-quality, understandable code at high
velocity. Personally, in over 10 years in this industry, almost all examples
of this that I have witnessed have been peer-reviewed open source projects.

~~~
nimblegorilla
>> If you find your testers splitting up functions to support the testing
process, you’re destroying your system architecture and code comprehension
along with it. Test at a coarser level of granularity.

The author has horrible reasoning. Splitting up large or complicated functions
is almost always a good thing.

> if I had my choice between an over complicated codebase with 80% test
> coverage, or a more simple codebase with 0% test coverage, I would choose
> the simple codebase every time.

This is a false choice. I prefer a simple codebase with 80% coverage. The
notion that highly tested code must be complex is simply not true.

~~~
lostcolony
The author's reasoning is fine.

"If you find your testers splitting up functions _to support the testing
process_ "; he's condemning splitting the function for that PARTICULAR
rationale; he's not universally condemning splitting the function.

Of course it's a false choice. He's not saying you can't have both. He's using
it as an illustrative example that if you're making the code more complex just
to make it more easily testable (see prior point), then you're choosing the
wrong thing to do.

~~~
nimblegorilla
The idea that splitting up functions makes code more complex is ridiculous.
The author claims splitting up functions is "destroying your system
architecture". He's trying to claim the exact opposite of what usually
happens.

If the code needs to be split up to support testing then it's likely that the
code should be split up to support other development. Splitting up large
functions generally makes software better. Whether that splitting is done as
part of normal refactoring or is motivated by a test suite seems irrelevant.
Saying that small methods lead to complex code is insane.

~~~
lostcolony
"likely". "generally". I.e., not always.

If your only motivation is "it makes it easier to write tests", and there is
no other gain, it falls into the remaining case that you even allow for.
You're now splitting functions that don't make sense to be split, solely for
the sake of making testing easier. And that is bad. A lovely discrete chunk of
abstraction is being split across two functions that you would never call
separately, solely to aid testing. And that is bad. That is all this article
is asserting with the statements you quote.
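
To make that concrete, here's a minimal Java sketch of the case being
condemned (all names invented for illustration): one coherent calculation
split into two package-private halves that no production code ever calls
separately, purely so a test can assert on the intermediate value.

    import java.math.BigDecimal;
    import java.util.List;

    public class InvoiceCalculator {

        // These two halves exist only so a test can poke the intermediate
        // value; nothing in production ever calls them separately.
        BigDecimal subtotal(List<BigDecimal> lineItems) {
            return lineItems.stream().reduce(BigDecimal.ZERO, BigDecimal::add);
        }

        BigDecimal applyTax(BigDecimal subtotal, BigDecimal taxRate) {
            return subtotal.add(subtotal.multiply(taxRate));
        }

        // The sole caller of both methods above. Before the split, this was
        // one self-contained unit of abstraction.
        public BigDecimal total(List<BigDecimal> lineItems, BigDecimal taxRate) {
            return applyTax(subtotal(lineItems), taxRate);
        }
    }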

~~~
nimblegorilla
Nowhere does the article acknowledge that method decomposition is a valid
software practice. It's pretty clear he considers splitting up of functions to
be bad regardless of the motivation. Like others have said - if a large
function is too complex to test then the codebase is probably improved by
splitting it up. That is a benefit of testing and not a downside.

------
dllthomas
Positively brain-damaged.

It's not like unit tests are a magic bullet, or applicable to every situation,
or never deserving of criticism, but both the criticisms and recommendations
here seem poorly founded at best and frequently harmful.

------
S_A_P
The problem with unit testing, along with every development methodology du
jour, is that it fails to account for the fact that some devs are good and
some, quite frankly, suck. There really just isn't any way around it. Unit
testing does solve a subset of development problems. It also creates a whole
new set of problems when managers expect perfect software because it was
TDD'd. The most important trait I've seen in good developers is a deep
understanding of the problem domain in which they are working. If you don't
know the how and why of what you are doing, nothing can save you.

------
shitgoose
Question to the TDD crowd - should we write test cases to test the test
cases? If your test cases are always correct and don't need verification,
then why don't you write code that is always correct in the first place?

~~~
drpre
Something something turtles all the way down.

When you write a unit test, you are writing code that expresses intended
behavior in a different way than the original implementation. The probability
of making exactly corresponding errors (i.e. the implementation is wrong but
the test incorrectly checks it and erroneously passes) is lower than the
probability of making errors in the original implementation independently of
any tests. If your test is incorrect, chances are good that it is wrong in a
different way than your implementation, and ideally this will trigger a
failure that will lead you to recognize your mistake and fix it.

If you do not feel that first-degree tests offer enough confidence in the
correctness of your code, then by all means write tests for the tests. But
the fact that many will find this idea absurd demonstrates that the
cost/benefit ratio diminishes the more meta you get. (EDIT: Especially since
integration and
other tests also help contribute to the confidence that the code is correct.)

Alternatively, if your unit test is so similar to the implementation itself
that corresponding errors (between the implementation and the test) are
likely, then it is probably a poorly written test and of little value.
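
A minimal JUnit sketch of that difference (the Pricing class and its formula
are invented for illustration):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    class Pricing {
        static double discountedPrice(double price, double rate) {
            return price - price * rate;
        }
    }

    public class DiscountTest {

        // Tautological: re-derives the expected value with the same formula
        // the implementation uses, so a shared mistake passes silently.
        @Test
        public void discountedPrice_mirrorsImplementation() {
            double price = 200.0, rate = 0.15;
            assertEquals(price - price * rate,
                         Pricing.discountedPrice(price, rate), 0.001);
        }

        // Independent: the expected value was worked out by hand, so an
        // error in the formula will likely disagree with it and fail loudly.
        @Test
        public void discountedPrice_knownExample() {
            assertEquals(170.0, Pricing.discountedPrice(200.0, 0.15), 0.001);
        }
    }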

------
lgunsch
James Coplien and Bob Martin have a debate on TDD and unit testing. There is
a video of it on YouTube:

[http://www.youtube.com/watch?v=KtHQGs3zFAM](http://www.youtube.com/watch?v=KtHQGs3zFAM)

------
al2o3cr
"Unit tests are unlikely to test more than one trillionth of the functionality
of any given method in a reasonable testing cycle."

Taken _literally_, sure. But are you seriously implying that it's useful, or
even sensible, to write test cases for every possible input to the line "z = x
+ y"?

(previous HN article on testing floating-point implementation details
notwithstanding)
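
(For scale, assuming two 32-bit int arguments: that's 2^64 ≈ 1.8 × 10^19
possible inputs, so even one trillionth of them is about 1.8 × 10^7 distinct
cases - more than most entire suites contain. Taken literally the figure
holds; it just says nothing about whether the handful of cases you do write
are well chosen.)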

~~~
michaelfeathers
Unit tests are a tool for understanding. Their value does not come from
targeting coverage and formal correctness.

At the end of the day, we have to convince ourselves that our code works. We
can do that by looking at it and seeing what it does, and by writing tests to
demonstrate that we are not lying to ourselves about what we've reasoned.

Write tests for things that you are curious about or don't understand - those
are the things that need to be tested.

------
methodin
Unit tests suck if you are not writing properly testable code to begin with.
If your classes and objects are not written to be reasonably decoupled from
each other, any change has widespread effects.

When you properly componentize your code, unit testing becomes an invaluable
tool: you can refactor the inner workings of classes without affecting their
method footprints.
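
As a minimal sketch of that idea (class and names invented for illustration):
the public footprint stays fixed while the internals get rewritten, so tests
written against the method survive the refactor unchanged.

    import java.util.ArrayList;
    import java.util.LinkedHashSet;
    import java.util.List;

    public class Deduplicator {

        // Tests target only this signature. The v1 internals used nested
        // loops; swapping them for a LinkedHashSet preserves order and
        // observable behavior, so every existing test passes without edits.
        public List<String> unique(List<String> items) {
            return new ArrayList<>(new LinkedHashSet<>(items));
        }
    }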

------
rqebmm
I do most of my work for mobile, and in that space unit testing generally
feels like this: [http://xkcd.com/1319/](http://xkcd.com/1319/)

The vast majority of your typical app is made up of untestable UI, or
framework methods and API calls that should be doing their own testing.

Now, INTEGRATION testing on the other hand...

------
Revex
I can't seem to justify writing tests. It is time consuming and cumbersome. I
figure as long as I document reasonably well and the design is
straightforward, there is no sense in wasting my time writing tests. I'd
prefer to get something done instead of supposedly helping my future self (or
my employers).

~~~
chadcf
Really depends on what your app is doing and how critical it is to a
business. For a complex application that has been in production for years,
whose original developers weren't very talented and are now long gone, where
the business relies heavily on customer satisfaction and bugs can cost
customers (and thus revenue) as well as potentially expose protected data,
tests are absolutely critical.

On the other hand, a hobby project used by a few people or internally at your
company where bugs can be discovered in use and aren't usually a big deal, eh.

And of course, there are a lot of levels between the two extremes...

------
davvid
An interesting debate: "Jim Coplien and Bob Martin debate TDD":
[http://www.youtube.com/watch?v=KtHQGs3zFAM](http://www.youtube.com/watch?v=KtHQGs3zFAM)

------
bennyg
I usually only write unit tests for external open source libraries I am the
author of, or ones for other libraries when I can think of a test case they
aren't already checking for.

------
sivanmz
Objects can substantially reduce and simplify testing if you make guarantees
about them during construction, keep them immutable, and abide by the Law of
Demeter.
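
A minimal sketch of the construction-guarantee part (the class is invented
for illustration): validate once in the constructor and stay immutable, and
everything downstream - code and tests alike - can take the invariant as
given instead of re-checking it.

    public final class Percentage {

        private final int value;

        // The invariant is enforced exactly once, here. Tests for it
        // concentrate on construction; no other test needs to consider an
        // invalid percentage.
        public Percentage(int value) {
            if (value < 0 || value > 100) {
                throw new IllegalArgumentException("must be 0-100: " + value);
            }
            this.value = value;
        }

        public int value() {
            return value;
        }
    }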

------
gregors
short version "programmers can mess up unit tests"

let me suggest: [http://www.amazon.com/Growing-Object-Oriented-Software-
Guide...](http://www.amazon.com/Growing-Object-Oriented-Software-Guided-
Tests/dp/0321503627)

------
jmnicolas
TL;DR

Can anyone who read it provide a summary so that I know if it's worth
spending time reading it?

~~~
sehugg
_• Keep regression tests around for up to a year — but most of those will be
system-level tests rather than unit tests.

• Keep unit tests that test key algorithms for which there is a broad, formal,
independent oracle of correctness, and for which there is ascribable business
value.

• Except for the preceding case, if X has business value and you can test X
with either a system test or a unit test, use a system test — context is
everything.

• Design a test with more care than you design the code.

• Turn most unit tests into assertions.

• Throw away tests that haven’t failed in a year.

• Testing can’t replace good development: a high test failure rate suggests
you should shorten development intervals, perhaps radically, and make sure
your architecture and design regimens have teeth.

• If you find that individual functions being tested are trivial, double-check
the way you incentivize developers’ performance. Rewarding coverage or other
meaningless metrics can lead to rapid architecture decay.

• Be humble about what tests can achieve. Tests don’t improve quality:
developers do_

~~~
TwoBit

>> Throw away tests that haven’t failed in a year.

That's absolutely f***ing crazy. Throwing those tests away makes it
impossible for me to know that any changes I make to the implementation are
OK.

~~~
mempko
Why? Don't you use assertions?

~~~
smtddr
No amount of assertions in production code can replace the assertions of unit
tests, assuming the unit tests are designed to try all known common and edge
cases.

~~~
TwoBit
Also, assertions only flag code that gets executed when you test it. Unless
you can be sure that you have exercised every possible execution path before
shipping, you cannot know whether, after shipping, your code will execute
something that you missed during testing. This happens a lot.

Additionally, the unit tests allow me to safely maintain the code from the
comfort of my own desk. If I relied only on assertions I would literally have
to send changes to all possible users and ask each of them to please test and
tell me if they hit any assertion failures. And do this for me three times a
week for the next two years.
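
A minimal sketch of that distinction (names invented for illustration): the
production assertion stays silent unless its path happens to run, while the
unit test forces the boundary case on every build, at the developer's desk.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    class Paginator {
        static int pageCount(int items, int pageSize) {
            // Only fires if this path executes, and only with -ea enabled.
            assert pageSize > 0 : "pageSize must be positive";
            return (items + pageSize - 1) / pageSize;
        }
    }

    public class PaginatorTest {
        // Exercises the boundary deliberately, on every build, before
        // shipping - no need to wait for a user to hit it.
        @Test
        public void zeroItemsYieldZeroPages() {
            assertEquals(0, Paginator.pageCount(0, 10));
        }
    }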

~~~
mempko
Who said only?

------
cranklin
Unit tests may cost time initially, but they save more time later.

------
thrownaway2424
This guy totally misses the point of TDD. You write the test first because it
is the first client of the interface you believe you have finished designing.
While writing the test, half the time you will realize that the API sucks and
you need to change it.
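
A minimal JUnit sketch of what that looks like (UrlBuilder and its API are
invented for illustration): the test is written as the first call site, and
typing out that call is what exposes a clumsy first draft of the interface.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    class UrlBuilder {
        private final StringBuilder url;
        private boolean hasParam = false;

        UrlBuilder(String host) {
            url = new StringBuilder("https://").append(host);
        }

        UrlBuilder param(String key, String value) {
            url.append(hasParam ? '&' : '?').append(key).append('=').append(value);
            hasParam = true;
            return this;
        }

        String build() {
            return url.toString();
        }
    }

    public class UrlBuilderTest {

        @Test
        public void buildsQueryString() {
            // An earlier draft of the API took parallel arrays:
            //   build("example.com", new String[]{"q"}, new String[]{"cats"});
            // Writing this call site first made the awkwardness obvious, so
            // the interface became a fluent builder before any
            // implementation existed.
            String url = new UrlBuilder("example.com")
                    .param("q", "cats")
                    .build();
            assertEquals("https://example.com?q=cats", url);
        }
    }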

~~~
illumen
Also, for pair programming it gives the pair a common goal to aim towards.
Both people know the inputs and outputs up front, so one person doesn't just
stare into space whilst the other person codes the function, and then say
"this is what I wanted it to do".

It also helps reduce code, since you only write code to pass tests. You do not
write code for some general theoretical use case that is never required.

------
opinali
Didn't read; disagreed.

