
Good code needs few unit tests - andreyf
http://andreyf.tumblr.com/post/459323557/good-code-has-few-unit-tests
======
dusklight
This argument is not sound at all. The statefulness of your code has nothing
to do with why we write unit tests. Even for a purely stateless function, a
unit test is still useful as a check for correctness. If you are doing TDD,
you wrote the test before you wrote the function, so you know exactly when
you are done (the test now passes). If you wrote the test afterwards, it is
still useful whenever you change the function's behavior, because any unit
tests that exercise the function will tell you where, and in what manner,
your change has affected the rest of the code base.
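
The test-first loop described above can be sketched in a few lines; `slugify`
and its spec are invented here purely for illustration:

```python
# A minimal sketch of the test-first workflow: the test is written before the
# function and defines what "done" means. `slugify` is a hypothetical example.

def test_slugify():
    # Written first: the function below exists only to make these pass.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"

# Only now is the function implemented, until the test passes.
import re

def slugify(text):
    """Lower-case `text` and join its alphanumeric words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

test_slugify()  # passes, so the function is done
```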

If you are writing bad stateful code that has a lot of bugs and you are not
smart enough to figure out how all the states interact, unit tests will help
you a lot, but there are many other scenarios where unit testing is useful.

I would not say good code has few unit tests; I would say arrogant code has
few unit tests. If you write few unit tests, then as the codebase grows you
have less and less certainty about how any change will affect other parts of
it. When the codebase is small, this is not a problem, and if you are very
intelligent, the codebase you can manage in your head may be quite large. But
you are wasting mental capacity that could be doing far more interesting
things, if only you didn't have to remember everything and could instead
trust your unit tests to give you the information you need when you need it.

~~~
jhancock
I'm not completely against TDD. However, one thing I've found is that bugs are
caused by the things you didn't think to write a test for (or to handle in
your code, sans tests). The mundane stuff I see in the vast majority of unit
tests is the kind of thing an experienced programmer almost never screws up.
So how do you write a test for a case you couldn't foresee?

My biggest argument in favor of writing tests is to ensure my code doesn't
break relative to other people's code. Many times I've seen an update to a
ruby gem, with no major or minor version change, break a usually implied
contract with my code. I say implied because duck typers, at least in what
I've seen of the ruby world, tend to be slack about formalizing interfaces. I
write tests to protect against that sort of thing.
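
The kind of pinned-down implied contract described above might look like this
sketch, where `PaymentClient` is an invented stand-in for a third-party
dependency:

```python
# A sketch of a contract test. `PaymentClient` stands in for a hypothetical
# third-party library; the test pins down the implied interface my code relies
# on, so a silent change in a dependency update fails loudly here rather than
# deep inside production code.

class PaymentClient:                 # imagine this class comes from a dependency
    def charge(self, amount_cents):
        return {"status": "ok", "charged": amount_cents}

def test_payment_client_contract():
    client = PaymentClient()
    result = client.charge(1000)
    # The implied contract: charge() returns a dict with these keys.
    assert result["status"] == "ok"
    assert result["charged"] == 1000

test_payment_client_contract()
```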

~~~
youngian
_So how do you write a test for a case you couldn't foresee?_

You don't, but once you discover it, you write a regression test to make sure
it doesn't come back.

However, IANATDD, I am not a test-driven developer - I don't see any reason to
be dogmatic, but I do think unit tests can be extremely valuable. So this
answer might not be TDD-approved.
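
A regression test of the kind described might look like this sketch, where
`parse_price` and its bug are hypothetical:

```python
# A sketch of a regression test: the bug was not foreseen, but once it was
# discovered, this test makes sure it never comes back. `parse_price` is a
# hypothetical function invented for illustration.

def parse_price(text):
    """Parse a price string like '$1,234.56' into integer cents."""
    text = text.strip().replace("$", "").replace(",", "")
    return round(float(text) * 100)

def test_negative_price_regression():
    # Bug report: '-$5.00' used to crash the parser. Pin the fix down.
    assert parse_price("-$5.00") == -500

test_negative_price_regression()
```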

~~~
InclinedPlane
" _So how do you write a test for a case you couldn't foresee?_ "

"You don't, but once you discover it, you write a regression test to make sure
it doesn't come back."

Sure, that's SOP in any good dev house that does testing. But it sidesteps a
key weakness of TDD. The real question is: how do you discover these sorts of
defects in the first place, given that a single developer working in a TDD
style is unlikely to do so? TDD is clearly not a cure-all. Other development
techniques, many of which can complement TDD, such as formal code reviews and
beta testing, can do a better job of getting you to higher product quality
than TDD alone.

~~~
chuckm
"TDD is clearly not a cure-all"

Who said it was?

~~~
binspace
Exactly!! There is no such thing as a cure-all. Anybody angry at a technique
because it does not solve all problems is clearly suffering from Silver
Bullet Syndrome.

~~~
devinj
What? Why does something have to be perfect for somebody to get angry over it?

~~~
binspace
Not exactly sure what you are asking. I mainly meant that no tool you use
will solve all of your problems. Pretty basic common sense, IMO.

------
ajross
The point doesn't follow from the headline. The argument isn't that unit tests
are bad per se, it's that good code has fewer testable abstractions, and thus
requires fewer unit tests.

Which is basically isomorphic to "good code is tight code". Duh.

~~~
andreyf
_The argument is that good code [...] requires fewer unit tests_

Great point, my mistake. Title and headline updated (was: Good code _has_ few
unit tests). I really wish it were "duh", but measuring code quality by "unit
tests per line" [1] is frighteningly common, and seductive in its simplicity
and intuitive appeal.

1. Not necessarily directly. It might be in the form of "I'll add more unit
tests to improve this code" (unit tests either cover an interface or they
don't; going in to 'add more unit tests' to 'improve' code means you didn't
define a clear enough interface to begin with). Or "our open source project
has more unit tests than another" (but I won't point fingers).

~~~
nostrademons
But "unit tests per line" is a _good_ metric. Why? Because programmers (well,
at least I) hate writing unit tests. If you have a high unit-tests-per-line
ratio, then writing one fewer line of code will let you avoid writing several
lines of unit tests.

The easiest way to bump your unit-tests-per-line metric is to delete lines of
code. That's a positive thing, in my book.

------
hendler
The title should have been "A lot of unit tests does not mean you've written
good code." I think the implications of the post are correct, but it would be
incorrect to say that having a lot of unit tests means the code is poor.

There are engineers who have a difficult time seeing things architecturally
(for reasons ranging from ability to priorities). Razor-sharp focus can
produce a lot of unit tests while missing refactorings that might have shrunk
the code base, and therefore the number of unit tests.

[Edited]

------
locopati
The thing I rarely see brought up in discussions of automated testing is
longevity. If you have a team of developers and the codebase is going to be
around for a while, you need to grow tests over time, both so that someone
doesn't accidentally break something they weren't aware of and so that you
can refactor your code with confidence that you're not breaking existing
functionality.

Sure, in an ideal world, everything is isolated by clean interfaces and
encapsulation, but even in that ideal world, you still have a complex system
that produces sometimes unexpected side effects (the emergent behavior of a
software ecosystem).

------
Confusion

      - One quality measure of an abstraction is the complexity
        of its interface (i.e. API size).
      - Another measure is the amount of state the abstraction
        encapsulates.
    

The simplest abstraction of many interfaces is two methods:

    
    
      setParameterForNextCall(paramName, paramValue)
      call(functionName)
    

The API is small and not complex, and the amount of state is small, given
that parameters are only set right before a call, all parameters are cleared
after every call, and no other state is retained. Yet it's clear to anyone
that this isn't a good abstraction, and that it doesn't result in an
implementation that requires few unit tests. It requires an immense amount of
documentation and allows a great many variations that yield bad or unexpected
results.
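
For concreteness, here is a rough Python rendering of that two-method
interface (the wrapped `transfer` function is invented for illustration);
note how a misspelled or missing parameter only fails at runtime:

```python
# A sketch of the degenerate two-method interface described above. Small API,
# little retained state, yet a bad abstraction: every call is stringly typed.

class Api:
    def __init__(self):
        self._params = {}
        # The wrapped function is hypothetical, purely for illustration.
        self._functions = {"transfer": lambda src, dst: f"moved {src} -> {dst}"}

    def setParameterForNextCall(self, name, value):
        self._params[name] = value

    def call(self, function_name):
        # Parameters are cleared after every call, per the comment's spec.
        params, self._params = self._params, {}
        return self._functions[function_name](**params)

api = Api()
api.setParameterForNextCall("src", "A")
api.setParameterForNextCall("dst", "B")
api.call("transfer")   # works, but "source" instead of "src" would only blow up here
```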

    
    
      Hence, good abstractions require few unit tests.
    

Abstractions don't have unit tests: _implementations_ have unit tests. It's
the size and complexity of the implementation made possible by the abstraction
that matters.

~~~
ekiru
That interface is certainly inconvenient to use, and potentially quite
dangerous without a clearParameters().

It's not at all a rebuttal of the article, though. The article points out that
the number of unit tests a design requires is larger for more complicated
designs. An interface like yours would require no fewer tests than another
interface exposing the same functionality using conventional function calls.

------
cruise02
Good arguments need few straw men.

------
tetha
Hm. I think one could condense the argument as follows: number of unit tests
= c * (Complexity * UseCases + StateTests) for some constant c, where
Complexity is a measure of interface complexity, UseCases is the number of
use cases of the API, and StateTests is the number of tests required by the
statefulness of the API.

I think this approximation is at least not wrong. I don't know whether the
characterization is complete, but it looks right enough to fool me :)
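
With invented numbers, the heuristic can be sketched as:

```python
# A sketch of the estimate above; all numbers are made up for illustration.

def estimated_unit_tests(complexity, use_cases, state_tests, c=1.0):
    """c * (Complexity * UseCases + StateTests), per the heuristic above."""
    return c * (complexity * use_cases + state_tests)

# A small stateless API: few tests needed.
print(estimated_unit_tests(complexity=2, use_cases=3, state_tests=0))   # 6.0
# Same interface size, but stateful: the state tests dominate.
print(estimated_unit_tests(complexity=2, use_cases=3, state_tests=10))  # 16.0
```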

However, I think the problem with the argument is that the author assumes
"use cases" is small. While I agree that the number of use cases for an API
should usually be small, there certainly are abstractions with many use cases
that are good even though they require a lot of unit tests.

An example of such a big abstraction is something I developed in a recent
university project: a domain-specific language that is compiled into state
machines over partial bit sequences (used to describe data deserializers).
The major job of this abstraction can be described in a few formulas, and the
abstraction hides enough complexity to be worth it, but it requires quite a
large number of unit tests due to the number of possible edge cases in the
specification of the DSL. Thus I'd consider it a good abstraction in spite of
its needing many unit tests.

------
ollysb
BDD encourages the developer to define a clear behavioral specification for a
unit of functionality before implementing it. By focusing first on usage, I
find that the abstractions I produce are simpler and in turn have fewer unit
tests. I think the author is saying that TDD done well results in fewer tests
for simpler code.

~~~
jlouis
All good code has the property of clear specification. Where BDD really
shines is that it forces you to use your own abstraction in practice, which
often uncovers where it is unwieldy to use.

Another way to get lean code is to use a proper, powerful, static type
system. It forces you to think about the code's "skeleton", and the "tests"
are run automatically by the type checker on each compile or load of the
code. It also reduces the amount of testing needed: there is no way something
other than an integer can be passed for an integer parameter, so you do not
need to account for that.

My code does not use many unit tests per se, but it does autogenerate a lot
of test cases to check certain properties. Autogeneration is excellent at
producing corner cases, so if that is what you are after... I generally find
this a better use of my time as a programmer: better to spend some time
developing the autogenerator than to write boring tests which my code will,
for the most part, pass on the first attempt.
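
A hand-rolled version of that autogeneration idea might look like the
following sketch (the comment names no tool; QuickCheck-style libraries do
this far better). The generated inputs check a round-trip property on a toy
run-length encoder:

```python
# A sketch of property checking over autogenerated inputs, stdlib only.
# The run-length encoder is an invented example target.
import random

def encode(xs):
    """Run-length encode a list: [1, 1, 2] -> [[1, 2], [2, 1]]."""
    out = []
    for x in xs:
        if out and out[-1][0] == x:
            out[-1][1] += 1
        else:
            out.append([x, 1])
    return out

def decode(pairs):
    return [x for x, n in pairs for _ in range(n)]

# Property: decoding an encoding gives back the input, on many random lists.
# The generator naturally produces corner cases (empty lists, long runs).
random.seed(0)
for _ in range(1000):
    xs = [random.randint(0, 3) for _ in range(random.randint(0, 20))]
    assert decode(encode(xs)) == xs
```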

------
synnik
All he is really saying is that simple architectures are better than complex
architectures that fulfill the same functions.

I don't dispute that underlying point, but the way he got there sure was a
lot more complex than "KISS."

------
binspace
Any line of code metric is fundamentally flawed.

~~~
YogSothoth
Including yours? ;-)

~~~
binspace
Exactly!! I can take your response as meaning two different things, both of
which I agree with.

- My assertion is fundamentally flawed.

- Even the line of code metric YOU come up with is fundamentally flawed.

