
Most Unit Testing Is Waste (2014) [pdf] - fagnerbrack
https://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf
======
biesnecker
My experience with testing is like that old adage about advertising: "I know
I'm wasting 50% of my money, but I don't know which 50%." Most of it is a
waste, but it's hard to know in advance which test will, a couple of years
from now, stop an engineer from altering some fundamental contract and
bringing the system down.

~~~
chongli
I'll say that any unit test for a bug which would have been caught by a more
sophisticated type system is a waste. I don't know how much time people spend
writing such "obsolete tests", but I doubt it's insubstantial.

~~~
quanticle
The problem is that sophisticated type systems only catch a subset of the bugs
that a unit test can catch. For example, let's say I'm adding the ability to
transfer funds from one account to another in a banking application. I want
to display a warning when the amount of money being transferred is over a
certain percentage (let's say 95%) of the funds in the account. That's pretty
easy to do in a unit test: create a mock account, call the transferFunds()
method, and verify that the warning is triggered when the value being
transferred is over 95% of the amount in the account.

How would I do that with a type system?
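
For reference, that test comes out roughly like this. A minimal sketch in Python; FakeAccount and transfer_funds are hypothetical stand-ins, not the real banking code:

    import unittest

    class FakeAccount:
        """Hypothetical mock account holding just a balance."""
        def __init__(self, balance):
            self.balance = balance

    def transfer_funds(account, amount):
        """Hypothetical stand-in: returns True when the warning fires."""
        return amount > 0.95 * account.balance

    class TransferWarningTest(unittest.TestCase):
        def test_warns_above_95_percent(self):
            self.assertTrue(transfer_funds(FakeAccount(100), 96))

        def test_silent_below_95_percent(self):
            self.assertFalse(transfer_funds(FakeAccount(100), 94))

    if __name__ == "__main__":
        unittest.main()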

~~~
xorcist
At the risk of veering off topic, I'm genuinely curious what kind of bug that
unit test is designed to catch?

Given that transferFunds shows a warning over that 95% threshold, what type of
change is going to break that logic? Some sort of botched code rewrite where
values don't retain their meanings? That batchMode shouldn't trigger the
warning? That race condition between the check and the actual transfer?

Most of the unit testing I see is more like test-of-definitions (is fullName
still 80 characters wide and doesn't accept null bytes?) which confuses me
because a definition can only be correct or incorrect in context.

I accept the usefulness in dynamic languages in the absence of types. My
personal preference is for tests to be a bit higher level (does login with a
username containing null still cause an exception?) but I have come to terms
with the fact that few people agree with me across multiple organizations, so
there must be some point to this trivial testing that is completely lost on
me.

~~~
nickjj
> My personal preference is for tests to be a bit higher level (does login
> with a username containing null still cause an exception?).

These tests are great, but what if the username has a bunch of validations on
it?

For example, it's reasonable to think a username field might be validated
with:

- Must be required

- Within 1-32 characters that match a certain pattern (let's say a regex to
limit it to lowercase letters, numbers and -)

- Must not be a blacklisted word (admin, administrator, etc.)

- Must be unique (enforced with a database index)

Pretty standard stuff. Are you going to write 5 integration tests for this?
One for each of the four validations, and then the success case? These would
be tests that exercise your entire web framework's routing stack from request
to response (i.e. the user visiting a /register URL and then submitting the
form).

Personally I would not. I would write 1 unit test for each of those things (4
unhappy cases where I assert a specific validation error for each invalid
input and 1 happy case where with valid input I expect 0 validation errors).
In this case, the "unit" would likely be a `register_user` function that
accepts params as input and either aborts with validation errors, or succeeds
by writing the record to the DB.
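
To make that concrete, here's a rough sketch of those 5 unit tests in Python (register_user, the error labels, and the in-memory uniqueness check are all hypothetical; a real version would back the uniqueness check with the DB index):

    import re

    BLACKLIST = {"admin", "administrator"}
    PATTERN = re.compile(r"^[a-z0-9-]{1,32}$")

    def register_user(params, existing=frozenset()):
        """Hypothetical unit: validate params, return validation errors."""
        username = params.get("username")
        if not username:
            return ["required"]
        if not PATTERN.match(username):
            return ["format"]
        if username in BLACKLIST:
            return ["blacklisted"]
        if username in existing:  # stands in for the unique DB index
            return ["taken"]
        return []

    def test_required():
        assert register_user({}) == ["required"]

    def test_format():
        assert register_user({"username": "Bad Name!"}) == ["format"]

    def test_blacklisted():
        assert register_user({"username": "admin"}) == ["blacklisted"]

    def test_unique():
        assert register_user({"username": "bob"}, existing={"bob"}) == ["taken"]

    def test_success():
        assert register_user({"username": "bob-42"}) == []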

Then, for an integration test I would have 2 tests. One to make sure with
invalid input I end up with some type of error displayed in the HTML response
(it doesn't matter which one), and another test with the success case to make
sure things work when they should (such as the user is registered and a new
record was created in the DB).

So I end up with a tiny bit of overlap in tests. Technically the unit test for
the success case doesn't need to be there, since the integration test covers
it, but I usually include it for the sake of completeness; it's usually about
4 lines of code, and I'm not 100% opposed to someone saying it should be left
out.

------
BaronSamedi
This paper is emblematic of a serious problem in the software development
field: lack of empirical research. It is unsurprisingly filled with opinion
and anecdote, with little mention of research in the area. And in the one
actual study cited, "Does Test-Driven Development Really Improve Software
Design Quality?", the author mis-characterizes the findings. Not that there is
a huge set of research on unit testing to refer to--there isn't--and hence the
problem.

"Software development has been characterized from its origins by a serious
want of empirical facts tested against reality, which provide evidence of the
advantages or disadvantages of the different methods, techniques or tools used
in the development of software systems."[1] In my view, the best thing we
could do is to adopt "Evidence-Based Software Engineering"[2] as other
disciplines have. This is more likely to have a major positive impact than the
newest and hottest language, tool, or technique.

[1] Reviewing 25 Years of Testing Technique Experiments.
[https://www.researchgate.net/publication/220277637_Reviewing...](https://www.researchgate.net/publication/220277637_Reviewing_25_Years_of_Testing_Technique_Experiments)

[2] Evidence-Based Software Engineering.
[https://dl.acm.org/citation.cfm?id=999432](https://dl.acm.org/citation.cfm?id=999432)

~~~
Frost1x
I agree entirely. The more development is pushed as an "engineering"
discipline, the more evidence-based research needs to be used to solidify or
disprove certain development dogmas repeated over and over again, which in
many cases seemingly have no empirical evidence beyond a few anecdotal success
cases.

This is the norm in the industry: "you're doing it wrong, the best way is this
way..." -- based on what evidence that pertains to this case or has proven
generalized success applicable here?

In most cases, it's someone's successful anecdotal experiences which worked
for the specific cases they were involved with. That doesn't mean those
approaches can be abstracted away and generalized to all cases, but many in
this industry do that regularly and critique others' approaches based on that.
It becomes this competitive ego contest: well my work was at x, y, z solving q
and was successful--making me an authority on a, b, c's problems solving p
because x, y, z is a leader (appeal to authority fallacy)... etc.

It's one thing to treat development as much of an art (which to me, it very
well still is), but once you start treating it as a concrete discipline, you
need to provide the lime, cement, aggregates, water... evidence and studies
showing approaches and how they fared across controls and varied cases.

~~~
0x445442
Agree with you and parent poster. I remember 15 years ago or more going
through CMMI certification and there was a push to get to level 3. Our company
hired a consultant who came in every eight weeks or so to track our progress,
give direction etc.

On one of his visits he started interviewing engineers individually to see how
things were going, and I asked him what the point of it all was. I could tell
he was quite taken aback. But then, not surprisingly, he said it was to be
more efficient at developing software. I then asked him if he or any
organization he'd worked with had ever actually tracked the time it took to
implement the process, to which he answered no. Then I asked him: how the hell
do you know if any of this is making software development more efficient?

------
njharman
I did not read the article. I did read all the comments.

Unit tests for the most part aren't about "testing". They are a developer
tool: to verify that modifications (refactoring, additions, bug fixes, etc.)
don't break contracts. Oh, and for showing that your code is a codependent
mess of poorly isolated spaghetti; if your unit tests are hard to write, the
code under test is a mess. Unit tests are more useful in languages with loose
or poor type systems.

Many unit tests are not written well: testing more than the
interface/contract, or full of complexity, boilerplate and mocks. Which means
the code needs fixing.

~~~
cryptica
Unit tests don't guarantee that your code units are well designed. You can
still write unit tests for spaghetti code. In fact, sometimes it encourages it
because it encourages antipatterns like dependency injection; because
developers get the urge to inject mock objects into the units from the unit
tests.

Just think about it at a high level. If your module depends on another module,
is it really always necessary that this module be substitutable for another
module that has the same interface (e.g. another similar module or a mock)?
99% of the time, the answer is no! In fact it's often a problem because this
wrongly assumes that if a class exposes the required interface, then it is
compatible. There is more to compatibility than just interface; if the
submodule maintains its own state (OOP), then its behaviour can change in a
way which could break the dependent logic. You can't have Dependency Injection
for everything because it's never so simple; these customizable parts of the
code have to be designed very carefully.

~~~
jrhurst
I actually don't get where you are going with this breakdown. Can you give me
an example?

~~~
cryptica
I mean when you have a class which, instead of importing/including/requiring
its dependencies directly, expects instances of various classes to be passed
into its constructor for example. So the class cannot do its job unless you
pass its dependencies into its constructor... This is an antipattern which
unit tests tend to encourage because it allows you to decouple the unit logic
from its dependencies and it makes it easy to inject mocks in the place of
dependencies.
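
In code, the difference looks something like this (a minimal Python sketch; the class names are made up):

    class PostgresStore:
        """Hypothetical concrete dependency."""
        def fetch(self):
            return []

    class ReportService:
        # Direct style: the class creates its own dependency.
        def __init__(self):
            self.store = PostgresStore()

    class InjectedReportService:
        # Constructor injection: the caller supplies the dependency,
        # which is what makes it trivial to pass a mock in a unit test.
        def __init__(self, store):
            self.store = store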

A unit test should only test a single class, so that means you need to mock
out all other classes which your class depends on. Mocking out dependencies in
the test code is otherwise difficult or not feasible, and that's why
developers often resort to Dependency Injection in their source code; but as
mentioned before, it is an antipattern.

Also, I noticed that a lot of developers confuse unit tests with integration
tests but the definition is quite clear: If your test covers more than one
class without mocks, then it's an integration test. Integration testing does
not necessarily mean end-to-end testing. It could just be a single class with
its internal dependencies.

~~~
tsimionescu
Passing dependencies in the constructor is the only acceptable way of having
stateful dependencies (like a database or some external services). I am
assuming you are against passing things like Math libraries by creating an
instance, which I would agree with.

The only reason dependency injection gets a bad rep is magical frameworks
which obscure the actual wiring and end up causing bigger problems.

~~~
cryptica
I also would not pass in a database client instance. I would pass in the
config for the database client though.

I think that the class which interacts with the database directly via the
client should be tightly coupled to the database client. It's not very often
that you change database and when you do, you can just swap out that entire
class completely. Classes which interact with the database should expose
simple interfaces for performing actions against the database and those
wrappers should be replaceable.

~~~
tsimionescu
Then how would you handle connection pooling, if every class which interacts
with the database has its own instance of the database client?

------
fogetti
The thing is that most people don't understand that the code needs to be
tested anyway. If you don't test it, you are just an idiot who has no proof
that the code does what it claims to do.

Now, because you have to test it anyway, you can just as well spend the same
time writing a unit test instead of executing the code with manually
configured, non-repeatable test cycles, a.k.a. clicking through the UI, or
sending Postman requests, etc.

Also it's kind of selfish not to make something repeatable by others.

And as someone pointed out before me, unit tests are not about testing at all.
They are documentation of the system and what it's supposed to do. They're
also the way to stop the next person who works on the code from ruining
something by not knowing the business rules.

~~~
phereford
To add to this sentiment, tests also give teams the confidence they need. They
give engineers confidence when developing new features, because the test suite
will break if you introduce a regression. They give engineers confidence
during deployment/CD.

While the tangible benefits of unit tests are very important, there are other
intangible benefits that are equally important.

~~~
didericis
I’d say that confidence is a tangible factor.

The single most important reason to test your code seems to be that it allows
you to refactor your code while ensuring it still meets the original outside
expectations.

The author’s main gripe seems to be about tightly coupled tests that make
refactoring larger systems more difficult, and about prioritizing meeting
arbitrary metrics (lines of code covered) rather than thinking critically
about the actual benefits.

This is in line with his emphasis on integration tests determined by the
business. I think that same thought process can be applied to internal
components of code as well, and you can empirically determine the quality of
your approach (roughly) by evaluating how often your tests change. If nearly
every change breaks a test, that probably means they’re low value/too tightly
coupled.

Excessive mocking seems to be the biggest source of evil in that regard.

------
ed-209
"The cross product of those paths with the possible state configurations of
all global data (including instance data which, from a method scope, are
global) and formal parameters is indeed very large. And the cross product of
that number with the possible sequencing of methods within a class is
countably infinite."

Sounds more like a condemnation of OOP than of unit testing, and I do
genuinely feel sorry for the unit-testing OOP purists out there. I prefer to
design more functional methods, which operate on parameters and injected
config instead of instance state and/or globals (cringe). Incidentally, this
approach makes full coverage attainable.
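
Roughly the contrast I mean, as a Python sketch (names invented for illustration):

    class PriceCalculator:
        """State-driven style: the result depends on hidden instance state."""
        def __init__(self):
            self.tax_rate = 0.2  # mutated elsewhere; multiplies the states to cover

        def total(self, amount):
            return amount * (1 + self.tax_rate)

    def total_with_tax(amount, tax_rate):
        """Functional style: everything the result depends on is a parameter,
        so full coverage is just enumerating inputs."""
        return amount * (1 + tax_rate)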

"Large functions for which 80% coverage was impossible were broken down into
many small functions for which 80% coverage was trivial. ... Of course, this
also meant that functions no longer encapsulated algorithms. It was no longer
possible to reason about the execution context of a line of code in terms of
the lines that precede and follow it in execution"

I can reason about such an implementation MUCH more effectively by glancing at
the small bit of higher-level code which integrates everything (and, as
mentioned above, by forsaking instance state and polymorphism). This strikes
me as a bit like advocating a flat directory structure because it's important
to be able to see all your files at once.

~~~
ereyes01
Yeah a lot of the 90s/early 2000s OOP stuff I learned in school seemed to
always result in really tightly coupled systems and bespoke webs of tests and
fixtures that strung along weird dependency chains in unwieldy spaghetti piles
that did no good. Following TDD has helped me land at decoupled functional
interfaces like the ones you've described, and it all scales and composes so
nicely, yet stays very tractable.

Robert Martin sketched 2 diagrams in [1] that elegantly illustrate these two
different design patterns and how testable usually means composable and more
tractable:

[1] [http://blog.cleancoder.com/uncle-bob/2017/03/03/TDD-Harms-Architecture.html](http://blog.cleancoder.com/uncle-bob/2017/03/03/TDD-Harms-Architecture.html)

~~~
MetalGuru
I wish he gave actual examples to illustrate what he was talking about. So you
have an additional API layer the tests hit to call the functions you’re
testing? Do endpoints map to classes? Modules? So aren’t you just tightly
coupling this API to your service? When the service changes, you still have to
update the API. Can you explain to me how this solves the problem?

------
quality_theater
One of the things I dislike the most about unit tests is how they're used for
"quality theater". Some examples:

- Trying to use the coverage percentage metric as a sign of quality. As if a
simple percentage means anything about the quality of the tests. It's as
useless as using lines of code as a way to measure progress.

- Not recognizing that useless unit tests are harmful for code maintenance.
They make refactoring code into a better structure difficult, and developers
just give up because it's too much work to fix tests that weren't even
providing any value in the first place. This ignorance is expressed in
statements like how there's a testing "pyramid" with unit tests at the bottom
and end-to-end tests at the top. Which is a nice-sounding soundbite and image,
but is useless. Forget pyramids, just write tests where it makes sense.

- Code review comments that try to look smart with "where's the unit test?"
Then the developer doesn't want a long drawn-out fight about how a unit test
would be useless here, since there's a huge crowd just cargo-cult yelling
"code coverage!", "unit tests are good!", "pyramid!" So the developer just
writes the stupid test to get the code merged. This is also an example of how
harmful code reviews can sometimes be: when there's a popular stupid idea,
code reviews perpetuate it, because the developers who know better just get
tired of fighting the same fight over and over again.

I really hate useless unit tests. I've seen tests that set up a mock,
configure it to return a value when called, call the code using the mock, then
check that the returned value is the mocked value. This tested absolutely
nothing! I've seen unit tests that verify every line of the method was called;
completely pointless. The point is supposed to be to verify that for a given
input, the code produces a given output, not to lock it into a specific
implementation by verifying every line of code ran a certain way.
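
For the avoidance of doubt, the useless pattern described above looks like this (Python, unittest.mock; names made up):

    from unittest.mock import Mock

    def get_price(repo, sku):
        return repo.lookup(sku)

    def test_get_price():
        repo = Mock()
        repo.lookup.return_value = 42          # configure the mock...
        assert get_price(repo, "sku-1") == 42  # ...then assert the mock's own value

    # The assertion can only fail if Mock itself is broken; it tests nothing.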

There is an idea for structuring software "functional core, imperative shell".
Write the software this way and the natural place for unit tests and
integration tests becomes obvious. But nope, the industry is all about unit
test coverage percentage, stupid pyramids, looking good in code reviews. It's
all quality theater, not actual focus on quality.
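
For reference, a thumbnail of that structure in Python (hypothetical names, not a prescription):

    # Functional core: pure logic, unit-testable with no mocks.
    def apply_discount(total, pct):
        return round(total * (1 - pct / 100), 2)

    # Imperative shell: all the I/O, covered by integration tests.
    def checkout(db, order_id):
        order = db.load(order_id)
        db.save_total(order_id, apply_discount(order["total"], order["pct"]))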

~~~
reallydontask
> I really hate useless unit tests. I've seen tests that set up a mock,
> configure it to return a value when called, call the code using the mock,
> then check that the returned value is the mocked value. This tested
> absolutely nothing!

I've had a similar disagreement about the validity of such a test.

The other mocking issue I've seen, is people mocking blackbox third-party APIs
over which they have absolutely no control, which sometimes leads to passing
unit tests, failing integration tests and head scratching all around.

~~~
sime2009
I'm with you people on this mocking issue. It is about time that mocking be
recognised as an anti-pattern in automated testing, or at least as an
undesirable tool of last resort.

Mocks often contain the same bad assumptions and misunderstandings about the
mocked API which the developer used during the implementation of the unit they
are trying to test.

If you feel the need to mock something then you should first ask yourself
whether an integration test can do the job for you. Actually, I would
generalise this advice to: Don't write a unit test when you can write an
integration test.

------
miccah
> If you find your testers splitting up functions to support the testing
> process, you’re destroying your system architecture and code comprehension
> along with it. Test at a coarser level of granularity.

The emphasis here should be on the reason for splitting up functions. Long,
complex functions can be difficult to understand, and removing a few lines in
exchange for a (well named) function call is very beneficial for the reader.
The opportunity for testing comes from this delegation. A function call is a
contract, and the test ensures it complies. Now the reader can comprehend what
the code is doing at a higher level, trusting that the sub-functions do what
they intend.

~~~
tsss
It's quite the opposite: The whole point of the architecture is often to
facilitate testing. If we didn't have to write tests we could do without a lot
of that boilerplate.

------
dep_b
Doing a typical "front-end" mobile application I write the following tests:

* Integration tests that test the API. I get a ton of value out of these,
though it's sometimes hard to guarantee a certain state at the API's end. I
use them while building the APIs, but they can also detect any kind of problem
after an API upgrade, etcetera.

* Unit tests that test data transformations. This is stuff like date
formatting, building headers into a request given certain input, and building
more complex ViewModels that take a few structs as input and turn them into
something that reflects the actual process happening in the view. They're
valuable while building the logic, they help separate the logic since you need
to make it testable, and they also help a ton if you find a bug, since a bug
simply means adding another test case to see if something goes wrong and then
fixing it.

I don't think I even get to 30% code coverage, but I think what I cover is
super valuable, and the other 70% is usually mostly CRUD boilerplate.
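
A transformation test of the date-formatting kind can be as small as this (the helper is hypothetical, and the assertion assumes an English locale):

    from datetime import date

    def display_date(d):
        """Hypothetical ViewModel helper: date -> string shown in the view."""
        return d.strftime("%d %b %Y")

    def test_display_date():
        assert display_date(date(2020, 1, 31)) == "31 Jan 2020"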

~~~
hhjinks
I personally find testing incoming APIs a waste. Sometimes I write tests to
verify that I treat the incoming data correctly if I transform it in fancy
ways, but other than that, you should trust the API contract. It's almost like
writing tests for the frameworks you're using.

Granted, that's from a unit test perspective. Integration tests of APIs are
invaluable. I wish we had them already where I currently work.

~~~
Mandatum
> but other than that, you should trust the API contract

As someone who's been working in corporate integration for the past 6 years..

Never trust a contract. Be it WSDL, OpenAPI spec, Word documents or otherwise.
I've worked with large tech vendors, I've worked with finance, I've worked
with large consultancies. The only people that seem to get it right are the
people you don't want to work with, because it's soul-crushing - think HL7 et
al.

~~~
hhjinks
The point is you can't write unit tests for broken API contracts, so either
you trust that the contract is upheld, or you've got bigger problems on your
hands.

~~~
Mandatum
Do you mean integration tests? I don't think API contracts should be a part of
a unit test. A unit test should be self-contained, unless of course there is a
tight coupling to an API.. Which I would steer clear from.

That's why I value integration tests over most things. I can see immediately
that something is broken at a high level, what business impacts it has and
explain what systems are affected.

------
sulam
I got progressively more and more frustrated with this article, largely
because he keeps making statements about the impossibility of covering all
states a class may take on (true!) but then follows up by espousing more use
of integration and system tests, which are clearly combinatorially harder to
test “completely”. He also implies at the very beginning that somehow this was
driven by the switch from FORTRAN to OOP, as if FORTRAN avoided this
combinatorial explosion (magic!) and it’s only because of polymorphism that we
live in the “unit testing is good” world.

The logical contradictions eventually overcame me.

~~~
pknopf
> espousing more use of integration and system tests, which are clearly
> combinatorially harder to test “completely”

The point is that when you do integration tests, you test the underlying
classes in the way that _actually matters_. You can't test every possible
condition a unit can have, but you can test for the most likely ones.

~~~
crdoconnor
This is my experience too. Realism in testing is criminally underrated while
code coverage is criminally overrated.

IME unit tests can only effectively substitute for integration tests where
you're testing logical/algorithmic code with simple function inputs/outputs.

~~~
0xB31B1B
I somewhat agree, somewhat disagree. IMO the largest value I get from unit
tests is the confidence that I can make changes and understand what breaks.
Refactoring w/o unit tests makes me feel like I am flying blind.

~~~
closeparen
The units we test are almost never big enough for internal refactoring. It’s
the decomposition into units that wants refactoring, and there the test suite
actively fights back (mock expectations in particular).

If there were enough code involved in a test that we could meaningfully
refactor it while keeping the test green, we would call it an integration
test.

------
dang
A thread from 2017:
[https://news.ycombinator.com/item?id=15591190](https://news.ycombinator.com/item?id=15591190)

2016:
[https://news.ycombinator.com/item?id=12666454](https://news.ycombinator.com/item?id=12666454)

2016:
[https://news.ycombinator.com/item?id=11799272](https://news.ycombinator.com/item?id=11799272)

2014:
[https://news.ycombinator.com/item?id=7353767](https://news.ycombinator.com/item?id=7353767)

------
desc
When I started programming, I'd think up lots of clever ways to avoid
repeating things. Ten years on, that code is a nightmare to maintain because
changing the behaviour for just _one_ call site of hundreds is next to
impossible, because everything gets funnelled through one extremely DRY group
of modules.

Redundancy vs. Dependencies: it's the dependencies which kill you. Redundancy
is often a _good thing_.

If you state an algorithm only once, as the implementation, then the next
programmer only knows what it does, not whether it's correct.

This is, in my opinion, the main value of unit tests: state the algorithm
twice, once as implementation and once as expectations, and _if they don't
agree_ then something's wrong. While the odds of a bug existing in any line of
code haven't changed, the odds that the exact same bug exists in both sides
are much lower.
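
A trivial illustration of "stating it twice" (Python; the function is made up):

    def median(xs):
        # The algorithm, stated once as implementation...
        s = sorted(xs)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

    def test_median():
        # ...and stated again, independently, as expectations.
        assert median([3, 1, 2]) == 2
        assert median([4, 1, 3, 2]) == 2.5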

Any bugs which survive that are probably in design rather than implementation,
ie. my mental model of what the module should achieve is wrong somehow.
Catching that is a job for integration tests.

I definitely agree with the 80/20 rule here. 100% code coverage is neither
necessary nor desirable, and 20% is fine if it's the most valuable 20%.

------
the_arun
Unit tests = the assurance/confidence that if something changes the logic,
this test will break and you will know it. Aside from just limiting bugs,
this is more powerful.

~~~
mobjack
95% of the time when a unit test fails for me, it is the test that needs
fixing instead of my code. It hardly inspires confidence.

~~~
qznc
To me this suggests that there is no clear understanding what your units are
supposed to do.

My second theory would be that the unit tests are written by the inexperienced
developers while the good developers write the other code.

~~~
mobjack
The tests will involve 5 mocked dependencies just to test a simple if
statement. They test the implementation and are a waste of time.

Isolated units that are well tested usually don't need to be updated in my
experience so they rarely break.

~~~
jrhurst
Tests that have 5 mocked dependencies are usually a sign we done messed up
somewhere.

------
dmitriid
Kent Beck (yes, _the_ Kent Beck): “I get paid for code that works, not for
tests, so my philosophy is to test as little as possible to reach a given
level of confidence”

[https://stackoverflow.com/a/153565](https://stackoverflow.com/a/153565)

------
rileymat2
> If you want to reduce your test mass, the number one thing you should do is
> look at the tests that have never failed in a year and consider throwing
> them away. They are producing no information for you — or at least very
> little information.

I feel like there is some subtlety here that most will miss. Specifically,
"and consider". That is the part we are really bad at.

~~~
acchow
Those tests are helpful when you do major architectural changes, which may not
happen every year.

Tests also help in general development: make some changes, then fix the tests.
And if you are running tests locally on your machine (and offline), how can
you be sure that every time a test fails locally, the failure is logged?

~~~
war1025
There's a difference between tests that haven't broken because they are in
stable code, and tests that haven't failed because they are testing
tautologies or even just completely failing to break when the underlying code
is broken.

I've lost count of how many times I've gone into an old test suite at work and
found that the tests were still passing even though the code they were testing
had completely changed or been removed.

Sometimes tests are written very poorly. The codebase benefits from their
removal.

------
ChrisMarshallNY
I've never tested 100% of my code.

But I've also never written a unit test that didn't expose bugs.

That's mainly because I know what parts of the code will be dicey, and focus
unit tests on them.

~~~
aey
That’s funny. I only write tests to prove that the program does what it’s
meant to do and not to find bugs.

The goal is to prevent a future programmer, including myself, from breaking
any of the declared properties.

~~~
falsedan
Tests don’t prove anything, they just show the intended behaviour. Only full
verification proofs show correctness.

Too often I’ve seen “tests show it’s correct” suites horribly fail to provide
value when the behaviour changes in a unit of business logic and the
brittleness needs to be unwound and replaced with robust assumptions.

------
WrtCdEvrydy
This is very interesting when you contrast it with
[[https://dl.acm.org/citation.cfm?id=3106270](https://dl.acm.org/citation.cfm?id=3106270)].

I'm always wary of arguments that rely on 'unit tests are more complex than
the code'. If that's true, then that's as it should be. Tests have to
encapsulate the software contract that's being enforced, and should be more
complex than the underlying code.

------
lumost
The utility of unit tests greatly depends on the type of software you're
building. Building a DB without tests is difficult and unlikely to win many
customers - particularly when news of that gnarly data race comes out. Whereas
a crud webapp where everyone can see what's working and what's not is a bit
hit or miss.

At the very least testing helps communicate to a code reviewer "hey this thing
does what I think it does".

------
msclrhd
When implementing lexers and parsers for language specifications, I have taken
the approach of testing each token (keyword, number, etc.) for the lexer for
each EBNF symbol. This repeats tests for common keywords and other
constructions, but means I can see that the lexer handles all the tokens in
each EBNF symbol. In this way, these tests are more BDD (behaviour-driven
development) in style, but without things like gherkin/cucumber. With this
approach, I do full coverage tests for a given symbol/token, and just check
the basic symbol/token case elsewhere.
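
In Python that style might look like the following (the lexer here is a toy stand-in, not the real one):

    import pytest

    def tokenize(src):
        """Toy stand-in for the real lexer: returns (kind, text) pairs."""
        if src.isdigit():
            return [("NUMBER", src)]
        if src in {"if", "then", "else"}:
            return [("KEYWORD", src)]
        return [("IDENT", src)]

    # One case per token a symbol can contain; common tokens are
    # deliberately re-checked under each EBNF symbol's test group.
    @pytest.mark.parametrize("src,kind", [
        ("if", "KEYWORD"),
        ("42", "NUMBER"),
        ("foo", "IDENT"),
    ])
    def test_tokens(src, kind):
        assert tokenize(src) == [(kind, src)]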

I do a similar thing with the parser tests -- have a test for each valid
complete symbol production, and then tests for as many error cases as the
parser can handle for that symbol.

With that base, I can add additional tests for bugs and additional error
recovery cases I implement.

I never see unit tests as a waste as they provide a set of regression tests
that are invaluable for refactoring and making other changes to the code, like
implementing new features or better error handling.

------
thisgoodlife
Most insurance is waste

~~~
Toine
If your house didn’t burn in the last two years, consider not insuring it
anymore.

------
nickthemagicman
I don't just code for a single project. I create libraries when I write a
piece of code that I know will be useful down the line. And for those
libraries that are going to be reused throughout many applications, I think
unit testing is gold.

------
devin
Absent from this discussion is the use of generative testing. At the unit
level they can automatically enumerate a variety of scenarios the programmer
would likely overlook.
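
For example, with the Hypothesis library in Python (the sort function is a stand-in for whatever unit is under test):

    from hypothesis import given, strategies as st

    def my_sort(xs):
        return sorted(xs)  # stand-in for the unit under test

    @given(st.lists(st.integers()))
    def test_sort_is_ordered_and_same_length(xs):
        out = my_sort(xs)
        assert len(out) == len(xs)
        assert all(a <= b for a, b in zip(out, out[1:]))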

~~~
steve_adams_86
I just discovered this, only it was called 'property-based testing'. Do you
recommend any resources for learning to use it effectively? I haven't dug in
yet, but so far it hasn't clicked when thinking about it. I like the idea a
lot, but I guess I don't see where it would be most useful for my own code.

------
he0001
I run a one-man project and I write test-first unit tests for all of my code.
This has enabled me to advance the code with new functions without breaking
current APIs. The tests are not in the way and usually don't need to change
when I change the implementation. It has been tremendous for productivity
because I don't need to worry about whether stuff breaks. I can trust my tests
to catch that, and so far they always have.

------
kstenerud
After 15 years of writing unit tests, I haven't reached the same conclusion as
the author.

What I've found in my time is that unit testing can be good, but like anything
it's not a panacea. It requires discipline, and like normal code, it has code
smells.

Black box unit tests are the most likely to be good tests, and white box unit
tests are the most likely to be bad tests. The more you depend on the inner
workings of a function in order to test it, the more likely it is that you are
coupling your test to the implementation rather than the purpose of the unit
being tested. Once you tie to the implementation, refactoring becomes a LOT
harder, because changes will break the tests even if they don't break the
functionality.

Mocks are also a major source of trouble, and more likely to be a code smell.
If your tests are using a mock to test how many times your unit called it,
either your tests are bad or your architecture is wrong.

There are three main kinds of code:

- Code that fetches data

- Code that stores data

- Code that transforms data

Mocks are necessary when you mix these. If you have a function that opens a DB
connection, fetches data, transforms the data, and then stores the data, you
now have an extra problem to deal with (the database), when all you wanted to
do was test the transformation. Things would be far easier if you separated
the transformation out, tested that in isolation, and then integrated that
encapsulated functionality with the fetching/storing code. This also improves
separation of concerns and reduces code duplication, since now your
fetching/storing code can be generalized and also tested in isolation.
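
i.e. instead of one function that fetches, transforms, and stores, something like this (hypothetical names):

    def normalize(rows):
        """The transformation: pure, tested with no database at all."""
        return [{"email": r["email"].strip().lower()} for r in rows]

    def sync_users(db):
        """Thin fetch/store shell around the pure core."""
        db.save_users(normalize(db.load_users()))

    def test_normalize():
        assert normalize([{"email": " Bob@X.COM "}]) == [{"email": "bob@x.com"}]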

Actually I lie. There is a fourth kind of code: code that modifies state. This
is the evilest, smelliest code around, and it's also something that
unfortunately we can't get completely away from. But we can manage it, by
isolating state, reducing the need for or scope of the state, and providing
"configuration object" function entry points to make testing these
monstrosities less nasty.

Code coverage is not just a measure of quality, but also of waste. If your
code is not being called, then one of three things is happening:

1. It's error-checking code for another API it's calling, which you normally
shouldn't be writing tests for (unless that API is known to be buggy and you
need to guard against it).

2. It's not contributing to the goals of the program, and can be taken out.

3. It does contribute to the goals of the program, in which case you need a
test for it.

You can't reach 100% code coverage because of (1). But you absolutely should
check WHICH code is covered in your tests because of (2) and (3). Anything
higher than 80% coverage is pure luck, and tells you nothing about quality or
wastage. In many cases, even 60-70% is sufficient.

~~~
winrid
If you don't use mocks then won't you just end up with integration tests?

~~~
falsedan
Who cares what they’re called as long as they’re fast and don’t crash when
running at scale/fill the disk or memory

~~~
winrid
Because if you're not using mocks the tests get unwieldy. Suddenly changing
components three layers down the dependency tree breaks "unit" tests at the
top. Those are the tests you just end up @ignore'ing.

~~~
msclrhd
Mock things like database access, or better yet pass the database connection
via an interface and create a test version of the database access so you can
create a database-like thing for your tests (e.g. an in-memory database).
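
A sketch of that pattern (Python; the interface is hypothetical):

    class UserStore:
        """The interface the application code depends on."""
        def get(self, user_id):
            raise NotImplementedError

        def put(self, user_id, user):
            raise NotImplementedError

    class InMemoryUserStore(UserStore):
        """Database-like test double backed by a dict."""
        def __init__(self):
            self._rows = {}

        def get(self, user_id):
            return self._rows.get(user_id)

        def put(self, user_id, user):
            self._rows[user_id] = user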

If you are depending on another component of the application (a lexer, a JSON
class, a maths function, etc.) don't mock that because if that class breaks
you _want_ tests to fail instead of silently passing because the broken
class/function was mocked.

If you are depending on third-party libraries, don't mock those unless you have
to (i.e. if you cannot run the tests). This will help avoid unexpected bugs
after upgrading libraries or if supporting different versions of a library.

If you are writing code for a complex infrastructure (e.g. a plugin for an
IDE), try to use test-specific versions of enough of the infrastructure to get
the rest functioning, and use the real versions of as much as you can. This
will help pick up issues in your code when new versions of that infrastructure
make changes -- you want your tests to fail in this case, as the real code
would fail.

Understand why your tests are breaking and address that. Use @Ignore as a last
resort. If APIs or behaviour has changed, update the code and tests to reflect
that. If you are supporting different versions, create version-specific
compatibility layers.

~~~
winrid
As someone who has written thousands of tests: where I disagree with you is
that, when writing unit tests, those application components should have their
own tests. Then you can cleanly mock them. Even if they are utilities etc.

Same goes for application code in the same service or whatever. Mock those
calls and only test your unit of code.

If you don't do this, things can be fine. But at some point the code base will
become unwieldy and changing code in one place will break tests all over the
place.

Integration tests are fine and they have their place but are not a substitute
for proper unit tests.

------
yodon
I've generally preferred integration tests to unit tests but Storybook has
completely changed how I write React components, for the better.

~~~
notus
Agreed on integration tests over unit tests. I get the most bang for my buck
with them. Storybook is pretty nice. I use it for developing smaller
components, more of the building blocks of the application. Have you ever
checked out Kent C Dodd's react-testing-library? It was a refreshing approach
to testing react components IMO and it has gained quite a bit of adoption.

~~~
yodon
The docs for react-testing-library look like they offer a great deal of wisdom
in a tiny package. I find my React components tend to be simple enough that
I'm generally far more concerned about whether the CSS works than the JS,
leading me to probably keep my focus on Storybook for the moment but I suspect
I'll be reaching for react-testing-library before too long. Thanks for the
heads up about it.

------
henrik_w
I wrote a response to this article (and the follow-up) here:
[https://henrikwarne.com/2014/09/04/a-response-to-why-most-unit-testing-is-waste/](https://henrikwarne.com/2014/09/04/a-response-to-why-most-unit-testing-is-waste/)

------
revskill
I agree. Unit tests are for testing API usage, not implementation. We can use
them to figure out whether the API design is good or not.

But the problem is that the API changes very often, and we don't want to
change both the implementation and the tests just for the sake of an API
change.

Integration tests are enough in most cases.

Even the API is an implementation detail of an abstraction.

------
derangedHorse
This article seems to ignore the fact that unit tests are typically made for
other programmers down the line as a form of documentation, as an informal
description of business rules, and as a way to gauge whether new code will
break the product in a very obvious way.

------
HankB99
tl;dr I made it to "Testing does not increase quality; programming and design
do" and quit.

It seems clear that the author has a strong opinion on this and perhaps that
has been formed by exposure to unit tests done wrong. I suppose his article is
worth reviewing and asking if any of my unit tests suffer from the problems he
identified. I think most of his complaints relate to badly applied unit tests
and I think we can all agree that _any_ methodology can be badly applied. That
does not provide grounds to condemn the methodology.

Having used Fortran (and Macro-11) as the first languages I employed
professionally, I do not recall any thrust for unit testing at that time.
Maybe it was just the shop I worked in. More recently I have used unit tests
for Go, C/C++, Python, Perl, Java, shell scripts and probably some I'm
forgetting. I wouldn't consider coding anything w/out some kind of unit test.

Back to the quote "Testing does not increase quality; programming and design
do", I disagree vehemently with the claim that testing does not increase
quality. While true that one cannot 'test in' quality, I find that designing
code to be testable provides higher-quality results. This is particularly true
for things like shell scripts, which often start small and grow until they
are hundreds of lines of in-line code. Testable code is generally better
partitioned and structured than it would otherwise be.

A second benefit to unit testing is immediate feedback and completion of parts
of the system. I get a feeling of accomplishment when I complete something
that passes its tests and prefer that to deferring satisfaction until the
entire thing works.

Finally, if I test the bits in isolation, I can provide them data for all of
the corner cases I think could cause trouble and make sure they work for a
wider range of inputs than could easily be done during integration testing.
When I do get to the point where I put the bits together, I have a much higher
success rate with integration testing.

------
chrisweekly
Obligatory(?) "Unit testing without integration testing" pic:

[https://www.reddit.com/r/ProgrammerHumor/comments/5pbl2q/two...](https://www.reddit.com/r/ProgrammerHumor/comments/5pbl2q/two_unit_tests_but_no_integration_test/)

~~~
qznc
We have this one in our office:
[https://twitter.com/dave1010/status/613601365529657344](https://twitter.com/dave1010/status/613601365529657344)

------
kissgyorgy
> If you find your testers splitting up functions to support the testing
> process, you’re destroying your system architecture and code comprehension
> along with it. Test at a coarser level of granularity.

This is very much in line with DHH's opinion about testing in general (the
"TDD is dead" discussion is about this). He said that he doesn't want to split
logic up for the sake of testing, so HE drives the design, NOT the tests. I
very much agree with this. I think a human can design better code (meaning
code that's easier to use, and so easier to consume by other humans) than any
automated process (like TDD).

I noticed his blog post quotes this PDF: [https://dhh.dk/2014/tdd-is-dead-long-live-testing.html](https://dhh.dk/2014/tdd-is-dead-long-live-testing.html)

