
Unit Testing Is Overrated - ingve
https://tyrrrz.me/blog/unit-testing-is-overrated
======
chimprich
I tend to mentally divide code into roughly two types: "computational" and
"plumbing".

Computational code handles your business logic. This is usually in the
minority in a typical codebase. What it does is quite well defined and usually
benefits a lot from unit tests ("is this doing what we intended"). Happily, it
changes less often than plumbing code, so unit tests tend to stay valuable and
need little modification.

Plumbing code is everything else, and mainly involves moving information from
place to place. This includes database access, moving data between components,
conveying information from the end user, and so on. Unit tests here are next
to useless because (a) you'd have to mock everything out, (b) this type of
code seems to change frequently, and (c) it has less clearly defined
behaviour.

What you really want to test with plumbing code is "does it work", which is
handled by integration and system tests.
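
Concretely, the split might look like this (a Python sketch with invented
names): the "computational" piece is a pure rule that unit-tests cleanly,
while the "plumbing" piece only ferries data and is better exercised by an
integration test against the real parts.

```python
def apply_discount(price: float, loyalty_years: int) -> float:
    """Computational: a well-defined business rule, easy to unit test."""
    rate = 0.05 if loyalty_years >= 3 else 0.0
    return round(price * (1 - rate), 2)

def save_order(db, order: dict) -> None:
    """Plumbing: just ferries data to storage. A unit test here would
    only assert that we called a mock, which proves little."""
    db.insert("orders", order)

# The computational part gets direct, valuable unit tests:
assert apply_discount(100.0, 5) == 95.0
assert apply_discount(100.0, 1) == 100.0
```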

~~~
qmmmur
What if you write code that isn't for a business? How does your workflow apply
then?

~~~
jugg1es
Business logic does not mean it has to be for a business. It's more like
calling the pointy part of a spear the "business end": it's the part that does
the job.

------
claudiusd
I can't believe I'm wasting my time on another testing debate.

Speaking as a formerly young and arrogant programmer (now I'm simply an
arrogant programmer), there's a certain progression I went through upon
joining the workforce that I think is common among young, arrogant
programmers:

1. Tests waste time. I know how to write code that works. Why would I
compromise the design of my program for tests? Here, let me explain to you all
the reasons why testing is stupid.

2. Get burned by not having tests. I've built a really complex system that
breaks every time I try to update it. I can't bring on help because anyone who
doesn't know this code intimately is 10x more likely to break it. I limp to
the end of this project and practically burn out.

3. Go overboard on testing. It's the best thing since sliced bread. I'm never
going to get burned again. My code works all the time now. TDD has changed my
life. Here, let me explain to you all the reasons why you need to test
religiously.

4. Programming is pedantic and no fun anymore. Simple toy projects and
prototypes take forever now because I spend half of my time writing tests.
Maybe I'll go into management?

5. You know what? There are some times when testing is good and some times
where testing is more effort than it's worth. There's no hard-set rule for all
projects and situations. I'll test where and when it makes the most sense and
set expectations appropriately so I don't get burned like I did in the past.

~~~
drchopchop
One of the dark arts of being an experienced developer is knowing how to
calculate the business ROI of tests. There are a lot of subtle reasons why
they may or may not be useful, including:

- Is the language you're using dynamic? Large refactors in Ruby are much
harder than in Java, since the compiler can't catch dumb mistakes

- What is the likelihood that you're going to get bad/invalid inputs to your
functions? Does the data come from an internal source? The outside world?

- What is the core business logic that your customers find the most value in
/ constantly execute? Error tolerances across a large project are not uniform,
and you should focus the highest quality testing on the most critical parts of
your application

- Test coverage != good testing. I can write 100% test coverage that doesn't
really test anything other than physically executing the lines of code. Focus
on testing for errors that may occur in the real world, edge cases, things
that might break when another system is refactored, etc.
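
The last point can be made concrete with a small (hypothetical) example: the
first test below executes every interesting line yet verifies nothing, while
the second actually pins down the contract, including its edge cases.

```python
def parse_port(value: str) -> int:
    """Parse a TCP port number, rejecting anything out of range."""
    port = int(value)
    if not (0 < port < 65536):
        raise ValueError("port out of range")
    return port

def test_parse_port_runs():
    parse_port("8080")  # lines execute, nothing is verified

def test_parse_port_behaviour():
    # Asserts the result and the real-world edge cases.
    assert parse_port("8080") == 8080
    for bad in ("0", "65536", "-1"):
        try:
            parse_port(bad)
            raise AssertionError("expected ValueError for " + bad)
        except ValueError:
            pass

test_parse_port_runs()
test_parse_port_behaviour()
```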

~~~
msclrhd
I now tend to focus on a black box logic coverage approach to tests, rather
than a white box "have I covered every line of code" approach. I focus on
things like format specifications, or component contract
definitions/behaviour.

For lexer and parser tests, I tend to focus on the EBNF grammar. Do I have
lexer test coverage for each symbol in a given EBNF, accepting duplicate token
coverage across different EBNF symbol tests? Do I have parser tests for each
valid path through the symbol? For error handling/recovery, do I have a test
for a token in a symbol being missing (one per missing symbol)?

For equation/algorithm testing, do I have a test case for each value domain.
For numbers: zero, negative number, positive number, min, max, values that
yield the min/max representable output (and one above/below this to overflow).
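
As an illustration of that value-domain checklist (a sketch, with a clamping
function invented for the purpose): one case per domain, including the values
just outside the representable range.

```python
INT16_MIN, INT16_MAX = -32768, 32767

def clamp_int16(x: int) -> int:
    """Clamp an integer into the 16-bit signed range."""
    return max(INT16_MIN, min(INT16_MAX, x))

CASES = [
    (0, 0),                      # zero
    (-5, -5), (5, 5),            # negative / positive
    (INT16_MIN, INT16_MIN),      # min representable
    (INT16_MAX, INT16_MAX),      # max representable
    (INT16_MIN - 1, INT16_MIN),  # one below min -> clamps
    (INT16_MAX + 1, INT16_MAX),  # one above max -> clamps
]

for value, expected in CASES:
    assert clamp_int16(value) == expected
```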

I tend to organize tests in a hierarchy, so the tests higher up only focus on
the relevant details, while the ones lower down focus on the variations they
can have. For example, for a lexer I will test the different cases for a given
token (e.g. '1e8' and '1E8' for a double token), then for the parser I only
need to test a single double token format/variant as I know that the lexer
handles the different variants correctly. Then, I can do a similar thing in
the processing stages, ignoring the error handling/recovery cases that yield
the same parse tree as the valid cases.

------
jacquesm
Unit tests are not a goal; they are a tool. Striving for 100% test coverage is
nonsense, but not testing your software at all levels is bad. Middle ground
and moderation are where it's at, not a black-vs-white choice. Just like every
other tool, you should understand its strengths and weaknesses and apply it
properly, not dogmatically, or it will bite you.

~~~
KingOfCoders
The benefit of 100% test coverage is that there are no more discussions about
what to test and what not to test. When in doubt, test it. In larger groups of
developers there are otherwise ongoing discussions about whether A needs to be
tested or not. I have seen culture wars around this, between people who don't
want to test and, in the eyes of others, _never_ test enough, and vice versa.
Especially with a diverse development force of different ages, seniority
levels and cultural backgrounds.

It's often easier to just aim for 100% test coverage instead (while excluding
some categories of files).

EDIT: I would not and did not start with 100% unit testing. But where ongoing
culture wars and discussions didn't lead to a workable compromise, 100% test
coverage worked for me, and after a few days test coverage was a non-issue.

~~~
WrtCdEvrydy
> just aim for 100% test coverage instead (with excluding some categories of
> files).

That's where the 'gaming' comes in.

The tests start just going through lines without hitting a single expect
statement.

The ignore files start becoming battlegrounds in the PRs because people just
exclude half the damn project.

We just have a simple rule: if you wrote code, you have to write coverage for
it. If it breaks and your test doesn't catch the breakage, the bug fix goes
back to you. Some people will ask "but what about what I'm working on now?",
and you'll have to communicate that you feel your previous work was far more
important.

~~~
deleuze
> the bug fix goes back to you. Some people will ask "but what about what I'm
> working on now", you'll have to communicate that you feel your previous work
> was far more important.

this feels punitive, especially in the eyes of management. unless you're in a
safety critical area where fully testing every code path is a hard
requirement, people will eventually write bugs.

i'd rather work somewhere that recognizes defects occur and has a fast
iterative process to push out new changes rather than one based on shame for
having written a bug.

~~~
WrtCdEvrydy
> has a fast iterative process to push out new changes rather than one based
> on shame for having written a bug.

That is the fastest, most iterative process we have found so far: as the
expert on the original code, you are able to deliver the best outcome.

You're not being shamed for writing a bug, you're being shamed for not testing
your code.

------
DougBTX
Perhaps the problem isn't mocking dependencies, but trying to hide the fact
that GetSolarTimesAsync needs two pieces of data to work: a date and a
location.

But the original signature is just this:

    
    
        public async Task<SolarTimes> GetSolarTimesAsync(DateTimeOffset date)
    

That introduces a lot of complexity:

* The SolarCalculator needs to be able to work out its own location, so it needs a LocationProvider

* SolarCalculator needs to be IDisposable since it owns a LocationProvider

* The SolarCalculator will need more methods if it ever needs to calculate the times in a different location

* If fetching the location is slow, but the application needs to calculate times for multiple dates (e.g. to build up a table of times), then the SolarCalculator will need a method that takes in an array of dates to be efficient

But all that could be solved by making the function take all of the arguments
it needs to return its value:

    
    
        public SolarTimes GetSolarTimes(DateTimeOffset date, Location location)
    

No location provider needed, no IDisposable, just one efficient stand-alone
method.

Unit testing this is now just:

    
    
        var calculator = new SolarCalculator();
        var actual = calculator.GetSolarTimes(new DateTimeOffset(...), new Location(...));
        var expected = new SolarTimes(...);
        actual.Should().BeEquivalentTo(expected);
    

...so, perhaps the issue isn't that unit testing is a bad idea, but that code
which is hard to use in a unit test might also be hard to use in a wider
application? And perhaps the fix is to make the code easier to use?

~~~
chrisandchris
I agree with you, and that's usually my observation as well: unit tests tend
to show how decoupled and re-usable your code is. If a function gets hard to
test with a unit test, that usually points towards an architectural issue.

~~~
fernandotakai
if i got a dollar every time i started testing a piece of code, realized that
it was waaaay too complex, and refactored the code so the tests were easier to
write/understand... i would have a lot of dollars.

100% of the time, it was the right idea and the code became a lot better.

------
m12k
I disagree with the notion that making your code testable in isolation serves
no other purpose than to write unit tests. It very specifically forces you to
think about how and why each piece of code is coupled with other code, and
generally requires you to make this coupling as loose as possible, to make
testing in isolation possible. Loosely coupled code is also easier to reason
about and easier to refactor. So testing doesn't just provide you with the
value of tests, it also nudges you toward a saner architecture.

~~~
barrkel
I strongly disagree for the reasons listed in the article; it induces the
construction and testing of abstractions which exist solely to enable testing,
and do not enable simpler reasoning.

Refactoring is even worse. Refactoring after you've split something up into
multiple parts and tested their interfaces in isolation is far more work. Any
refactoring worth a damn changes the boundaries of abstractions. I frequently
find myself throwing away _all_ the unit tests after a significant
refactoring; only integration tests outside the blast radius of the
refactoring survive.

~~~
gonzo41
Maybe your units are too big. Unit tests are tricky because it's about coming
to a personal and team agreement on what a 'unit' of functionality is.

I find the same issue of throwing away tests when I'm writing small-scale
integration tests with JUnit. Usually I'm mocking out the DB and a few web
service calls, so those tests become more volatile because more of their
surface is exposed. But smaller, function- and class-level tests can have a
really good ROI, and they do push you to design for testing, which makes
everything a bit better imo.

~~~
UK-Al05
It's normally the opposite. Unit tests are too small.

If you unit test all of the objects (because they're all public) and then
refactor the organisation of those objects, all your tests break. Since you've
changed the way objects talk to each other, all your mock assumptions go out
the window.

If you define a small public api of just a couple of entry points, which you
unit test, you can change the organisation below the public api quite easily
without breaking tests.

Where to define those public apis is a matter of skill working out what
objects work well together as a cohesive unit.
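
A tiny sketch of this idea (hypothetical names): only the top function is the
public API the unit tests exercise; the helpers below it are implementation
detail and can be merged, split, or reorganised without touching a test.

```python
def normalize_username(raw: str) -> str:
    """Public entry point: the only surface the unit tests touch."""
    return _strip_noise(_lowercase(raw))

def _lowercase(s: str) -> str:          # private: free to reorganise
    return s.lower()

def _strip_noise(s: str) -> str:        # private: free to reorganise
    return s.strip().replace(" ", "_")

# Tests target the contract, not the helpers, so reshuffling the
# internals later breaks nothing:
assert normalize_username("  Ada Lovelace ") == "ada_lovelace"
```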

~~~
bpicolo
The notion of a public API is really more fluid in the context of internal
codebases. It's important to maintain your contract for forwards/backwards
compatibility when publishing a library for the world. When you can reliably
rewrite every single caller of a piece of code, you don't have that problem.

------
notacoward
Unit tests have a purpose, which is mostly to protect _the programmer_ against
_future_ mistakes. Integration and system tests protect the _user_ against
_current_ mistakes.

I've been on projects that focused almost exclusively on unit tests and on
projects that focused almost exclusively on integration tests. The latter were
far better at shipping actually working code, because most of the interesting
problems occur at the boundaries between components. Testing each piece with
layer after layer of mocks won't address those problems. Yay, module A always
produces a correct number in pounds under all conditions. Yay, module B always
does the right thing given a number in kilograms. Let's put them together and
assume they work! Real life examples are seldom this obvious, but they're not
far off. Also note that the prevalence of these integration bugs _increases_
as the code becomes more properly modular and especially as it becomes
distributed.

I firmly believe that integration tests with fault injection are better than
unit tests with mocks _for validating the current code_. That doesn't mean one
shouldn't write unit tests, but one should limit the time/effort spent
refactoring or creating mocks for the sole purpose of supporting them.
Otherwise, the time saved by fixing real problems more efficiently - a real
benefit, I wouldn't deny - is outweighed by the time lost chasing phantoms.

~~~
inertiatic
What, no. That's exactly the opposite.

Unit tests protect you against current mistakes. They're tied to the exact
implementation.

"Right now my function X should call Y on its dependency Z before it calls A
on its dependency B. I know that my method should do this, because this is
how I designed it now. Let me write a test and expect exactly that."

Integration and end-to-end tests will tell you whether your code will still
work in the future, when you refactor.

"Okay, we rewrote the whole class containing the function. Does running my
thing still end up writing ABC into that output file?"

Otherwise I agree with you mostly.

~~~
notacoward
> They're tied to the exact implementation.

If unit tests are tied to an exact implementation, they'll fail on correct
behavior, and that's definitely wrong. It shouldn't matter whether X calls Z:Y
or B:A first, whether it calls them at all, whether it calls them multiple
times, whether it calls them differently. All that matters is that it gets the
correct answer and/or has the same final effect.

Unit tests should be based on a module's _contract_, not its implementation.
This is in fact exactly what's wrong with most unit tests: they over-specify
what code (and all of its transitive dependencies) must do to pass, while by
their nature leaving real problems at module interfaces out of scope.

~~~
inertiatic
a) Most code in the wild doesn't have an explicit output and is instead
orchestration code.

b) Even if you have an output, it's dependent on more complex input of
arbitrary types.

Assume that there's a method that returns a value based on summing the
outputs of method calls on its abstract dependencies.

To do dogmatically correct unit testing you'd pass those 2 mocked
dependencies, and have those methods return the values when the right method
is called on them.

Then you'd assert that B was called on A, that D was called on C, and that the
method under test returns the sum of those returns.
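
The "dogmatically correct" test described above might look like this (a
sketch using Python's `unittest.mock` as a stand-in for whatever mocking
framework; the class and method names are invented):

```python
from unittest.mock import Mock

class Summer:
    """Sums the outputs of two injected abstract dependencies."""
    def __init__(self, a, c):
        self.a, self.c = a, c

    def total(self) -> int:
        return self.a.b() + self.c.d()

# Mock both dependencies, pin their return values, then assert both the
# sum and that the exact methods were invoked -- coupling the test to
# the current implementation.
a, c = Mock(), Mock()
a.b.return_value = 3
c.d.return_value = 5

assert Summer(a, c).total() == 8
a.b.assert_called_once()
c.d.assert_called_once()
```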

As soon as you move into passing implementations of those 2 dependencies, to
anyone dogmatic you're doing integration testing.

Even if the tester isn't being dogmatic, in a lot of cases these inputs are
complex enough that building enough actual inputs that are consistent and
realistic to cover all the cases is prohibitively costly, so they opt for
mocks.

Now, suddenly you just have more code to maintain when making changes, but you
feel good about yourself.

~~~
kingdomcome50
The interface on our object (O) that you are describing is:

    
    
        O -> int
    

Your unit test is concerned with narrowing the interface above to:

    
    
        O -> int // of specific value based on dependencies
    

If O's only dependencies are A and C, this can be rewritten to:

    
    
        A -> C -> int // of specific value 
    

Of course if we assume both A and C, themselves, have dependencies we can
recursively rewrite the above until we have a very long interface, but instead
you have opted to mock (M) them:

    
    
        M(A) -> M(C) -> int // of specific value 
    

You then take it a step further and mock the method calls on each to return a
specific value:

    
    
        M(A) -> int
        M(C) -> int
    

becomes:

    
    
        M(A) -> 3
        M(C) -> 5
    

Okay. Now we can rewrite our interface to:

    
    
        3 -> 5 -> int // of specific number
    

and our test to:

    
    
        3 -> 5 -> 8
    

and make our assertion that the result is indeed the sum of the inputs (not to
mention the ridiculous assertions that specific methods were called within the
implementation). Yikes... No wonder OOP gets a bad rap. All that for what
amounts to a `sum` function.

The designer of the above monstrosity could learn a lot from the phrase
"imperative shell, functional core". It sounds like dogma until you are knee
deep in trying to test the middle of a large object graph!
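
A minimal "functional core, imperative shell" version of the same sum (a
sketch; the fetch/write callables are placeholders for whatever the real
dependencies are): the core is a pure function that needs no doubles at all.

```python
def add(x: int, y: int) -> int:
    """Functional core: trivially unit-testable, no mocks required."""
    return x + y

def report_total(fetch_a, fetch_c, write) -> None:
    """Imperative shell: gathers inputs and writes the output. Thin
    enough that an integration test (or a glance) covers it."""
    write(add(fetch_a(), fetch_c()))

# Core test -- no mocking framework in sight:
assert add(3, 5) == 8
```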

------
nordsieck
One interesting thing that is easy to notice about all of the examples in the
article is that they are absolutely infested with objects.

I don't have anything against objects, per se, but I think they tend to make
unit testing much more difficult to accomplish. The closer your code resembles
pure functions, the easier it is to do dependency injection and unit testing.

~~~
loup-vaillant
> _The closer your code resembles pure functions, the easier it is to do
> dependency injection and unit testing._

If the only thing you inject is data, can we still call that "dependency
injection"?

~~~
Tainnor
I mean, if you pass a HOF to some other function, then that is also a
dependency.

~~~
loup-vaillant
Correct, but I haven't seen it happen often in practice. I mean, pretty much
every project uses HOFs, but few have many of them.

I also tend to avoid HOFs when I can instead pass data around explicitly.

------
shawnps
Early in my career I saw a large legacy project that was riddled with bugs
turn around after a senior developer insisted on having unit tests. No one
else believed in the value of unit testing, so he added them on his own in his
free time. Occasionally another developer would push up some code that broke
the senior developer's tests, and he gradually got the upper hand because he
now had proof that his tests were finding real problems.

Everyone started writing unit tests, and the code broke less. Developers
became more confident in deploying, and eventually most PRs looked roughly the
same: 10-20 line diff on the top, unit tests on the bottom. If there were no
tests, the reviewer asked for tests. It became a fun and safe project to work
on, rather than something we all feared might break at any moment.

I've since started insisting on having them as well, especially when I'm using
dynamically typed languages. A lot of the tests I write in Python for example
are already covered in a language like Go just by having the type system.

~~~
xtracto
I programmed the first 10 years of my career in compiled, statically typed
languages (C, C++, Java, etc.). Then I needed to start programming in Ruby for
production environments, and initially I felt "naked": insecure building
something without being able to compile it successfully. That's when I really
got into unit tests. Bugs as stupid as a "vlue" instead of "value" typo can
plague your codebase in languages like JavaScript, Python, and Ruby, and
testing is the only way to find them (other than... errors in production).

------
blackoil
Functional code with no side effects should be unit tested. Integration code
which glues various components together should have integration tests. If you
feel like you need unit tests but have to create too many mocks, you have
merged functional and integration code; separate them out.

------
goodoldneon
We initially only had integration tests, because many people think they're
better. I get it: itests use the real plumbing, so they're more representative
of your runtime. But they're slooooow -- especially the tests that involve the
DB (which is most of our itests).

So we started adding unit tests. Utesting code that wasn't written for utests
is painful: you often need to choose between refactoring or just patching the
hell out of it. The latter is highly undesirable, since it leads to verbose
tests, failures when you move a module, and the inability to do blackbox
testing.

But utests encourage our new code to be clean and readable. We've found that
functional programming is much easier to test than object-oriented code, and
easier for engineers to grok. We just sprinkle in a little dependency
injection and the whole thing works nicely.

Itests have their place, but utests lead to faster feedback and more readable
code.

~~~
zarathustreal
Weird that you started using a functional approach, noticed that it’s easier
to unit test, and drew the conclusion that unit testing is what led to more
readable code. Consider that functional code is the source of readability.
Also we don’t typically call it “dependency injection” in the functional world

~~~
goodoldneon
You're absolutely right: functional code is the source of the readability. But
writing unit tests incentivizes engineers to keep things functional.

What's a better term than "dependency injection"? What should I call an
argument whose default is always used in production code, but is there to make
passing a mock easy? I'm not trying to be snide -- I'm genuinely curious.

~~~
bigmanwalter
I always just called it a "default argument"
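
The pattern being discussed might look like this (a hypothetical sketch): the
default is what production uses, and a test injects a substitute.

```python
import time

def make_token(user: str, now=time.time) -> str:
    """`now` defaults to the real clock; tests pass a fixed one."""
    return f"{user}:{int(now())}"

# Production call uses the default clock:
# make_token("ada")

# A test injects a deterministic "clock" instead of patching anything:
assert make_token("ada", now=lambda: 1_000) == "ada:1000"
```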

------
solraph
I'm a massive fan of unit testing, but I mostly agree with the observations.
However, I (mostly) disagree with the conclusion. The problems with unit
testing I've seen come from the following anti-patterns in various
combinations.

1) The use of unit tests as the exclusive automated test type, i.e. no
functional tests, integration tests, etc.

2) Test doubles for most or every dependency, even purely functional
dependencies like math libraries.

3) Not using the appropriate kind of test double for the test at hand.
(Dummies vs Fakes, vs Spies, vs Stubs, vs Mocks)

4) The overuse of mocking libraries.

Mocking libraries have their place but, in my opinion, are used approximately
a hundred, perhaps even a thousand, times more often than they should be. I
use them to create test doubles in exactly three scenarios:

1) A dependency that does not have an interface, usually a third party
library. This usually happens in one place only, and is used for writing the
wrapper code test.

2) A dependency that has an incredibly large interface and/or dependency graph
where building a set of stubs or spies is simply not worth the effort.

3) I want to test weird edge cases that aren't reachable any other way, such
as theoretically unreachable code.

These should not be the majority of your unit tests!
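
Hand-rolled doubles for two entries in that taxonomy (an illustrative sketch;
names follow the classic dummy/fake/stub/spy/mock distinction): a stub
returns canned answers, a spy records calls for later inspection, and
neither needs a mocking library.

```python
class StubClock:
    """Stub: returns a canned answer, no behaviour verification."""
    def now(self) -> int:
        return 1_000

class SpyMailer:
    """Spy: records calls so the test can inspect them afterwards."""
    def __init__(self):
        self.sent = []
    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

def remind(clock, mailer, user: str) -> None:
    mailer.send(user, f"reminder at {clock.now()}")

spy = SpyMailer()
remind(StubClock(), spy, "ada")
assert spy.sent == [("ada", "reminder at 1000")]
```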

------
GiorgioG
Code is a liability. Unit tests are code, obviously, and are no less prone to
containing bugs than the code under test. And of course they require
maintenance just like any other part of the codebase.

It feels like the industry has blindly pushed for unit testing everything and
80% or more code coverage as the gold standard.

I’ve given up arguing about the cost/benefit of unit tests at work. I feel
that the teams I’ve worked on over the past couple of decades still produce
about as many bugs as before unit testing came along. I’m not building
pacemakers or aviation software, mostly LOB applications.

Unit tests provide a false sense of security (especially to management). Yes,
sometimes they help catch refactoring bugs, but at what cost?

------
gregdoesit
The article omits a key point that applies when talking about any practice:
the context in which unit testing is performed. The size of the team, the
type of company, the technology, and the impact of product defects all
matter.

For a startup with a small team and few customers building an MVP? Unit
testing is overrated.

For a company with 50 engineers in 10 teams building a product that moves
$500,000/day in revenue? Unit testing may or may not be overrated.

For a company with 1,000 engineers working in the same repo, shipping a
product that moves $50M in revenue per day? Unit testing is most likely
underrated - and essential.

You cannot ignore how the organization works, and the cost of a defect that a
unit test could have caught. I happen to work at the third type of
organization, and while unit tests might not be the most efficient type of
safety net, they are a very big one. We have other types of testing layers on
top of unit tests: integration and E2E tests as well.

Also, one more fallacy in the article: "If we look back, it’s clear that high-
level testing was tough in 2000, it probably still was in 2009, but it’s 2020
outside and we are, in fact, living in the future. Advancements in technology
and software design have made it a much less significant issue than it once
was."

This is not true everywhere. High-level / E2E testing on native mobile
applications in 2020 is just as bad as it was on the web in 2009.

~~~
2rsf
> Unit testing is most likely underrated - and essential.

You are right, but that still doesn't mean aiming for high coverage. In the
big-company case you'll want to cover the interfaces and dependencies more,
and your team's own code less.

I know that part of this will fall under "integration" but definitions are
sneaky.

------
ChrisMarshallNY
You may find this article interesting:
[https://medium.com/chrismarshallny/testing-harness-vs-
unit-4...](https://medium.com/chrismarshallny/testing-harness-vs-
unit-498766c499aa)

I think unit testing definitely has its place. I use it a lot; but I have
learned to moderate my reliance on unit testing.

I tend to prefer test harnesses and manual (or automation-assisted) testing.
I've sometimes written my own unit-testing frameworks, because the "canned"
variants didn't give me what I needed.

The term "unit testing" is quite old. It seems to mean something different,
these days, from what it used to mean.

As far as I'm concerned "not testing my software" is out of the question.

For most of my projects, the testing code is vastly greater than the actual
product code.

------
candiddevmike
Unit tests are insurance for later refactoring and library upgrades. This
lets you avoid premature abstractions, as you can easily swap out lines of
code and verify you didn't break the app. This is especially important when
you aren't the person doing the future refactoring.

~~~
barrkel
Tests are insurance for refactoring which doesn't change the interface that is
being tested.

Refactoring usually changes interfaces. Things are factored differently. The
clue is in the name.

The higher up the stack your test is, the more insurance it gives you for
refactoring. The lower down the stack it is, the more likely it is to be
thrown away or heavily rewritten after refactoring.

~~~
throw_m239339
> Refactoring usually changes interfaces.

No, refactoring shouldn't change public interfaces. The very definition of
refactoring is rewriting code without changing interfaces.

> Things are factored differently.

internally

> In computer programming and software design, code refactoring is the process
> of restructuring existing computer code—changing the factoring—without
> changing its external behavior.

[https://en.wikipedia.org/wiki/Code_refactoring](https://en.wikipedia.org/wiki/Code_refactoring)

You got the definition of refactoring wrong; please get it right, it's
important. If you are breaking a public API, you are not refactoring anything.

Any piece of code meant to be private shouldn't be unit tested at all, only
the behavior of a public interface.

Now internally you might call a third party lib, but that third party lib is
then a separate "unit" itself.

I don't like the term "unit" because it's yet another word that is easily
misunderstood and lost its original meaning with time.

unit testing should really mean "public interface behavioural integrity
testing" or something like that.

~~~
UK-Al05
People expose too much of their code as a public API. The public interface
should be small, and that's what the unit tests should test against.

If users should not access something directly, you don't have to unit test it
directly; you test it indirectly via the public interface.

------
conatus
It's a perennial point, but: unit tests are more about how you think and
rigorously approach a problem than about preventing regressions. I'd say that
even if you wrote unit tests and then deleted them, your code would be better
for it. Indeed, I often don't commit some of the tests I've written in order
to write the code! I'll ship some subset of them. It's a notebook.

They serve as a form of living documentation for the code and help increase
velocity in a build under the right conditions.

For example, you may need to know that a certain function does what you think
it does because the rest of the system isn't even in place yet. You might be
able to approach this from the outside, via integration tests, but doing so
is quite slow compared with a unit test.

This is not to mention refactoring!

The piece seems to be more about the value of mocking and how far to go with
isolation, which is a slightly tangential issue. I agree that in particular
styles of object-oriented programming this becomes absurd.

The article linked below provides a more convincing case, on the grounds that
unit testing foregrounds the system as software, as opposed to the software
as a useful and functional thing in the world, meeting user needs.

However, it omits that being able to _react_ to user needs is actually a
central part of agility. Unit testing allows maximum reactivity to changing
requirements, without regressions in the code, and keeps the code navigable.
Changing customer needs mean your carefully built functional tests are going
to be just as useless and rotted; this has been the case with the large
suites of functional tests (say, in something like Cucumber) I've seen.
Better to test at a lower level, which changes slightly less rapidly.

[https://www.sacrideo.us/the-fallacy-of-unit-
testing/](https://www.sacrideo.us/the-fallacy-of-unit-testing/)

------
drbojingle
Unit testing, yes, it can be a waste of time; it depends on what you're
doing. Unit testing fails when you sink more and more time into trying to
make a test for something because 'duh, unit test everything'. Fact is, some
code changes a lot and some code doesn't change much. Some code is also hard
to unit test and some isn't. The intersection of code that doesn't change
much and code that's hard to unit test should NOT be unit tested, especially
if another form of testing works better. There's just no need to sweat over a
test that won't run enough to justify the work that went into writing it.

The issue here is that developers don't have a sense for economics.
Diminishing returns, marginal utility, and opportunity cost should be studied
by ALL.

------
exdsq
What’s with the recent HN push against unit tests? Yes, you need other tests
too, but they serve a purpose. You can’t build on bad foundations! And the
search space of integration tests is larger, so it’s much harder to get good
coverage of non-happy paths.

~~~
zaphar
I think it's a critical mass of people experiencing the "unit tests pass but
the code is broken" problem. When unit tests are used to test glue code you
end up testing your mocks and nothing else. Mocks are often written with a
framework that encourages one-off implementations per unit test, which
introduces a maintenance burden where all the mocks have to be kept in sync.

Unit tests are very useful but we somehow landed ourselves in a place where we
have lots of line coverage in our unit tests but little confidence that the
code actually works.

There are at least 3 different applications at work where the unit tests are
green while the code is broken, or red when it's not. The cause is almost
always the mocks: they either presume too much, making them fragile, or they
are flat-out incorrect. Despite a lot of test coverage, the developer has
little confidence that their change is correct.

In a sense the writers of the tests were "doing it wrong". The burst of
articles on unit tests and their failure modes are a reaction to the
prevalence of this in our industry.

~~~
commandlinefan
> people experiencing the "unit tests pass but the code is broken"

Arguing against unit testing is like arguing against type-safety (and,
usually, anti-unit-testing people are anti-type-safety people, too). The
presumption always seems to be that if it doesn't solve every problem, it's
unnecessarily slowing things down.

~~~
zaphar
In my experience it's the pro-unit-testing people who are more likely to be
anti-type safety people. The argument against type safety typically goes
something like this:

"I already have to write unit tests, so why would I bother with types? They
don't add any real value."

Both camps are wrong. Unit tests are an unambiguous good. Types are also an
unambiguous good. Both have some rather common failure modes, though, and
guarding against those failure modes is useful.

Unit tests of purely functional code where you only need to provide an input
and validate an output provide tremendous value. The unit test can treat the
code as a black box and as a result the unit test is robust and resilient to
changes in the black box while verifying that the box still produces the
correct answer. Unit tests of code with hidden dependencies that need to be
mocked require a lot more care to construct properly. Mock scripting
frameworks encourage a number of bad habits, like "How many times did this
method get called?" or "Always provide the same answer when this method gets
called." The result is a hundred reimplementations of the same interface
that are at best correct for the current version of the code they are testing
and at worst completely incorrect reimplementations of whatever they are
mocking. They all need to be kept in sync and maintained over time.

A shared in-memory fake will in general provide more value and be less fragile
over time, while also ensuring that your tests actually test the code and not
the particular script you defined for the mock.
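A minimal sketch of the fake-vs-mock distinction described above, with an invented `UserStore` interface: one shared in-memory fake that honours the real contract, instead of a per-test scripted mock.

```python
class InMemoryUserStore:
    """Shared fake: behaves like the real store, minus the database."""

    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        # Same contract as the real implementation: None when missing.
        return self._users.get(user_id)


def greeting(store, user_id):
    # The code under test only sees the interface, so the fake slots in.
    name = store.get(user_id)
    return f"Hello, {name}!" if name else "Hello, stranger!"


# The test exercises real behaviour rather than a scripted answer:
store = InMemoryUserStore()
store.save(1, "Ada")
assert greeting(store, 1) == "Hello, Ada!"
assert greeting(store, 2) == "Hello, stranger!"
```

Because every test reuses the one fake, its behaviour only has to be kept in sync with the real store in a single place.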

------
david_draco
Knowing that some component provably works correctly is underrated, though. I
would favor

\- 33% unit tests (of well-defined units, such as functions that compute some
subtle math logic, or read/write marshalling)

\- 33% integration tests (requiring multiple components to work correctly to
achieve the result)

\- 33% business tests (automatically steering the entire application like a
user would and testing the result like a user would see it)

~~~
david_draco
And the remaining 50%: making it easy for people to give you useful feedback,
such as error messages giving a web link to a user-friendly bug tracker.
Ideally the link already fills in the stacktrace and system info.
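One possible shape for the idea above: turn an unhandled exception into a pre-filled bug-tracker URL. The tracker address and query parameters here are invented for illustration.

```python
import platform
import traceback
import urllib.parse


def bug_report_url(exc):
    # Collect the stacktrace and basic system info into the report body.
    body = "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)
    )
    body += f"\nPython {platform.python_version()} on {platform.system()}"
    # Hypothetical tracker endpoint that accepts title/body query params.
    query = urllib.parse.urlencode({"title": repr(exc), "body": body})
    return f"https://bugs.example.com/new?{query}"


try:
    1 / 0
except ZeroDivisionError as e:
    print(f"Something went wrong. Please report it here:\n{bug_report_url(e)}")
```

A real tracker would have its own URL scheme, but the point is that the user only has to click, not transcribe.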

~~~
kemiller2002
I always argue for this and get told, “that’s not for the user. It adds screen
bloat to the UI.” I never understand why people say that. How about we make it
easier for us to help people? I mean, at the core, that is kind of our job.

~~~
chha
This! Not just for the systems development side of things, but just as much
for normal usage. I tend to do work for fairly small organizations (<500 ppl),
but with layer upon layer of bureaucracy and management, not to mention
cultural and geographic differences.

The main goal of the systems I work on is to provide technical documentation
of complex industrial processes. If things break, it can be pricey and/or
dangerous. Having good information is a must.

However, if a user sees that something is off or just plain missing in the
sometimes 30+ year old documentation, the easiest way to deal with it is to
make a note of it and adapt to it for his or her work. Reporting the problem
back in order to get it fixed is... difficult. There probably is a process for
it, but you probably don't have an account where you can log the time spent on
it.

Having a quick and low-threshold way to report problems would be of enormous
value in the long run.

------
spacekitcat
I'm a big fan of unit tests and, from my perspective, this article
misunderstands what unit tests are for. Unit tests verify that the central
assumptions you've made about some module's behaviour hold water. Unit tests
aren't supposed to "exercise user behaviour" or "verify business logic"; they
verify the theoretical behaviour of a carefully isolated module under specific
conditions. A well-crafted suite of tests makes it easier to reason about your
application's behaviour.

~~~
theshrike79
This is the correct way, but in my experience less-traveled programmers start
striving for that elusive 100% coverage and end up writing unit tests for
completely obvious stuff like constructors (which do nothing other than set
values) or setters/getters.

~~~
spacekitcat
Developers always have to estimate the cost/benefit of how they're spending
their time and overly strict coverage targets get in the way of that.

Test coverage targets make very little sense for a statically typed language
like C#; they're almost a fool's errand. In dynamically typed languages, by
contrast, it's hard not to hit nearly 100% if you're doing honest TDD.

------
je42
How about this: instead of labelling something as overrated, start with a more
positive and differentiated point of view:

\- when to use unit-tests ?

\- when are other tests more useful ?

\- which bugs can unit-tests find ?

\- in what situations is changing code to make it testable some-times
beneficial ?

\- when is it a complete waste of time to make code unit-testable ?

Things to consider when answering these questions above:

\- what is the customer impact if there is a bug ?

\- what is the developer impact if there is a bug ?

\- what is the developer impact of writing tests ?

\- how quickly can we fix it and release a new version ?

\- how quickly can we fix the hotfix in case we messed up ?

\- how can we find out if there is a bug ?

[...]

I think it is great that he starts thinking about the value of unit-tests and
testing in general in his particular context.

But I think it helps to keep an open mind, knowing that some approaches work
better than others depending on the context.

------
bencoder
> There is no formal definition of what a unit is or how small it should be,
> but it’s mostly accepted that it corresponds to an individual function of a
> module (or method of an object).

And therein lies the problem. Remove the idea that the unit is a single
method/function.

I subscribe to the idea that a unit is a unit of functionality. Nothing to do
with the code implementation.

Only mock where you're reaching out outside of your codebase (filesystem,
network, operating system (time, for ex.))

You can still do unit tests for individual functions when you need to work on
a complicated algorithm, but those functions should have no dependencies or
side effects - pass in all the data you need.
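The time example above can be sketched like this: make the clock an explicit dependency so the unit can be tested without patching anything. The function and parameter names are invented for the example.

```python
import time


def is_expired(issued_at, ttl_seconds, now=time.time):
    # `now` defaults to the real clock, but a test can pass any callable,
    # so nothing outside the codebase needs to be mocked or patched.
    return now() - issued_at > ttl_seconds


# Production code uses the real clock; tests supply a fixed one:
assert is_expired(issued_at=100, ttl_seconds=50, now=lambda: 200) is True
assert is_expired(issued_at=100, ttl_seconds=50, now=lambda: 120) is False
```

The same trick works for filesystem and network boundaries: inject the side-effecting collaborator, and the rest of the unit stays plain data-in, data-out.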

------
wanderr
I've worked at really small startups (<20 engineers) and pretty large
companies (>1k engineers).

At small companies we have absolutely been able to get away with 0 unit tests
while maintaining agility - being able to do major refactors quickly even when
working in dynamic languages, while maintaining a high level of quality, even
when operating at reasonably large scale. The key is clear, well written code
and strong ownership from senior engineers who have a deep understanding of
the code they own. On the other hand, at large companies, extensive unit
testing has been invaluable. Code bases are older, ownership changes hands
frequently, new engineers join all the time, old ones move on to other
projects, dozens of teams are calling each others code, refactoring is done by
people who had no hand in writing the code in the first place, dependencies
are higher and harder to track down completely, and it's not realistic to
cover all important functionality with integration tests. Engineers often must
rely on unit tests to prevent others from breaking their code, and to ensure
that they are not breaking someone else's. Yes, unit tests can be highly
problematic and costly to maintain, and they do add friction and time to
initial development, but in these scenarios the benefits outweigh the costs
considerably.

------
squirrel
The units here are poorly designed - for example, I'd expect the
LocationProvider to be responsible for choosing how to get the location, not
the caller - and this makes them hard to test. The solution is fixing the
design, not throwing out unit tests entirely.

The standard reference on this is [http://www.growing-object-oriented-
software.com/](http://www.growing-object-oriented-software.com/) .

------
moomin
The example is a straw man. He refers to a doubling in code size, but that's
just because he's got hardly any code. In a real class, you still have one
method (or two! or three!) on the interface but you've probably got 300 lines
in the implementation. The only thing being "duplicated" is the external
interface.

Might as well retitle this "Why I don't like the Interface Segregation
Principle".

------
makkesk8
Engineering a good code base is very hard and unique to every project. The
mentality, especially in the .NET community, that "loosely coupled code is
good" usually ends in over-engineering and adds more complexity than
necessary.

I've seen far too many cases where added complexity has killed the project's
time budget because things just take so much longer to do. You need to
maintain a healthy balance and actually evaluate whether building this
particular system in a way where we can "swap it out" later is a real use
case.

Every project is unique and "unit testing all the things" might not be the
solution for your project. Where I work, we shifted our mentality when we
highlighted this problem: we now only unit test bugs, write less decoupled
code, and do integration testing instead in the bigger, more mission-critical
services. This works very well for us but, as highlighted earlier, every
project is unique and you should experiment to find what works for your
particular team, because what works for others might not work for you.

------
mnm1
This was obvious to me as soon as I wrote my first unit tests. I was all about
Uncle Bob's philosophy till I realized it doesn't work and discourages
creativity and problem solving. Now I remove most old unit tests as they are a
liability. They break while the code they supposedly test works perfectly.
Maintaining them is a waste of time. I do have some unit tests for specific
algorithms written in functional languages. They can be helpful but are no
substitute for proper testing that actually catches bugs, whatever way that's
achieved. In software, people love fashion. Unit tests were and are to some
extent fashion. Like microservices and other ideas that only apply to the top
0.1% of companies, unit testing is generally useless in apps. There is some
use when used with specific algorithms where the inputs and outputs are always
constant and known. Libraries, programming languages, etc. But for apps, they
are mainly a liability.

------
gchamonlive
Little late to the party, but I wanted to cross-reference another post from
earlier this week:
[https://news.ycombinator.com/item?id=23749676](https://news.ycombinator.com/item?id=23749676)
(Code Only Says What it Does)

I believe unit and integration tests, apart from checking whether code does
what you think it says, serve both as a sort of executable documentation and,
most importantly, as a record of development intent. If you TDD your code
iteratively, reflecting not only the unit in your code but the intent in your
tests, you get a much healthier testing base.

Code coverage is not really a good metric. Intent coverage is more
interesting, but a whole lot more subjective and elusive. All in all, tests
should be written not with your own self in mind, but with whoever might come
later to maintain your code.

------
woah
The article isn't about this, but in my experience, a lot of unit testing is
driven by loosely typed languages, and tests for things that would be a
compile time error in something stricter. Rust in particular has a way of
making many unit tests redundant.

------
spion
Like all things taught and learned by cargo-culting, unit tests have long lost
the original intent.

No, a unit is not a function, or a method, or a class, or a file.

A unit is a clearly separated, non-trivial software component with a minimal
and stable interface to the rest of the software system.

A unit is good if it requires little to no mocking and only a very small
number of messages to test. It should also do something non-trivial - think
small-library size, not class size.

The unit should probably have a README which explains the small interface and
the small number of necessary dependencies. The unit tests should largely
treat the unit as a black box and be based on its promises in the README.

------
kelvin0
Testing should be an exercise in risk assessment.

If module A is very expensive to fix and has a high probability of failing in
production, then of course you want it rock solid and thoroughly tested.

You could build a priority index with something like:

priority for testing module A = cost of fixing A x probability A fails

The other problem is getting management onboard and demonstrating a ROI
justifying the time spent building these tests and using them. I personally
failed at that and still am trying to figure out how to get them to understand
the benefits. I've progressed, but it's one heck of an uphill battle for me.

------
DrBazza
A couple of ways of looking at unit tests:

* a 'journal' \- at the date this test was written, this is how we expect the system to behave

* an ELI5 - if I'm trying to use this method, why am I passing all these complex objects?

Unit tests declare expected behaviour, and should make the developer think
about their methods.

For example, why pass complex objects just to print a string or a count or
similar? And why pass _those_ objects? Could a method be generalised to take a
lambda or an interface instead? Could the method be pure? And so on.
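The "why pass _those_ objects?" question above can be illustrated with an invented example: a method that only needs one number can take a callable instead of the whole complex object, which is exactly what a unit test would have surfaced.

```python
# Before: the function drags in a heavyweight repository object just to
# read one number, so every test must construct (or mock) that object.
def report_size_coupled(order_repository):
    return f"{order_repository.count_orders()} orders"


# After: generalised to accept any zero-argument callable, which also
# makes the unit test a one-liner.
def report_size(count_orders):
    return f"{count_orders()} orders"


assert report_size(lambda: 3) == "3 orders"
```

The awkwardness of setting up the first version's test is the design feedback the comment describes.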

Unit testing isn't overrated, just a bit misunderstood.

------
kccqzy
> Unit tests, as evident by the name, revolve around the concept of a “unit”,
> which denotes a very small isolated part of a larger system. There is no
> formal definition of what a unit is or how small it should be, but it’s
> mostly accepted that it corresponds to an individual function of a module
> (or method of an object).

Heh I feel like this is the crux of the issue. Because there's no standardized
definition of what a unit is, people sometimes tend to choose the wrong unit
to test.

I personally believe it is usually wrong to test a single function or method
in a class. I tend to test the behavior of a whole class at the same time.
Testing each individual method is too white-box, and makes your testing code
too coupled to the implementation. Basically, don't test internal details
(unless the internals are very complicated), just test the externally visible
behavior, which is usually presented as a whole class.
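A small sketch of that approach, using an invented class: the test drives the whole class through a realistic sequence and checks only what a caller could observe, leaving the internals free to change.

```python
class Counter:
    def __init__(self):
        self._n = 0  # internal detail; tests should not inspect this

    def increment(self):
        self._n += 1

    def reset(self):
        self._n = 0

    def value(self):
        return self._n


# Black-box test of the class as a whole, not one method at a time:
c = Counter()
c.increment()
c.increment()
c.reset()
c.increment()
assert c.value() == 1
```

If `Counter` later stores its state differently, this test keeps passing as long as the externally visible behavior is preserved.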

I also don't agree with always mocking out dependencies. If your "dependency"
is just an instance of a different class, then just use the real deal. Sure,
your definition of a unit now encompasses not just your code but your
dependency's code. But if your dependency's code affects the externally
visible behavior of your code, it remains _your responsibility_ when your
dependency changes things and breaks your code. That's what abstraction means:
you present an interface, and your client doesn't need to care about how it's
implemented or what dependencies your code needs.

------
monksy
> focusing on unit tests is, in most cases, a complete waste of time.

Someone is pretty inexperienced to be making that claim. Unit tests help
isolate the expectations and verification to very small levels.

> While these changes may seem as an improvement to some, it’s important to
> point out that the interfaces we’ve defined serve no practical purpose other
> than making unit testing possible.

No, it simplified the responsibility of the class, and it simplified your
tests as well. Now you don't need tests that cover many different scenarios.

> Note that although ILocationProvider exposes two different methods, from the
> contract perspective we have no way of knowing which one actually gets
> called.

Tests don't verify how you use or call other classes/methods. You can do
verification if you want via mocks.

> Unit tests have a limited purpose

Yes, as do integration tests, functional tests, system tests, etc. You
shouldn't be trying to do unit-level testing via an integration test.

> For example, does it make sense to unit test a method that calculates solar
> times using a long and complicated mathematical algorithm? Most likely, yes.

It does if you want to verify that the functionality is setting up the request
correctly.

> Does it make sense to unit test a method that sends a request to a REST API
> to get geographical coordinates? Most likely, not.

That's not a unit test. That's an integration test (if you use a mock) or a
functional test if you want to hit a live endpoint.

> Unit tests lead to more complicated design

It highlights that the original design was complicated or the methods had side
effects (which you should avoid). In his own example he separated the
resources from the functionality.

> Unit tests are expensive

No, they're not. Mocks do not belong in a unit test; that's for integration
tests, and if you have them there you have issues. Unit tests are cheap to
write and quick to run.

> Unit tests rely on implementation details

It's all about how you write your code: if you lump everything together,
that's what you get.

> Unit tests don’t exercise user behavior

Correct, unit tests don't. Functional/feature or above do.

I stopped reading after this. It feels like the guy is just trying to argue
that he doesn't like testing.

------
ddevault
Tests are most useful when they provide fast access to a code path for
repeated testing during development, when the code path would otherwise be far
removed from the normal course of user interaction with the software. This
closes the debugging loop while you're developing new features, when it's
applicable.

The second, less frequent, case where tests are useful is when making large
changes with broad implications, to quickly verify that everything still works
and that you haven't overlooked some subsystem.

The least frequently useful application of tests is regression tests. 99 out
of 100 regressions only happen once. If you add a regression test, it won't
happen again (unless you overlooked something), but it was unlikely to happen
again anyway.

Generally I write the first category of tests when I feel that it would be
useful to solving the problem I'm working on. Then, when I finish the code, I
commit the test, because why not? It's written. It'll never fail again, but
hey. This creates a reasonably manageable collection of tests which, more often
than not, test the more complex (and therefore more fragile) parts of the
codebase, and are a decent representative sample of all of the subsystems.
This provides sufficient test coverage to support the second case. The third
case is so rare that it can be addressed on a case-by-case basis.

The most stable software is software which doesn't change. Keep your scope
small and your complexity low, and don't be afraid to mark a finish line. This
is more effective than exhaustive automated testing.

------
yomly
My own take on this is that it is hard to crystallise what a "unit" is or why.
My cop-out is that it comes down to experience - once you've done this enough,
solved enough problems, and worked on enough teams, you'll get an intuition
for it, as it is more art than science.

I find it to be some mixture of importance and complexity, while always
balancing against the single responsibility principle. A simple `average`
function might be trivial but if it's important to your business logic you
probably want to test that separately, even if it is nested within a "unit".

I find that following some iteration of "functional core, imperative shell"
helps here, as it keeps your core business logic to data transformations, and
transformations on data are easy to test and their concerns easy to reason
about.
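A tiny "functional core, imperative shell" sketch, with invented names and numbers: the business logic is a pure data transformation, trivially unit-testable, while the shell does the I/O around it.

```python
def apply_discount(order_total, loyalty_years):
    # Functional core: pure and deterministic, so it can be unit tested
    # exhaustively with plain inputs and outputs.
    rate = min(0.05 * loyalty_years, 0.25)  # cap the discount at 25%
    return round(order_total * (1 - rate), 2)


def checkout(order_total, loyalty_years):
    # Imperative shell: talks to the outside world, so it is covered by
    # integration tests rather than unit tests.
    total = apply_discount(order_total, loyalty_years)
    print(f"Charging {total}")  # stand-in for a payment-gateway call
    return total


assert apply_discount(100.0, 2) == 90.0
assert apply_discount(100.0, 10) == 75.0
```

The seam between the two functions is also the seam between the test types, which is the point the comment is making.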

This then helps me reason about what is "implementation" and what is a
contract/interface which should be tested robustly.

I guess really the art of writing just enough unit tests is to identify the
seams and boundaries of your abstractions in your codebase, and potentially
accepting that business seams in your codebase may be different from the seams
of your actual software domain -- the latter being possibly more granular.

------
bob1029
I only use unit tests as a temporary critic when developing brand new
abstractions from scratch. With complex implementations, it is incredibly easy
to forget one out of hundreds of constraints and lose track of important
capabilities as you go.

That said, the moment the abstraction is in a clean state and consistently
passing all tests and has been integrated well with existing logic, the unit
tests are deprecated as far as I am concerned. I won't ever explicitly delete
them, but I recognize that the unit tests are potentially just as flawed as
what they are testing (the same developer wrote them after all), and coming
back into that abstraction after 6+ months elapses means I'd probably just
have to rewrite tests from scratch as a mental exercise and in order to
restore my own sanity.

I can recall at least one occasion where I wasted 2 days chasing down a
failing unit test only to find out the test itself was flawed - in the worst
way, random pass/fail based on a race condition between multiple threads that
were part of the testing code. I think that's the biggest danger with unit
testing: other developers assuming the tests you wrote are foolproof, and
being sent on pointless errands as a result.

------
owaislone
Depends on what kind of software you are building. If you are a 5-man team
trying to move very fast and deliver new features, rigorous unit testing can
take enough time that it won't be worth the effort. But if you are working on
something with a couple dozen other developers, where no one person can fit
the whole system in their head, unit testing is invaluable.

A few days ago I saw another HN post that discounted testing as well, stating
that you already need to know what to test in order to verify the behavior.
But that is not even the purpose of tests in large projects maintained by
large teams. Tests serve two purposes in such projects: they act as internal
documentation for contributors that is far easier to keep up to date, as it
complains loudly when it fails, and they detect breakage that would otherwise
go completely unnoticed and end up shipping. This is invaluable for large
projects and for software that is shipped to be installed, like libraries and
installable binaries. If you are writing a service that you deploy in your own
environment to serve some API requests, then sure, too many tests will have
diminishing returns.

------
friedman23
I initially reacted very negatively to this article but decided to entertain
the idea anyway and I have to say I'm convinced of the idea that they are
overrated.

Now, I'm not saying that they are unnecessary, and I definitely believe they
are needed, but I do think they are overrated.

I've worked on many buggy systems that had very good unit test coverage. It
was only with sufficient integration testing that we were able to prevent
constant regressions.

~~~
catdog
Unit tests have their place but in my experience there are also a lot of
places in most codebases where they don't deliver enough value compared to the
effort put into both writing and maintaining them (they can be a huge PITA
while refactoring). I agree that in many cases integration tests are much more
useful in catching regressions.

~~~
mytailorisrich
> _they can be a huge PITA while refactoring_

I see that as a useful feature: it shows you the cost of refactoring and the
fact that this cost includes re-testing everything.

------
zoomablemind
> The primary goal is not to write tests which are as isolated as possible,
> but rather to gain confidence that the code works according to its
> functional requirements.

A very long article that leads to the trivial conclusion that unit testing is
not a substitute for higher-level testing and carries overhead effort of its
own.

It seems to me the author is addressing cases where unit test coverage is
blindly used as a dev project metric. PMs are often not familiar with code
internals, but they need some assurance that the project is on track, so they
try to collect insights from whatever output the automated tools provide.

Unit tests can have 100% coverage but still test the wrong thing. A common
misuse is testing the actual code instead of testing the expected behavior.

Unit tests are a developer's tool, not a PM's metric. Acceptance tests, on the
other hand, should be the common ground; they may be the integration tests or
a whole separate suite tailored to user requirements.

When coding a function, one needs to test preconditions and assumptions before
following the main logic. Unit tests serve the very same purpose by enforcing
assumptions at the unit level, at whatever granularity makes sense. No one
needs to test trivial getters and setters, but one does need to ensure that
objects remain in valid/known states. Thus unit tests untie the developer's
hands to recraft the "unit" without fear that the pearls of correctness get
lost.

Paradoxically, unit testing is a productivity tool, just like diagramming or
sketching prior to coding. Some developers can maintain a perfect mental
picture of their code without any need for such tools; some devs see design
and implementations right away. If I were to inherit their codebase, I'd
rather see their assumptions validated, so I don't need to do the coverage by
eyeballing the code.

------
nojvek
I believe there's only one kind of testing: black-box testing. And it breaks
into two types: integration and unit.

Integration tests validate the interface of the big black box and assert that
it does the right thing, i.e. they assert the behavior customers depend on.

That black box is made up of many smaller black boxes that talk to each other.
So you test the boundaries of each of those smaller black boxes. (Unit tests)

The ROI for testing the big black box is much higher. But when something fails
it’s hard to know what exactly caused it and how to fix it. If you have a
decent number of tests for the smaller black boxes then you know what box
needs to be fixed.

But all good tests are black-box tests, i.e. they don't test the internals but
the interface of using that box.

A well designed system is made up of small boxes that do one thing very well
and work with other boxes. Once you’ve tested them you can forget about them
and work on higher abstractions. They will continue doing what they promised.

------
kazinator
Thorough unit tests are required for code that is open to unforeseeable new
uses. The existing application may not provide enough coverage. Unit tests can
exercise all the requirements of a module that are not yet being used now so
that in the future, someone can do new things with that module without having
to fix it first (thereby risking the existing application).

The argument that it's more productive to test modules through the application
only applies to private, dedicated modules that will only ever be invoked in
support of the use cases arising in the application.

Basically, it boils down to whether or not the piece of code is an independent
product (where "product" could refer to something with "customers" internal to
the organization).

In engineering, all building blocks that are separate products to be
integrated into other products are rigorously tested on their own, whether
they are integrated circuits, or steel cables or whatever.

------
boothby
A plug for an old-is-new-again feature I hacked together in an evening...

[https://github.com/boothby/dissert](https://github.com/boothby/dissert)

I like assertions. They're a really good alternative to unit tests because
they can be used in a real environment without the need for maintenance-
intensive mocking etc. But assertions have a significant performance cost,
especially when they involve consistency checks on large datastructures.

So, a pattern that I've found useful is heavyweight asserts that can be
enabled / disabled through external means, in conjunction with high-coverage
integration testing. This is really easy in some languages (c/c++, for
example, using the preprocessor) and can be more fragile in languages like
python (hence the plug above -- which isn't the best way, but is a way, to
achieve this 'best of both worlds' testing).
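One minimal way to get this pattern in Python (not the linked library's mechanism, just a sketch with invented names): gate the expensive consistency checks behind an environment variable, so they run during high-coverage integration testing and cost nothing in production.

```python
import os

# Enabled externally, e.g. `HEAVY_CHECKS=1 python app.py`.
HEAVY_CHECKS = os.environ.get("HEAVY_CHECKS") == "1"


def check_sorted(xs):
    # O(n) structural check: too slow to run unconditionally on every
    # call for large data structures.
    assert all(a <= b for a, b in zip(xs, xs[1:])), "list not sorted"


def merge(xs, ys):
    merged = sorted(xs + ys)  # stand-in for a real merge implementation
    if HEAVY_CHECKS:
        check_sorted(merged)  # heavyweight assert, only when enabled
    return merged


print(merge([1, 3], [2]))  # [1, 2, 3]
```

The C/C++ equivalent is the classic `NDEBUG`/preprocessor toggle the comment mentions; in Python the check is still evaluated at the `if`, which is why more invasive approaches like the linked project exist.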

------
chrischattin
Yes! Thank you!

I've been saying this for years.

The whole point of testing is to make sure you aren't breaking something when
you add a feature, refactor, or delete old code. Its purpose is to speed up
development. Excessive unit testing just creates a brittle test suite, and
adds more work without much benefit. It slows you down.

As a Rails dude focused on startups, what you care about is iterating rapidly
and what the user sees. Therefore, I focus on integration tests that run the
whole stack. That lets me mess with the implementation code without re-writing
the test suite. At the same time, it provides regression protection and a good
place to start troubleshooting. Plus, Rails already has tests for the
"plumbing".

Tests should serve the developer, and speed up the iterative process, not add
work to the project b/c of a dogmatic adherence to TDD or unit test all the
things design.

Just my (unpopular) opinion.

------
mlthoughts2018
This is comically bad: way too long and myopic, and the points aren't even
useful.

> “ unit tests are only useful to verify pure business logic inside of a given
> function.”

Yep, that's one of the most important things you need to verify. You should
_also verify_ integration test success, and then the unit test allows you to
immediately observe where the failure is: is it a pure business logic failure
when all external factors were mocked? Or is it an integration failure? Or is
your dependency flaky & untrustworthy?

Without unit tests you can’t (a) develop business logic in isolation from the
external resources it will integrate with or (b) easily isolate what is a
business logic error vs what is an integration error.

It’s just so dumb to use language like saying unit tests “are only useful” for
this. That’s a hugely valuable thing to be useful for!

> “ no practical purpose other than making unit testing possible.“

This is incredibly bad circular reasoning. You must first already agree that
unit tests confer no value, only then is this considered a point by the
author. But if unit tests do add value (and they really do) then refactoring
to facilitate unit tests also adds value!

The point about testing a hidden implementation is mixed. On one hand the
example used here is just a bad example. On the other hand, testing a hidden
implementation can be a very good thing because it assists the act of
development in the first place. The tests help the person writing the business
logic by capturing a proof of correctness as they write. Maybe it's debatable
whether such tests should be removed like scaffolding when the work is done,
if it's a hidden implementation, but that's really a local decision for a team
to make. Sometimes it's good to leave those tests of hidden implementations
because they add extra protection for changes that can have unintended
consequences.

> “unit tests are expensive”

yawn.

------
jb3689
Seeing Java examples after this does not surprise me. Testing is hard in Java
and it forces you to change your design. Maybe better put: testing stateful
objects is hard.

I do a lot of dev in Ruby and testing there is super easy and powerful. Say
what you will about monkey patching, open classes, and reflection, but it does
make it very easy to write great testing libraries. I'd argue that testing
libraries in Ruby lead to better design (e.g. using more methods, making
things single purpose, thinking about the interface first can lead to better
naming etc)

That said, I don't test as much as I used to. Unit testing is really good for
testing algorithms (e.g. transform this complex JSON; rebalance load). Outside
of that, some light end-to-end testing will catch most other things, and you
can use staging and gradual rollouts to derisk bigger changes

------
x87678r
Software is not all the same. Some people are writing Auto-Pilot software,
others are writing some useful tool for a few people to speed up some data
cleaning for a few months. There are no rules that you _must do this for
software_ when we're all doing different things.

------
dmos62
I see unit tests as executable documentation. So it's nice when code you're
getting familiar with has good tests. That said, I don't write them myself,
except when the code is complicated and difficult to predict, but then I'm
more likely to refactor it.

~~~
disgruntledphd2
The author agrees with you, in that he makes the same recommendation around
testing the functionality rather than the functions.

------
Dowwie
How much wasted effort is expended by testing policies? Consider, for
instance, managing development by setting an arbitrary target % coverage for
unit tests. As a team refactors code, unit tests will break, and the test
coverage requirement will compel someone to revisit old tests and refactor
them. Refactor more code and the same tests break again! You must refactor
the tests -- yet again -- to reflect the latest changes. Test coverage
policies are very costly.

On the other hand, unit testing does present some benefits. Consider a
function that cannot be easily unit tested. This tells you that it's probably
too complex and would benefit by refactoring into more manageable parts. Also,
business logic unit tests are beneficial.

------
throw_m239339
More like unit testing in some languages is a freaking chore because it
involves complex unit test frameworks.

Unit testing should be as easy as writing a few lines of code, just to ensure
that an interface behaves as expected, but it's often not the case: many
languages make it incredibly difficult, for various reasons, or you end up
writing walls of code (thus behavior, ironically) with mocks, stubs, fakes and
whatnot, just to test a single method...

"Testing experience" should absolutely be a core concern for any modern
language. More generally the notion of "developer experience" should be a core
design aspect in any new language and not be delegated to third party tools.

What is the language with the best testing experience?

------
ak39
_Most_ systems in production already have key use cases that stand in for unit
and system testing. Find those and use them after every new feature or change
to the system.

I support an existing reporting system with several hundred tables and
thousands of interdependent stored procedures. This system has a fairly often-
used "global report" which touches nearly 90% of the tables. We run this user
report after all changes to the database with a few swaps in parameters
(history vs current etc). If this report and a few others "pass" (results are
the same as previous reports), we know that the changes are safe. When the
reports show differences, we confirm that the changes were intended. If not,
something broke.
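The shape of that check can be sketched in Python (everything below is an invented stand-in for the real report, just to show the pattern): run the report, keep the previous result, and flag any difference.

```python
def run_global_report(db):
    """Stand-in for the real global report; summarises every table.

    Here `db` is just a dict of table name -> list of values."""
    return {table: sum(rows) for table, rows in sorted(db.items())}

def diff_reports(previous, current):
    """Return {key: (old, new)} for every value that changed between runs."""
    return {k: (previous.get(k), current.get(k))
            for k in set(previous) | set(current)
            if previous.get(k) != current.get(k)}
```

Any non-empty diff is then either an intended change to confirm or a regression to investigate.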

~~~
renke1
Reminds me of snapshot testings.

------
travisgriggs
Giggle. How things evolve. "Unit Testing" came out of the Smalltalk work
around 1997. I was there.

It was Kent Beck's brainchild. The first xUnit framework was SUnit.

What was a unit? Kent was never absolute about this; I always felt that was
because, as a consultant, he wanted to peddle the theory far and wide. But the
early examples all had a pretty strong trend towards "a unit is an object."
Nowadays "what is an object" is pretty loose; I just finished some Dart
tutorials where an object is a way of "organizing our code into smaller
reusable pieces", which, ironically, I did in Fortran77 with common blocks and
well-factored files. But
in the Smalltalk world, which was "objects all the way down", an object was
small amounts of imperative behavior bound to data, where computational
results were achieved via an approximation of the way cellular biology solves
problems: lots of little clumps of data that achieve a larger result by
sending messages to each other.

This process of turning behavior and algorithms into things, or reification,
was sometimes easy and sometimes hard. An object for Point, obvious. An object
for SortCollationPolicy, less so.

What Unit Tests did was help programmers design good objects. Beck said this
in eXtreme Programming eXplained. He said that traditional QA departments
would laugh themselves silly at what unit tests did. But that the value was
that it drove good design. And that in a collaborative (pair programming)
environment, it helped communicate the design intents around objects to fellow
developers. I did the Smalltalk koolaid fest for 20 years. I found Unit
Testing to be immensely effective. It made my designs more cellular again and
again. When my designs were solid, I had fewer bugs.

As a mechanical engineer, I still see similarities between unit tests and
geometric dimensioning and tolerancing, a practice in the mechanical world
that also swam against the current of conventional testing practices and left
some shaking their head.

In today's world, where OO design is more of a "small unit of organization",
I'm not surprised that unit testing also seems meh.

------
keithnz
These kinds of articles criticising unit testing always seem to have more to
do with the way people go about composing their software and its design.
Interface explosion and unnecessary abstractions, as presented in the first
part, are the nightmare people get into early in their software development
journey. But that's a problem with design. Possibly unit testing led you down
that path, or perhaps that path was shown to you by some advocate of unit
testing. Even the final code in the article seems too messy for my tastes; it
could be simpler and more modular, which would then make the testing more
straightforward, and you'd end up with pretty much the same thing in the end.

------
yeahgoodok
I've been saying this for years and suffice it to say it has been a career-
limiting move for me. If any new developers are reading this be warned that
these are considered dangerous ideas. Know your audience before sharing
controversial opinions.

------
pfdietz
Unit testing reveals latent defects, defects that can't be triggered by the
system as it currently exists. Each such defect is a problem waiting to pop
out in the future, as the system changes and the latent defects become
exposed.

~~~
alecbenzer
Yes but discovering those hypothetical defects is usually less important than
discovering the actual, currently present defects.

------
pachico
I can imagine you can argue in favour of any statement, given your custom, ad-
hoc anecdotal use case. However, in general terms, unit testing (together with
a very clear domain design) is one of the most solid pillars of your software.

------
danans
> Does it make sense to unit test a method that sends a request to a REST API
> to get geographical coordinates? Most likely, not.

Isn't a "unit test" which sends a request to a REST API almost by definition
an "integration test", or even a "live server smoke/staging test", since it is
testing the integration between separate live systems (the requesting code and
the server).

If anything, what should the unit tested if anything is the code that
generates the request, which should be separate from the code that sends the
request to the server. But only if that code is non-trivial.

------
cheez
I use unit tests for complex units of code and depend on frameworks to do
simple things correctly. Like I don't need to write unit tests for inserting
things into a database. Unless there are complications around it.

------
roca
One argument made for unit testing which this post doesn't address is that
it's easier to understand and debug unit test failures.

To which I say: yes it is, but quality debugging tools (which don't exist in
many domains, and aren't used nearly as much as they could be where they do
exist) can mitigate this issue. I'm talking about tools like [https://rr-
project.org](https://rr-project.org) and (self-promoting!)
[https://pernos.co](https://pernos.co).

------
unbendable
I read it and I already knew that I'd had this discussion a bazillion times;
likewise, every comment on HN probably won't be anything new to me.

Some topics are just made to be discussed forever

------
exdsq
I’m a software test engineer and spend quite a bit of time performing code
reviews. One thing I always ask for is proof that the code works - this
requires a combination of unit, integration, and e2e tests.

Unit tests are great to show that some code works in isolation, and then a few
integration tests can cover the functionality. You should at a minimum cover
each path through a function with boundary cases, which would be far too much
effort to do via integration tests.

------
KingOfCoders
"however many find it encumbering and superficial."

Author doesn't want to write unit tests. There are two factions of people on
the internet, those that write unit tests and those that don't like to.
Nothing has changed in the last 15 years.

Whenever I write many unit tests I feel happier and more confident. If I
don't, and later catch up on writing unit tests, I always find bugs in my
code. I have never failed to find bugs when increasing test coverage from a
low start.

------
bpyne
Excellent, thought-provoking piece that comes at a good time for me. I've been
looking at Python's unittest for a specific system I've been developing over
the course of 12 years at work. It became clear recently that we need more
automated tests. However, as I started to design a test suite in unittest, the
scope of it became overwhelming. Looking at it as a suite of functional tests
makes it more reasonable to write.

------
APhoenixRises
What's missing from the title of these articles is "to me" or "in the context
of my project". The approach to testing a CRUD/line of business app is tested
is different than the approach for testing something like EC2, which is also
different than how a library/package is tested. Without context, it's really
easy to argue exceptions or counter-examples to any opinion.

------
alexandercrohde
[https://blog.alexrohde.com/archives/178](https://blog.alexrohde.com/archives/178)

------
vannevar
It's worth noting that what the author describes as functional testing was, in
fact, what unit testing was originally intended to be. "Unit" referred to a
unit of functionality, not a unit of code. But that was an ambiguous concept,
and it was simpler and easier to just write tests for each unit of code
(typically a method), and so that's the approach that has prevailed.

------
KingOfCoders
Unit tests are a tool to solve a problem. If you don't have the problem, don't
use the tool.

If you can do major refactorings without impact on productivity, don't have a
QA department doing manual tests, junior developers can deploy to production
with confidence on their first day and customers are happy about the quality
of your product, don't solve a problem that isn't there.

------
alexandercrohde
One thing I've learned is that unit testing seems much, much more painful than
it has to be when you are using too many classes.

When you are writing code as pure functions (i.e. stateless), it's actually
much less painful. In the provided example, I would _never_ write a class to
curl a website and parse json.
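A quick Python illustration of that style (the payload shape is invented for the example): the parsing logic is a pure, stateless function, so its unit test needs no class, no mock, and no network.

```python
import json

def parse_sun_times(payload):
    """Pure function holding all the logic worth unit testing.
    The payload shape is invented for illustration, not a real API's."""
    data = json.loads(payload)
    times = data["results"]
    return times["sunrise"], times["sunset"]

# The curl/HTTP part lives elsewhere as a one-liner; the unit test for the
# interesting logic needs nothing but a string:
assert parse_sun_times(
    '{"results": {"sunrise": "06:55", "sunset": "16:29"}}'
) == ("06:55", "16:29")
```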

------
royosherove
Unit tests have a lot of value, in an overall testing strategy:
[https://pipelinedriven.org/article/a-pipeline-friendly-
layer...](https://pipelinedriven.org/article/a-pipeline-friendly-layered-
testing-strategy-amp-recipe-for-dev-and-qa)

------
DominikD
I used to think that unit tests were a waste of time. But they find tons of
bugs in my code, so I simply can't ignore that reality by posting some random
untestable code and claiming that it proves that tests are (most of the time -
let's cover our bases, folks!) a waste of time.

------
mytailorisrich
Unit tests are very important and very cheap. They save a huge amount of grief
later on.

As with everything, the pros and cons should be weighed to reach a practical
and effective approach. For example, there is rarely a need to test every
single function.

Overall, I don't find this piece very insightful.

~~~
sys_64738
> For example there is rarely a need to test every single function.

And that's where the bugs end up being.

People suck at writing code so it needs to be reviewed by peers and thoroughly
tested. If you said that in an interview for code you'd written then I'd point
to the door.

~~~
theshrike79
So you would write test cases for all the methods and constructors for a class
like this:

    
    
      class Foo{
        public Foo(initialValue) {
            this.value = initialValue
        }
      
        public setValue(value) {
            this.value = value
        }
      
        public getValue() {
            return this.value;
        }
      }
    

What's the point, what are you testing? That the language's most basic
operations still work?

~~~
eithed
You'd test those cases. This way, if you change your code to be

    
    
      public setValue(value) { this.value = value + 5 }
    

then your tests will start to fail.

Treat tests as a contract.

~~~
alecbenzer
Can't tell if serious...

~~~
eithed
I'm serious. Of course you don't test that language operations work, but
you're testing that a given method does what it's supposed to do. In this case
your method sets the value of a property on a model. It doesn't matter that
you're doing it via assignment - you could be doing it in any number of other
ways. You want to test that for a given model, after calling that method, the
model's property will change to the given value. This way, if you change the
implementation of setValue your test will still succeed. If it starts doing
something else, the tests will fail. And of course, this method can be used in
your feature tests, so those will start failing too (but that's beside the
point, I guess)

Of course it's also a balancing act - should you immediately write a test for
this? I try to.
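As a sketch of this "contract" view (translating the thread's pseudocode into Python; names are illustrative):

```python
class Foo:
    """Python rendering of the thread's Foo: a trivial getter/setter pair."""
    def __init__(self, initial_value):
        self._value = initial_value

    def set_value(self, value):
        self._value = value

    def get_value(self):
        return self._value

def test_set_then_get_round_trips():
    # The contract: whatever set_value stores, get_value returns.
    # If someone changes set_value to `self._value = value + 5`, this
    # test fails, regardless of the implementation details.
    foo = Foo(0)
    foo.set_value(42)
    assert foo.get_value() == 42

test_set_then_get_round_trips()
```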

------
tasubotadas
I feel that this is relevant [https://dev.tasubo.com/2020/06/underappreciated-
superpower-a...](https://dev.tasubo.com/2020/06/underappreciated-superpower-a-
look-into-useful-software-tests.html)

------
Marciac
I believe unit tests have a fundamental flaw: you're trying to solve a code
problem with more code. Yes, it can work out in the end. But it also fails the
way code fails: wasted time, over-engineering, legacy code, too-big codebases,
bugs, etc.

------
throw_this_one
I think Unit Testing is good for when you have big teams and changes that can
go across knowledge boundaries... so the tests can keep people in check from
making changes that mess up other parts of the codebase inadvertently.

------
ulisesrmzroche
I like the writing but he’s making shitty unit tests to make a point. The
criticism needs to be applied to well made unit tests.

I do agree with the general idea that integration and e2e tests are more
important

------
mberning
That’s all well and good until you need to upgrade some massive project to a
new JDK or your Rails app from one major version to another. Then you will be
wishing you had significant coverage.

------
smitty1e
Word of the day: autotelic

"(of an activity or a creative work) having an end or purpose in itself"

A slightly improved word choice over "autoerotic" when the design choices are
getting too self-indulgent.

------
jasonlhy
I think the value of unit tests is to test the logic within a software
abstraction instead of relying on external dependencies, but I believe many
unit tests exist just for the sake of unit testing.

------
burlzad
Most of the problems defined in the article tend to go away with an
interpreted language such as JS. In JS unit testing is super cheap and super
easy, with very common patterns.

------
bumby
Assuming 100% coverage is a well-intentioned goal but pragmatically
unattainable, what are some ways to prioritize what gets unit tested?

E.g., unit test every identified hazard in a hazard analysis

------
maps7
I read most of the article and I am not convinced. I'll stick with the unit
tests. In my experience they improve code quality, maintainability and reduce
bugs.

------
robertlagrant
Code that's written a certain way just to make it unit testable is less common
in something like Python, where mocking is super powerful.
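A sketch of what that looks like with the standard library's `unittest.mock` (the weather URL and payload are made up): code written with no test seams at all can still be unit tested by patching in place.

```python
import json
import urllib.request
from unittest import mock

def get_temperature(city):
    """Plain code with no test seams; the URL is invented for illustration."""
    with urllib.request.urlopen(f"https://weather.example.com/{city}") as resp:
        return json.load(resp)["temp"]

def test_get_temperature():
    # mock.patch swaps urlopen in place, so no interface extraction or
    # dependency injection is needed just to make this unit testable.
    fake_resp = mock.MagicMock()
    fake_resp.__enter__.return_value = fake_resp
    fake_resp.read.return_value = b'{"temp": 21}'
    with mock.patch("urllib.request.urlopen", return_value=fake_resp):
        assert get_temperature("kyiv") == 21

test_get_temperature()
```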

------
billman
Any good idea in software can and will be abused to the point where it seems
like a bad idea. It's all about context, people.

------
DeonPenny
Try reformatting a large code base without unit tests. You'd pull your hair
out trying to rename a variable.

~~~
pjmlp
Thankfully using strongly typed languages and IDEs keeps my hair in place.

------
pelasaco
wow, I keep reading these "testing is overrated" pieces (second article this
week), and I don't see how I would survive maintaining a 10K LOC Rails
application without unit testing my models, services, finders, etc. I guess
Unit Testing is Overrated; Long Live Unit Testing!

------
slim
unit tests are the equivalent of repl for languages that are not designed. you
need unit tests to close the feedback loop. you can't simply write code for
days without having any feedback. some code simply does not have an obvious
immediate visible feedback on the ui

------
mikl
By whom? Compared to what?

~~~
alecbenzer
Compared to integration testing and end-to-end testing.

------
NCG_Mike
I'd suggest that someone who thinks that UTs are over-rated is over-rating
their own ability. I'd also suggest that those people are the exact people
that should have extra testing done on their code.

------
edpichler
This title is wrong. It's not the tool's or technique's fault.

Unit Testing is not overrated. It may have been overrated by some of us,
sometimes or most of the time, depending on who you are.

------
mdoms
I weep for the state of our industry.

------
js8
I agree with the article a lot.

First of all, I also do not subscribe to the idea that developers shouldn't
worry about tests other than unit tests. I think developers should be
responsible for producing working code, end to end, and having automation and
QA is just a nice bonus, additional verification.

Anyway, regarding unit testing. Lot of the objections against unit testing
becomes clearer once we start talking about it in a formal way, best in
functional terms.

Let's have a function y=f(x) that we want to test. In unit testing, we
generate some examples (x1,y1),.. that we run this function with and compare
the output.

If we have two functions, f and g, where z = g(f(x)), even if we unit tested
each of them separately, we can still fail, because what we didn't test was
whether the range of f is indeed a subset of the domain of g. In fact, that
cannot be unit tested, since unit tests only verify logic, not the domains and
ranges of the functions.

That's the first objection to unit tests, there are holes in integration. This
is especially insidious because if two different people wrote the two
functions, they can each have different assumption on the domain and range,
yet they won't detect it by unit testing, because they both wrote the tests
that only operate under their respective assumption. (BTW, this also shows
that code coverage metrics are meaningless, unless you can cover all the code
executed down to the libraries, because you can always leak the coverage
through the datatype and vice versa.)
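A small Python illustration of that hole (both functions invented for the example): f and g each pass their own example-based unit tests, yet the composition z = g(f(x)) fails for inputs neither test suite covered.

```python
import math

def f(x):
    # The range of f includes negative numbers...
    return x - 10

def g(y):
    # ...but the domain of g is y >= 0.
    return math.sqrt(y)

# Each function passes its own example-based unit tests:
assert f(15) == 5 and f(20) == 10
assert g(4) == 2 and g(9) == 3

# Yet the composition fails for inputs nobody's examples covered:
def composes_for(x):
    try:
        g(f(x))
        return True
    except ValueError:
        return False

assert composes_for(15)      # f(15) = 5 lies in g's domain
assert not composes_for(3)   # f(3) = -7 does not: sqrt of a negative
```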

Second objection to unit tests is how do you generate the test cases, in
particular, the output? Well, the unit to be tested has to be reasonably
small. This means more work (more mocking) and more holes in the integration
assumptions, as above.

Personally, I believe property based testing is superior in all respects and
should replace unit testing. Property based testing forces you to write down
assumptions that you have written your code with, and it scales better because
you decouple generation of test cases from the assumptions themselves.

So formally in property based testing, we would create a generator of input
cases for x, and also a property - a function that verifies what the y looks
like (possibly even given x). In fact, this approach completely subsumes unit
testing, because for each of the test cases (x1,y1),.. as above we can just
write a property that checks if the output of f is y1 given the x1, and so on.

However, property based testing is stronger. We could also produce a property
that would state that input to g has to be in its domain. Then we could easily
detect the above problem with composing f and g, just by verifying the
properties for our generated test cases. So it resolves the first objection.

That's the beauty of the properties, they really test the data, not the logic.
I believe that what we really need to test as developers is the assumptions
that we put on the data structures we work with, rather than logic that the
functions do. If you want to verify the logic, read the code you have written
again.

The second objection to unit testing is also somewhat resolved, because we
don't have to produce complete test output, we only verify some of its
properties. Also, separating the generation of inputs from the properties lets
you naturally reuse outputs from other parts of the program for testing. You
only need to get the input generation right; the other things will sort of
test themselves. So for instance to test g, we don't need to create an extra
input generator, we can just live with the outputs of f. In essence, property
based testing just makes the tests themselves compose better.
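A minimal hand-rolled sketch of the idea in Python (real frameworks such as Hypothesis or QuickCheck do this far better; the functions and generator here are invented for illustration): a generator produces random inputs, and a property must hold for every one of them.

```python
import random

def run_length_encode(s):
    """Function under test: simple run-length encoding."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1
        else:
            out.append([ch, 1])
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

def check_property(generate, prop, runs=200):
    """Minimal property-based loop: random inputs from a generator,
    a predicate that must hold for all of them."""
    for _ in range(runs):
        x = generate()
        assert prop(x), f"property failed for {x!r}"

gen = lambda: "".join(random.choice("ab") for _ in range(random.randrange(20)))
# Round-trip property: decode(encode(x)) == x for every generated x.
check_property(gen, lambda s: run_length_decode(run_length_encode(s)) == s)
```

Note how the property subsumes example-based unit tests: any single (x1, y1) pair is just a property that happens to check one hard-coded input.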

It always bothers me a lot when writing unit test, how little bang I get for
the buck. I need to come up with all these test examples, and they usually
only cover a small piece of code. What often happens is that I actually know
(in my head) the testing generator and properties, it's just instead of
writing them up so that computer would understand them as well, I just write a
few examples. It feels totally wrong.

Ideally, I would love to see framework that would let some of the verification
properties (from property based testing) to live in the code as additional
runtime assertions. I think that would be much more practical approach than
having a lot of unit tests, and it would tie nicely with defensive
programming.

Finally, I would like to point out to my earlier, more general comment I made
about testing:
[https://news.ycombinator.com/item?id=23470259](https://news.ycombinator.com/item?id=23470259)

------
korijn
opinions on blog posts are overrated

------
palotasb
Well thought-out article with good points and deductions! I have a few
counterpoints and I think the example is not the best way to argue against
unit testing as there are better ways to implement the same features and write
better unit tests without bloating the code.

Looking at the _SolarCalculator_ class, I would go another way first and
refactor it like so:

    
    
        public class SolarCalculator
        {
            public static SolarTimes GetSolarTimes(Location location, DateTimeOffset date) { /* ... */ }
        }
    

1\. Made the method static (make it a free function in other languages)

2\. Take an explicit Location parameter

3\. Return a SolarTimes object directly: not async, not a Task<SolarTimes>,
and remove Async from the name

4\. Drop the now-unnecessary LocationProvider class member

This becomes more easily unit testable, without any excess Arrange steps at
the beginning.

    
    
        public class SolarCalculatorTests
        {
            [Fact]
            public void GetSolarTimes_ForKyiv_ReturnsCorrectSolarTimes()
            {
                // Arrange
                var location = new Location(50.45, 30.52);
                var date = new DateTimeOffset(2019, 11, 04, 00, 00, 00, TimeSpan.FromHours(+2));
    
                var expectedSolarTimes = new SolarTimes(
                    new TimeSpan(06, 55, 00),
                    new TimeSpan(16, 29, 00)
                );
    
                // Act
                var solarTimes = SolarCalculator.GetSolarTimes(location, date);
    
                // Assert
                solarTimes.Should().BeEquivalentTo(expectedSolarTimes);
            }
        }
    

The GetSolarTimes function is now purely computational and has no plumbing at
all (stealing terms from chimprich). I think the original author would also
agree that unit testing _SolarTimes GetSolarTimes(Location location,
DateTimeOffset date)_ has none of the problems that unit testing _async Task
<SolarTimes> GetSolarTimesAsync(DateTimeOffset date)_ had.

(Added benefit: the interface is more flexible, it can be reused in more use
cases without modification but that is not the point.)

I find that such a refactoring often solves the whole problem. It might seem
like we just swept the problem under the rug and forced the calling code to
take on the complexity (and the tests, interfaces, mocks etc.) that we
discarded, but in practice this is often not the case.

    
    
        // Uses original async implicit-location interface
        // (assumes an existing solarCalculator instance)
        var solarTimes = await solarCalculator.GetSolarTimesAsync(date);
    
        // Uses proposed non-async explicit-location interface
        // (assumes an existing locationProvider instance)
        var solarTimes = SolarCalculator.GetSolarTimes(await locationProvider.GetLocationAsync(), date);
    

The reason we can often get away with this in practice is that the complexity
increase in the caller is small. We did _not add additional state_ to the
caller, we did not push more testing/mocking complexity to the caller. The
assumed _locationProvider_ instance in the caller _replaces_ the
_solarCalculator_ instance in the caller. If testing/mocking
_locationProvider_ is required, _solarCalculator_ ought to have been
tested/mocked too. We require the caller to test/mock something else, not
something new.

If the original _async Task <SolarTimes> GetSolarTimesAsync(DateTimeOffset
date)_ interface is required nonetheless, it can be implemented as a pure
"plumbing" function. As such I would agree that unit testing it would provide
less value than integration testing. A simple pattern that can be applied
here, instead of an _ILocationProvider_ interface and all the baggage that
comes with it, is a _Func <Task<Location>>_ or lambda. This allows both
testing the instance with custom location providers and unhindered usage of
the _SolarCalculator_ class without always needing to inject a dependency.

    
    
        public class SolarCalculator
        {
            private readonly Func<Task<Location>> _locationProvider;
    
            // default constructor for normal usage
            public SolarCalculator() {
                var internalRealLocationProvider = new LocationProvider();
                _locationProvider = () => internalRealLocationProvider.GetLocationAsync();
            }
    
            // constructor for custom locations and testing
            public SolarCalculator(Func<Task<Location>> locationProvider) {
                _locationProvider = locationProvider;
            }
    
            // Gets solar times for current location and specified date
            public async Task<SolarTimes> GetSolarTimesAsync(DateTimeOffset date) { 
                return GetSolarTimes(await _locationProvider(), date);
            }
            
            public static SolarTimes GetSolarTimes(Location location, DateTimeOffset date) { /* ... */ }
        }
    

(Sorry, I haven't implemented the IDisposable pattern for
internalReaLLocationProvider and I might have misplaced an async keyword or
two because C# is not my most recent language.)

To support the "pyramid-driven" paradigm I argue that the most complicated
part of this feature is the solar time calculation and it would well deserve a
large test suite containing many test cases like
_GetSolarTimes_ForKyiv_ReturnsCorrectSolarTimes_ above (edge cases, diverse
locations, etc.). Conversely, the higher-level functions don't need this level
of testing. Since they contain no complex logic, I usually assume that if they
work for one input, they will work for any other. Testing the async,
automatic-location version with the simple Kyiv-based input is enough; there
is no need to test it with the midnight sun and all the same edge cases as the
base function.

The point I'm making is that unit testing is not as overrated as the original
example suggests. The code can be modularized better ( _not_ by making
everything an interface), with a well-unit-testable "computational" part and a
part which is mostly "plumbing". I agree with those saying that the second
part benefits more from integration testing than unit testing, and I can agree
with keeping them "as highly integrated as possible, while keeping their speed
and complexity reasonable". But I insist that unit testing the 1st part and
writing it in a way that it is unit testable is important.

~~~
dorianh_
I totally agree with your refactoring. The code you wrote is simpler, less
surprising, reusable, easily testable, and so on.

Unfortunately I think that the article only shows that the OP made poor design
choices (which he probably wouldn't have made if he had used TDD, ironically).

Even if the article is well written, the code shown in the three first blocks
kind of invalidate the whole argumentation :/

------
agentultra
I think what a lot of people tend to miss with this discussion is that _TDD_
is _Test-Driven_ development. It doesn't have to be unit tests. The point is
that you think first about what a valid specification is and you write a test
for it.

There's an anecdote from Dijkstra that I'll paraphrase:

 _Dijkstra was working on a problem where two programs running on a shared
memory computer were not allowed to enter their critical sections at the same
time. He tasked his graduate students with finding the algorithm that would
guarantee this. A student would provide a program. Dijkstra would review it
and find that it contained errors. The student would take the feedback and
produce a longer, more complicated program._

 _Tired and unable to find more time to review the increasingly complicated
programs, Dijkstra tasked his students to submit with their program a proof of
its correctness. Dijkstra then needed only to verify that he understood the
proof and that the program implemented it faithfully._

Shifting the burden of proof from the reviewer to the author made Dijkstra's
work much easier and the programs more robust.

I'm not suggesting we need to start writing proofs. We're practical industry
programmers who aren't working with such high-assurance software most of the
time. However, _unit tests_, weak as they are, are at least a form of proof.
Proof by example. For trivial code where a few examples would suffice to
convince you of its correctness I would say they are quite useful.

Overrated though? I don't think so. They're also useful as a design tool. The
OP gives an example of testing business logic that makes a bunch of HTTP API
calls. The author claims unit tests are of little value here.

Well how would you test that?

Me, I would defunctionalize the calls that execute the HTTP requests. I'd
provide an interpreter that the user can run their program in. The production
code could use an algebra that makes the HTTP requests. The test code could
use an algebra that stores requests made and returns canned responses.

It might seem like "extra code," but it gives us the ability to decouple our
logic from how it's executed. Not only can we test this without having to mock
our HTTP library (which is itself quite well tested), but this decoupling
opens new avenues for managing our program. We can imagine affixing to our
algebras some logging actions. We could write a development version of our
interpreter algebra that logs out everything and a production version which
masks secrets and sensitive information from the logs.
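
The idea above can be sketched as follows (a minimal, hypothetical Python illustration; all names are mine, not the commenter's): HTTP calls are represented as plain data, and an "interpreter" decides how to execute them. Production code would perform real requests; the test interpreter records the requests it sees and returns canned responses.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class HttpGet:
    """Requests as data: the logic describes calls instead of making them."""
    url: str

def has_positive_balance(interpret, customer_id):
    """Business logic written against the algebra, not an HTTP library."""
    profile = interpret(HttpGet("/customers/%d" % customer_id))
    return profile["balance"] > 0

@dataclass
class RecordingInterpreter:
    """Test-side algebra: records requests, returns canned responses."""
    responses: dict
    seen: list = field(default_factory=list)

    def __call__(self, request):
        self.seen.append(request)           # every request is observable
        return self.responses[request.url]  # canned response, no network

fake = RecordingInterpreter({"/customers/42": {"balance": 10}})
assert has_positive_balance(fake, 42) is True
assert fake.seen == [HttpGet("/customers/42")]  # exactly which calls were made
```

The same shape admits a logging interpreter or a secret-masking production interpreter, as the comment suggests, without touching the business logic.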

I've met programmers who can "just write the code," and they manage well for
themselves. However, I've also worked with such programmers who don't. The
former are quite rare, and working with either group is difficult, to say the
least. If I am reviewing a piece of code I need to check whether your thinking
is sound: did you consider the essential properties, edge cases, and did you
spend time proving that you've thought about them? I don't want to read 600
lines of code and try to understand it... it's too much. But a proof, even an
incomplete hand-waving one, I can understand.

That being said... sorry for the wall of text. Unit tests aren't the be-all-
and-end-all of testing. They're a beginning. And they can be a useful tool
when you're starting out. Look to property based tests. Think about the tests
before you write the code. And run the tests frequently and often.
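
A hand-rolled sketch of the property-based idea mentioned above (a real project would reach for a library such as Hypothesis; this toy uses only the standard library): instead of a few hand-picked examples, assert a property over many random inputs.

```python
import random

def run_length_encode(s):
    """Toy function under test: "aab" -> [["a", 2], ["b", 1]]."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1
        else:
            out.append([ch, 1])
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

rng = random.Random(0)  # seeded, so failures are reproducible
for _ in range(200):
    s = "".join(rng.choice("ab") for _ in range(rng.randint(0, 20)))
    # Property: decoding an encoding round-trips for *every* input,
    # not just the examples the author happened to think of.
    assert run_length_decode(run_length_encode(s)) == s
```

The property acts like the hand-waving proof described earlier: one line states what must always hold, rather than enumerating cases.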

And refactor, refactor, refactor.

------
sub7
Functional testing is underrated.

------
zulgan
this is our take on it: [https://eng.rekki.com/unit-testing-at-
rekki/t.txt](https://eng.rekki.com/unit-testing-at-rekki/t.txt)

TLDR:

    * test your core, make sure your core is strong
    * don't test your http api
    * don't mock
    * don't test writing and reading from the database
    * don't complicate your code to make it testable
    * ... unless you deem fit

~~~
kentosi
"don't mock"

How are you meant to write a unit test for a class without mocking out
external dependencies? Wouldn't that make it an integration test?

~~~
bencoder
Not the OP but I count integration testing as testing integrations /between/
systems - wherever your code depends on something outside your codebase. You
can mock this for unit testing.

Don't mock things that are already in your codebase. Use the actual object.
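
A small sketch of that distinction (hypothetical Python, with names I've invented): the in-codebase collaborator is used as the actual object, and only the boundary to an external system is replaced with a fake.

```python
class PriceCalculator:
    """In-codebase collaborator: use the real thing in tests."""
    def total(self, items):
        return sum(price for _, price in items)

class CheckoutService:
    def __init__(self, calculator, payment_gateway):
        self.calculator = calculator
        self.gateway = payment_gateway  # boundary to an external system

    def checkout(self, items):
        amount = self.calculator.total(items)
        self.gateway.charge(amount)
        return amount

class FakeGateway:
    """Only the external dependency is faked."""
    def __init__(self):
        self.charged = []
    def charge(self, amount):
        self.charged.append(amount)

gateway = FakeGateway()
service = CheckoutService(PriceCalculator(), gateway)  # real calculator, fake boundary
assert service.checkout([("tea", 3), ("mug", 7)]) == 10
assert gateway.charged == [10]
```

Because the real PriceCalculator is exercised, refactoring its internals never forces the test to change; only the observable boundary behaviour is pinned down.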

A common argument against this I've heard is "But then when something goes
wrong it is harder to figure out where the problem is" - I have never actually
experienced this myself, but I have, very often, experienced being reluctant
to do any refactoring because I'd have to rip up all the unit tests, since
they test only implementation details.

------
sys_64738
If it's not tested then it's broken.

------
cjfd
No, it is not overrated. The author is just using a not-so-handy definition of
'unit'. When a 'unit' is a single method/class/function 'unit' testing is not
in general a very wise thing to do. In most cases it is better to unit test a
set of related classes/methods/functions as a whole because the communication
between these things also needs to be tested. In that case it is quite
impossible to overrate the importance of unit testing. I also don't think the
definition the author uses is the original definition of 'unit'...

