
Software Testing Anti-patterns - kkapelon
http://blog.codepipes.com/testing/software-testing-antipatterns.html
======
agentultra
> Writing tests before the implementation code implies that you are certain
> about your final API, which may or may not be the case.

How does this myth continue to persist?

Writing the test first has nothing to do with knowing the final API. When I
write tests first I am _looking_ for an API. The tests help guide me towards a
nice API.

I personally find TDD in this manner works best when taking a _bottom-up_
approach with your units. I start with the lowest-level, simple operations. I
then build in layers on top of the prior layers. Eventually you end up with a
high-level API that starts to feel right and is already verified by the lower-
level unit tests.

Although this style of development has become less prevalent in my work with
Haskell as I can rely on the type system to guarantee assertions I used to
have to test for. This tends to make a _top-down_ approach more amenable where
I can start with a higher-level API and fill-in the lower-level details as I
go.
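A tiny sketch of what I mean by the tests guiding the API (function name and behaviour invented for illustration): the test is written first, and how it reads shapes the interface.

```python
# Hypothetical example: the test below was "written" first, and its
# readability suggested the API shape (one string in, one string out,
# no options object). The function is then written to satisfy it.

def slugify(title):
    # Implementation filled in after the test settled the API.
    return "-".join(title.lower().split())

def test_slugify_produces_url_safe_text():
    # Writing this call first made it obvious that a single-argument,
    # pure function was the nicest shape for the API.
    assert slugify("Hello World") == "hello-world"
```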

~~~
kalecserk
> I start with the lowest-level, simple operations. I then build in layers on
> top of the prior layers.

I was discussing this with a colleague this week: starting with the lower-level
details vs. starting from the more abstract/whole API. I prefer to start by
writing a final API candidate and its integration tests, and only write/derive
specific lower-level components/unit tests as they become required to advance
the integration test. My criticism of starting bottom-up is that you may end
up leaking implementation details into your tests and API because you have
already defined how the low-level components work. I have even seen cases
where the developer ends up making the public API more complex than necessary
due to the nature of the low-level components he has written. Food for
thought!

~~~
amzans
Your point of view resonates with me.

Over the years, I've learned to test "what a thing is supposed to do, not how
it does it". Usually this means writing high-level tests, or at least drafting
what using the API might end up looking like (be it a REST API or just a
library).

This approach comes with the benefit that you can focus on designing a
nice-to-use, modular API without worrying about how to implement it from the
start. And it tends to produce designs with fewer leaky abstractions.

Of course YMMV.

~~~
finaliteration
I just updated our “best practices” documentation to include the
recommendation that tests be written against a public API/class methods and
the various expected outcomes rather than testing each individual method.

I think the latter gives an inflated sense of coverage (“But we’re 95%
covered!”) but makes the tests far more brittle. What if you update a method
and the tests pass but now a chunk of the API that references that method is
broken, but you only happened to run a test for the method you changed?

I like to think I’m taking a more holistic view but I could also be deluding
myself. =)
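To illustrate what I put in the doc (a hedged sketch; the class and helper are invented), the idea is that internals are covered indirectly through the public entry point:

```python
class PriceCalculator:
    def total(self, items):
        # Public API: the only thing the tests exercise directly.
        return sum(self._line_total(item) for item in items)

    def _line_total(self, item):
        # Internal helper: covered indirectly via total(), so it can be
        # refactored or removed without breaking any test.
        qty, price = item
        return qty * price

def test_total_over_public_api():
    # Tests the expected outcome, not each individual method.
    assert PriceCalculator().total([(2, 5.0), (1, 3.0)]) == 13.0
```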

~~~
newfoundglory
Why doesn't your best practices document include a requirement to run the
whole test suite?

~~~
dllthomas
In some contexts there may be valuable tests that take long enough to run they
should be run out-of-band. That said, I don't see where the parent says they
_don't_ run the whole test suite.

~~~
newfoundglory
> now a chunk of the API that references that method is broken, but you only
> happened to run a test for the method you changed

~~~
dllthomas
Ah, yeah, not sure how I missed that. Maybe they meant the test for the broken
method isn't run because it doesn't exist, but I very much agree that that's
not the most natural interpretation.

~~~
finaliteration
Sorry, this wasn’t totally clear and I thought “run all tests” was implied.
What I was trying to get at was the difference between “we have a suite that
includes individual tests for every single separate method so we have great
coverage” vs “we have a suite of tests that run against public APIs that still
manage to touch the methods involved”. The former may test “everything” but
not in the right way, if that makes sense.

I should have said “you’ve only written a test” rather than only having run a
test.

------
antoineMoPa
In small companies where there is no time to "waste" on tests, my view is that
80% of the problems can be caught with 20% of the work by writing integration
tests that cover large areas of the application. Writing unit tests would be
ideal, but time-consuming. For a web project, that would involve testing all
pages for HTTP 200 (a < 1 hour bash script that will catch most major bugs)
and automatically testing most interfaces to see if filling in data and
clicking "save" works. Of course, for very important/dangerous/complex
algorithms in the code, unit tests are useful, but generally that represents a
very low fraction of a web application's code.
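The "all pages return HTTP 200" check can be sketched in a few lines (a hedged sketch; the page list and fetcher are stand-ins, and a real version would do actual HTTP requests against a staging host):

```python
# Minimal smoke test: report every page whose status code is not 200.
# The fetch callable is injected so the check itself stays trivial.

def smoke_test(pages, fetch):
    # Returns the list of pages that did not answer with HTTP 200.
    return [page for page in pages if fetch(page) != 200]

# Example run with a fake fetcher standing in for real HTTP requests:
fake_statuses = {"/": 200, "/about": 200, "/broken": 500}
failures = smoke_test(list(fake_statuses), fake_statuses.get)
```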

~~~
rdsubhas
Yep. There is a kind of Goldilocks zone:

- Company starts out, lots of change and RFCs; testing at a shallow depth
(high-level tests wherever possible) helps a ton

- [Goldilocks zone] Company and team settle down, predictable workflow, "not
changing the world" anymore; unit tests help

- Project goes into legacy, needs to be split/merged/refactored/rearchitected;
again unit tests don't help. Back to shallow tests and stranglers

Only a very small part of programming today is hardcore data science and
algorithms. The rest is all plumbing from A to B to C. I _always_ argued in
teams to focus on higher-level tests, but it's easy to get shouted down
because of "TDD". Sadly, very few people seem to focus on the objective rather
than the process.

~~~
murukesh_s
Yeah, exactly. It's often painful to see low-level unit tests written for the
sake of TDD when an API-level test would cover all the use cases anyway. The
leaner the code, the more maintainable it is.

------
d_burfoot
I don't understand why people are so enamored of the distinction between
"unit" tests and "integration" tests. That dichotomy has never seemed useful
to me. In my mind the relevant distinction is speed. There's one fast test
suite that you run before every code commit, and another slower suite that you
run every night on a cloud instance or testing machine, and review the results
in the morning. The fast/slow distinction is roughly comparable to the
unit/integration distinction, but I don't have any problem with testing a DB
interaction in the fast suite - and in fact consider it to be a good practice
- if the interaction is fast enough. In general the fast suite should check as
much as possible within the constraint of needing to be fast.

~~~
kkapelon
>The fast/slow distinction is roughly comparable to the unit/integration
distinction

I think you answered the question yourself. Speed is also shown in the table
that defines the types of tests.

You can name the categories foo tests and bar tests if you wish.

Anti-pattern 1 is companies that have only the "fast suite" you are talking
about, and anti-pattern 2 is companies that have only the "slower suite".

~~~
nwatson
I'd prefer that all testing be done with live data in a real database ...
maybe not at the very beginning, but as soon as practicable. I recently came
into a project with so many issues caused by differences between real and
assumed data, and unrealistic assumptions about real data, that it's been
hugely painful, with lots of back and forth between Dev and QA and Ops. With
Docker, there's now no excuse for not setting up, early in a project's
lifecycle, a realistic environment (infrastructure-topology-wise) with real
data on development and QA laptops.

With this you still have fast and slow perhaps, but everybody can run all but
the most extreme scaling tests.

~~~
kodr
That's an E2E test. What's interesting about unit tests is that you can run
them locally, offline. They are fast enough that you can run them and see if
you broke anything. If your tests take too long to start or rely on a live
env (which may be down or broken), you'll spend more time for maybe nothing.

~~~
nwatson
With Docker I can run my E2E "offline".

~~~
joshuamorton
If you're touching real data, your test isn't offline.

~~~
AstralStorm
Not true. You might be touching a local sample of real or realistic data. It
does not need to be static, either.

The performance of such a test is still a problem.

------
al2o3cr
Got to quibble with Antipattern #9 about test code - there's certainly a place
for extracting common patterns from test code (factories, complex setup) but
too much DRY in tests can make for hard-to-modify spaghetti. Sometimes it's
easier to just repeat a few lines of code in the name of clarity.

~~~
arthurdenture
Similarly, overly elaborate test code can itself be buggy. A specific instance
of this anti-pattern is when the test code duplicates the logic of the code
under test, logic which turns out to be incorrect.

In test code, I am a lot more tolerant of straight-line repetition in the name
of simplicity, in a manner that would be worth abstracting were it non-test
code.
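A contrived illustration of that specific anti-pattern (the function is invented): when the expected value is computed with the same formula as the code under test, a shared bug passes silently.

```python
def apply_discount(price, percent):
    # Code under test.
    return price * (1 - percent / 100)

def test_discount_duplicated_logic():
    # Anti-pattern: the expected value re-derives the production formula,
    # so if the formula is wrong, both sides are wrong together.
    assert apply_discount(200, 10) == 200 * (1 - 10 / 100)

def test_discount_hardcoded():
    # Safer: the expected value is worked out by hand and hardcoded.
    assert apply_discount(200, 10) == 180
```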

~~~
rdiddly
That's a sign that you need to write tests to test your tests. It's part of
treating your test code as a first-class citizen. I always write a test suite
for my test suite.

Just kidding.

~~~
squeaky-clean
I know this is a joke, but I do have a few "don't panic" tests that do
extremely simple stuff, like just run a test and return True immediately,
connect to db, connect to mock db, etc.

Sometimes I do something stupid like put an extra comma somewhere without
noticing, or update my db driver library without also updating my mock
library. I think we've all had that moment where tests fail or it doesn't
compile because you thought you copy-pasted some text into an email, but it
got pasted into your editor window.

So when I see that literally every test has failed, I know it was some really
dumb but likely simple mistake on my part. If 180/195 tests pass.... Shit.
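For the curious, my "don't panic" tests are roughly this trivial (a sketch; the exact checks depend on your stack):

```python
# Canary tests: if even these fail, the problem is the environment, a
# syntax error, or a broken dependency, not the logic under test.

def test_truth():
    # The simplest possible test: proves the suite itself can run.
    assert True

def test_stdlib_imports():
    # Proves the import machinery and a basic library are intact.
    import json
    assert json.loads("[1, 2]") == [1, 2]
```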

------
V-2
_"I have seen projects which have well designed feature code, but suffer from
tests with huge code duplication, hardcoded variables, copy-paste segments and
several other inefficiencies that would be considered inexcusable if found on
the main code."_

I agree with "code duplication" and "copy-paste segments" (although I fail to
see how they're two different things. It looks like example duplication to
me?)

I _don't_ agree with hardcoded variables. In tests, they're okay. Tests are
_not_ production code. In this case I side with Misko Hevery - see
[https://youtu.be/jVxmk-tVo7M?t=2m54s](https://youtu.be/jVxmk-tVo7M?t=2m54s)

Tests _should_ hardcode values, and (ideally) not contain _any_ logic, not
even of the simplest sort. That's the point: that's how we can reliably check
them against production code without risking bugs that fall off our radar
because the same faulty logic leaked from production into the tests. Tests
should be kept all dumb, all naive, all hardcoded.

That's the thing which proves most difficult to convince fellow developers
about, as they're typically used to mechanically transferring all the usual
best practices from production code into tests.

"Ooh, this is so hardcoded" - "Good." "This method name is so verbose" -
"Good; will we ever call this method _from code_?" "This could be private" -
"It's a class _with unit tests_; how will this code possibly be called from
some other class while it executes? Visibility modifiers are a waste of space
here." And so on.
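A tiny illustration of the "all dumb, all hardcoded" style (the converter is invented): no loops, no computed expectations, every value spelled out literally.

```python
def roman(n):
    # Code under test: tiny Roman-numeral converter for 1..4.
    return ["I", "II", "III", "IV"][n - 1]

def test_roman_dumb_and_hardcoded():
    # No logic in the test, so no faulty logic can leak into it from
    # production code. Every expected value is a literal.
    assert roman(1) == "I"
    assert roman(2) == "II"
    assert roman(3) == "III"
    assert roman(4) == "IV"
```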

------
koonsolo
I don't agree with Anti-Pattern 10 - Not converting production bugs to tests

I used to think this was a good idea too, until I saw real statistics on this
from a project.

This project (50+ developer team) tracked all bugs, and also whether they were
regressions or not. Almost no regressions occurred of bugs that had been fixed
before.

All testing needs to consider return on investment. The reality, at least for
that project, was that testing time was best spent elsewhere.

~~~
hamstercat
Doesn't that just mean that the tests were effective? The goal of doing a test
for each bug is to prevent them from happening again. Unless you meant that
this project didn't apply this particular advice but was getting away with it
just fine?

I personally believe that this advice is one of the most important ones,
actually, for a very specific reason: it leaves a very bad impression on the
client when a bug keeps happening again and again after being fixed.
Unfortunately I lived this experience when I was just starting out in my
career, on a similar project (around 45-50 developers) with basically no tests
at all. It wasn't fun explaining to the client, even though they were internal
to the company, that the bug we fixed last month had to be fixed again.
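To make it concrete, a sketch of the kind of bug-pinning test I mean (function, input, and ticket number all invented for illustration):

```python
def parse_amount(text):
    # The fixed code: originally crashed on inputs with a leading "+".
    return float(text.lstrip("+"))

def test_bug_1234_leading_plus_sign():
    # Reproduces the exact failing input from the bug report, so the
    # fix cannot be silently undone by a later refactor.
    assert parse_amount("+12.50") == 12.5
```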

~~~
coldtea
> _Doesn't that just mean that the tests were effective?_

Only in the sense that:

- What's this?

- A device that keeps tigers away.

- But there are no tigers in Los Angeles!

- See how effective it is?

> _The goal of doing a test for each bug is to prevent them from happening
> again. Unless you meant that this project didn't apply this particular
> advice but was getting away with it just fine?_

No, he means that they had code to test for the presence of fixed bugs, but
nobody ever reintroduced said bugs and triggered that bug-catching test code.
So even if they didn't have the code, the end result would have been the same.

~~~
closeparen
How would you know? If code is failing tests, it doesn’t leave my laptop.

~~~
coldtea
Because parent said they tracked the test suites.

Perhaps the tests don't run on the devs laptop, but on an integration system
(we have such a setup).

(And of course you can make test suites report failures centrally, whether
the tests run on a laptop or not.)

~~~
closeparen
They said they tracked _bugs._ A test failure during development doesn’t
generally go in the bug tracker.

------
falcolas
A small quibble with #8 (no manual testing) - I can't recall the number of
times a program has passed all the tests the _creator thought to write_, only
to fail when someone actually looked at the output.

Automated testing should not be the end of the testing. It should be the
beginning. Manual testing should be the last step, even if every manual test
is immediately automated - there's always more to test.

Also, we should be sure to include combinatorial and fuzz testing in that
pyramid as well, since skipping them leads to someone coming in with AFL and
exploiting the hell out of your app.
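Even without AFL, a crude fuzz loop is cheap to write (a standard-library-only sketch; real projects might reach for Hypothesis or AFL, and the encode/decode pair is invented):

```python
import random

def encode(data):
    # Code under test: a round-trippable encoding.
    return data.hex()

def decode(text):
    return bytes.fromhex(text)

def fuzz_roundtrip(trials=200, seed=0):
    # Throw random byte strings at the pair and check the property
    # "decode inverts encode" instead of a handful of chosen examples.
    rng = random.Random(seed)
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(32)))
        assert decode(encode(data)) == data
```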

~~~
scaryclam
Agreed. The number of times that manual tests have unearthed completely
unthought-of situations is huge. A good QA tester will do weird and wonderful
things that no developer ever thinks of doing, never mind testing for. Saying
you shouldn't manually test is like saying you shouldn't shower. Sure, you can
get away with it for a time, and it saves time! But eventually someone's going
to point out you smell bad. Having a customer have to tell you your product is
broken (smells bad), because you didn't know, because you didn't bother
testing, is seriously not what you want.

------
grosjona
>> Anti-Pattern 2 - Having integration tests without unit tests

I strongly disagree with this one. I think that unit tests only make sense
when the project's code has really settled down (not likely to change in the
future) and you want to lock it down to prevent new developers on the team
from accidentally breaking things.

Unit tests severely slow down development. I've worked on projects where it
takes 2 to 3 days to update a single property on a JSON object on a REST API
endpoint because changing a single property means that you have to update a
ton of unit tests. The cons of unit testing are:

- It locks down your code, so if your code is not structurally perfect (which
it definitely is not for most of the project's life-cycle) then you will have
to keep updating the tests as you write more code and more functions around.

- It encourages you to use certain patterns like dependency injection which
might make sense for some (e.g. statically typed) programming languages but
are unsuitable for other (e.g. dynamically typed) languages because they make
it difficult to track down dependencies.

- It only makes sense for parts of the project that have strict reliability
requirements and where any downtime/failure in that part of the code would
result in some loss of business. It's important not to underestimate the
maintenance cost of unit tests. More unit tests means much slower development
(cuts productivity to half or sometimes even a quarter of what it was without
tests for small teams), which means that you need to hire many times more
developers to get the same productivity that you could get from a single
developer. Sometimes it's OK if a part of the code breaks in non-critical
parts of the system; especially if you have some kind of user-feedback system
in place.

~~~
kkapelon
> I've worked on projects where it takes 2 to 3 days to update a single
> property on a JSON object on a REST API endpoint because changing a single
> property means that you have to update a ton of unit tests

You just described anti-pattern 5. Did you read the full article?

~~~
grosjona
Anti-pattern 5 is contradictory to the rest of the article...

>> Tests that need to be refactored all the time suffer from tight coupling
with the main code.

What is the author proposing? To write unit tests that are only 'loosely
coupled' to the code that they are testing? In my entire career, I've never
seen a single unit test case that matches this description.

If it's loosely coupled with the internal code then by definition, it's called
an integration test.

Anti-pattern number 5 is basically the author admitting that internal unit
testing is a problem in terms of productivity but then they fail to offer an
actual solution which doesn't contradict the rest of the article.

Sometimes your code needs refactoring, you need to change the fundamental
structure of how some objects interact with each other and when that's the
case, unit tests actually discourage you from making the necessary changes of
pulling the whole class definition apart (thereby invalidating all the unit
test cases for that class) and moving the code to smaller or more specialized
classes.

~~~
kkapelon
>In my entire career, I've never seen a single unit test case that matches
this description.

That is not an argument. The fact that you have been doing something your
entire career does not make it correct.

>If it's loosely coupled with the internal code then by definition, it's
called an integration test.

That is your own definition. The article defines an integration test right at
the start. It is ok if you have your own definition but that does not mean
that everybody has to agree with you.

>Anti-pattern number 5 is basically the author admitting that internal unit
testing is a problem in terms of productivity but then they fail to offer an
actual solution which doesn't contradict the rest of the article.

The article has an example and shows both the problem and the solution. The
solution is to make your tests not look at internal implementation. What more
could I do there?

>unit tests actually discourage you from making the necessary changes

you are just describing again what anti-pattern 5 says.

>What is the author proposing?

I am the author, so I know what I am proposing, that is for sure.

~~~
grosjona
>>In my entire career, I've never seen a single unit test case that matches
this description.

> That is not an argument. The fact that you have been doing something your
> entire career does not make it correct

I've worked for many different tech companies in my career (both startups and
corporations) and the vast majority of these unit tests were not written by
me.

Also I've worked on many open source projects. Same story.

~~~
kkapelon
>Also I've worked on many open source projects. Same story.

Let me put it this way. I am writing an article on how to keep your body
healthy and provide a list of common mistakes.

Anti-pattern 5 is "you should stop smoking".

And your argument is "I have been into too many companies (startups and
corporations) where people have been smoking all the time. So anti-pattern 5
is wrong."

Is this more clear?

------
adamlett
I had hoped that this would be a catalog of concrete code examples of common
or easy-to-make mistakes, along with concrete prescriptions for what to do
instead. Instead it turns out that it is mostly a long-winded collection of
opinions passed off as insights, along with a few general principles that
basically amount to _good tests are good_ and _bad tests are bad_. Who reads
this and is any wiser as to how to write valuable tests? Writing tests that
add value is hard; it is a skill that must be acquired. In my experience, most
developers who have given up on testing have done so because they'd been sold
the false notion that testing is trivial and always adds value. What they
found instead is that _testing is difficult_ (because they lack the skill),
and that the tests they did write were of questionable value. No wonder they
gave up on it. What these people need is not another article talking in loose
terms about the wonders and virtues of testing.

~~~
kkapelon
>of concrete code examples

Examples of what? Python? Ruby? Java? C++? You cannot please everybody. The
article is written in a way that touches all developers. And judging by the
feedback I got it has succeeded in this way.

>that basically amount to good tests are good and bad tests are bad

And the article helps people to understand which tests are good and which are
bad in the first place. If you are already an expert on the subject then maybe
you are the wrong audience.

>Who reads this and is any wiser as to how to write valuable tests?

If you think you can do better then by all means I am expecting your article
on the subject

>What these people need is not another article talking in loose terms about
the wonders and virtues of testing.

Please write the correct article then.

------
UK-Al05
I thought we got past the idea that a unit is a method or class.

The original interpretation means a unit of behaviour. When you start thinking
like that, integration tests become less important. But still important.

The idea that the unit in unit testing is a class was a misinterpretation.

~~~
kalecserk
Totally agree, and this misinterpretation got crystallized into our tools,
which often recommend/generate tests named
TestSubjectClassNameTest->eachMethodTest. We have to pay attention to that ;)

------
pimmen
"Anti-Pattern 2 - Having integration tests without unit tests"

The first project I worked on after graduating had this problem when I came
onboard: all the tests were slow as all hell, and 80% of each test did the
same thing (clearing the entire database and seeding in new test data, for
_every_ test). I felt like I had made a mistake becoming a programmer because
of how gruesome this anti-pattern is when you work with big Java applications.
Then we hired a senior developer three months later who promptly started
breaking up the tests after checking with the team that it was ok. The
productivity of the entire team increased by many orders of magnitude!

~~~
kkapelon
Yes this is exactly what I was talking about. The anti-patterns are mentioned
in order of appearance, so this is very common.

Don't let problems like these disappoint you. There are companies that don't
suffer from any of these anti-patterns.

------
jamesmccann
Really enjoyed reading this article as a Sunday afternoon long-read. Well
structured and covers a lot of things I have experienced but haven't formed
into such a clear description. Also really appreciate the realistic examples!

~~~
kkapelon
Thank you! Yes it took me a long time to write and thinking about good
examples is always hard...

------
rdsubhas
Superb article (some small nitpicks but they are ignorable). I would like to
add just one more anti-pattern:

_Focusing too much on fast feedback_

Nope. One should get priorities straight. The #1 purpose of tests is safety.
The "sleep well in the night" test. The "deploy with eyes closed" test.

Performance of the test suite, while "nice to have", is _not the core
objective_. If push comes to shove, if it comes down to safety vs performance,
safety should just win hands down as a principle.

Doing a bunch of mocks to speed up 80% of unit tests? Great, but it's a
borrowed debt, and must be balanced out with 20% of higher-level tests.

~~~
keithnz
I find this a bit of a strange statement.

I started with Extreme Programming in '99 and grew and evolved through all the
refinements to the process around testing. It's always been about providing
safety. Fast has always been about not running silly tests that take too long.

Fast Feedback is a core objective. It is not in competition with safety,
safety is an integral part. Safety is the feedback you are expecting. We want
to know about that safety fast.

A popular term that came around in the early 2000s was "Brain Engaged",
meaning you needed always to be aware of why you were doing things and not
following blind rules. Meaning you need to know the purpose of going fast.

The whole point is to go as fast as possible safely.

Some of the biggest challenges are how to get things quick while maintaining
safety. Kind of makes no sense to have fast tests with no safety.

Now you mention mocks, and I have seen people mock in very strange ways that
devalue tests.

I like the general guidance from Kent Beck "I get paid for code that works,
not for tests, so my philosophy is to test as little as possible to reach a
given level of confidence"

------
sorokod
"Linux command line utility ... In this contrived example you would need:

    Lots and lots of unit tests for the mathematical equations.
    Some integration tests for the CSV reading and JSON writing
    No UI tests because there is no UI."

As with all command line apps, the command line is the UI and requires testing
(input parameters, error messages, etc.).
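That command-line surface can be tested directly (a hedged sketch in Python; the tool name and flags are invented, since the article's example is hypothetical anyway):

```python
import argparse

def build_parser():
    # The CLI "UI": flags, defaults, and required arguments.
    parser = argparse.ArgumentParser(prog="csv2json")
    parser.add_argument("--input", required=True)
    parser.add_argument("--indent", type=int, default=2)
    return parser

def test_cli_parses_valid_arguments():
    args = build_parser().parse_args(["--input", "data.csv"])
    assert args.input == "data.csv" and args.indent == 2

def test_cli_rejects_missing_input():
    # A bad invocation must fail loudly with a usage error, not proceed.
    try:
        build_parser().parse_args([])
    except SystemExit:
        pass
    else:
        assert False, "expected a usage error"
```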

~~~
kkapelon
You are right. I will change the wording to say GUI instead of just UI.

------
amelius
In cases where manual testing is the only option, is it an anti-pattern to let
the testing be done by the person who developed the code?

As a developer, I feel that others should test my code. Is that reasonable?

~~~
mic47
Why should others test your code?

~~~
ben-schaaf
Same reason you don't review your own code, or pair program by talking to
yourself.

~~~
stinos
This seems a bit of a false generalization to me. For a long time I've been
the sole developer on multiple large-ish applications. So of course I review
my own code, and I write my own tests. And yes as another commenter mentions:
if you don't pay attention this can lead to tunnel vision, and yes it can be
hard to be consistent about it.

I discovered those drawbacks fairly early, so I quickly made it a habit to be
super-strict and critical about pretty much every single line of code I write
(with regard to style, functionality, how it adheres to standard practices,
etc.).

What I also do consistently is review older code: if I have to add a function
to existing code, it happens more often than not that I'll re-read and review
the surrounding bits, often coming up with better code if time permits;
otherwise I take note and go back to it later.

In practice it turns out that if I haven't touched code for months, reviewing
it is almost the same as reviewing someone else's code, i.e. starting with a
clean slate. Which is also why I'll sometimes write something, commit it, but
not yet merge it to production. Then a week or so later I'll come back and
review every bit again.

~~~
ben-schaaf
Oh, I completely agree, though I wouldn't call writing code carefully and
methodically a "code review" if that makes sense.

I personally find that me a couple months down the track is pretty much
another person anyway. In a sense you're not reviewing your own code, you're
letting it be reviewed by future you. But if you're in a team with multiple
developers, having someone else review your code would be more efficient.

~~~
_asummers
I tend to review my code with a reviewer hat, not with the author hat, before
I open the PR. Find all the places where I'd call myself out for lazy names,
poor abstractions, missing docs, etc. then go in and fix those. Repeat until
satisfied, then open PR.

------
kalev
These are exactly the posts I need to level up my game: more abstract,
top-level views without going into detail on setting up system-x to make it
work; I can figure that out myself! Any more of these types of articles you
guys and girls recommend?

~~~
kkapelon
Yes, I wrote it because this is my main complaint. The internet is full of
articles that deal with a specific, very narrow topic and never give away the
big picture.

Frankly the only good source of such material is books (especially the initial
chapters where they set the stage for everything)

If somebody else knows where these types of articles exist, I would also like
to know where they are.

~~~
avinium
Great article. I don't agree with 100% of your points but testing is a field
where everyone has their own opinion shaped by their own experience, and I
think you've covered every approach (along with its pros and cons)
exceptionally well.

OOC, how long did this take you to write?

~~~
kkapelon
I started in November. You can see the full history here
[https://github.com/kkapelon/kkapelon.github.io/commits/maste...](https://github.com/kkapelon/kkapelon.github.io/commits/master)

Most of my time was spent thinking up the examples and trying to structure the
content. The actual writing was very easy once I knew in my mind what I wanted
to say (as is always the case with technical writing).

------
bni
Unit tests are a tool to help you write the implementation and verify that it
works as you intended.

You could do the same thing by "F5"-ing it, that is, running the full app, but
the problem with that is that it is slower and, above all, NOT repeatable
later without significant setup time.

Writing the test, setting breakpoints, and debugging the implementation by
running the unit test is by far the fastest way to develop, while also having
at least some assurance that the code you are writing is "correct".

------
BeetleB
Reminding me of too many headaches at work.

We have all integration tests, and no unit tests. I and several others have
pushed for unit tests, but with little success. Our full test suite takes 10
hours to run. We have split it up so we can test what we think is the portion
we are modifying, but we're never 100% sure something in the rest of the suite
isn't dependent on our changes.

I disagree a little with his complexity multiplying for his anti-pattern about
no unit tests. In theory, yes, you would multiply them. In practice, it is
rarely the case that you need to test all combinations; real code rarely looks
like his diagram. In our experience with our project, the final number of
branches we need to test is much closer to adding than to multiplying.

I very much agree with the "don't test internal implementation". If the
primary reason your tests fail is because you refactored or made API changes,
your test suite is not robust.

Am having to live with flaky tests right now. Horrible. Our team doesn't want
to prioritize fixing them.

One anti-pattern he left out: making unit tests a 1:1 match with code, and
insisting that a unit test should not test more than one function. I know the
community is split, but I am very much in the camp that a "unit" should not be
tied to a function. Don't make it that granular.

------
emilsedgh
So we have a REST API created using Node, and most of the work happens in
Postgres and SQL files.

Basically, 95% of the methods depend on the database (and its current state).

How can I unit test this?

I've given up on unit tests. E2E tests are helping us a lot, but despite
several attempts at unit tests, they just don't make any sense for us.

~~~
abritinthebay
Those methods are still data in, data out though, right?

So test that with mocked data.

Unit tests should be able to run with no network connection, no other service,
just your test file + data.

Otherwise at that point you’re doing integration testing, not unit tests.

~~~
emilsedgh
Let's say I have a `SomeEntity.get` function.

Basically it gets an array of ids, calls the database, and fetches those
entities.

~10 lines of JavaScript code.

40 lines of SQL, which also depends on the state of the database.

So what do I gain by mocking the database? Making sure those 10 lines of
JavaScript work fine? In a vacuum? They are the 10 easy lines anyway, and they
are covered by my E2E test.

What matters is the SQL and how it behaves in different states of the
database.

~~~
abritinthebay
And that’s why you test the DB with an integration test (and likely a
functional test too).

For your unit test you’d just mock the DB call and make sure the function
called the “db” and returned the faux data. That’s it.

No one said unit tests had to be complex or test the whole stack. Quite the
opposite actually.

------
dwheeler
Most of this article I agree with, such as the need for BOTH unit and
integration testing, the need to focus on automation, and the need to turn
regressions (bugs) into new tests.

I also agree that "Paying excessive attention to test coverage" is not good.
However, I completely disagree with much of its supporting text. If your test
code coverage is only 20%, then by DEFINITION your tests are awful. That would
mean that 80% of your code is completely untested. I agree that for many
programs 100% code coverage is not worth the effort, because those last few
percentage points cost more than their benefit, but that doesn't mean that
such low coverage makes sense. Most organizations I've worked with recommend at least
80% statement coverage, as a rule of thumb. I haven't seen any studies
justifying this, but this essay doesn't cite anything to justify its claims
either :-). You'd want much higher statement coverage, and also measure branch
coverage, if software errors are serious (e.g., if someone could be physically
harmed by an error). You should focus on creating good bang-for-buck tests
first; code coverage is then a useful tool to help identify "code that isn't
getting well-tested at all." It's also useful as a warning: 100% coverage may
still be poorly tested, but low coverage (say less than 70%) means the program
_definitely_ has a _terrible_ test suite.

This statement is misleading: "You can have a project with 100% code coverage
that still has bugs and problems." That's true for ANY program, regardless of
its testing regime, because any testing regime can only test an astronomically
small fraction of the possible input space. A program that just adds 2 64-bit
numbers has 2^128 possible inputs; real programs have more.
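
For what it's worth, a floor like the 80% rule of thumb can be enforced mechanically. In a Node project, for example, an nyc (Istanbul) `.nycrc` along these lines fails the build when coverage drops below the thresholds; the numbers here are placeholders for illustration, not a recommendation:

```json
{
  "check-coverage": true,
  "statements": 80,
  "branches": 70,
  "functions": 80,
  "lines": 80
}
```

The right threshold is, of course, a per-project decision.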

~~~
kkapelon
>That would mean that 80% of your code is completely untested

That is not always a bad thing. Depending on the application, this 80% may be
trivial code that never breaks. Anti-pattern 4 explains that you should start
with the critical code first.

>Most organizations I've worked with recommend at least 80% statement
coverage, as a rule of thumb

This number is making MANY assumptions.

I would demand different code coverage from an application that runs on a
nuclear reactor than from an application that is used as a point of sale in a
small pizza restaurant.

------
Toine
Refreshing article. We hear so many religious arguments for one side or the
other these days; we really need more of these.

I think there's a deep issue that causes all the misunderstanding, the
elephant in the room: the definition of a "unit". Words have a meaning in a
certain context: if people don't mean the same thing when using the same word,
they're doomed to misunderstand each other forever. Just ask 5 different
people what a unit is and you'll get at least 3 different definitions. The
most common one is: in OOP, a unit is a class.

From my experience, a unit should be defined at a much higher abstraction
level than that. A better definition would be: "a set of use cases that
belong to the same module". In other words, unit tests should be written in
a language as close as possible to your domain language. Or: "test your use
cases, not your classes". When you do that, you usually write tests for a few
major classes that use all your other classes, which are just implementation
details. This leads to tests that are far more reliable and easy to maintain,
because they have very low coupling to the rest of your application.
Typically, this means testing classes at the very edge of your app, classes
that directly communicate with the end user, usually services or something
like that.

Let's say you "unit test" a simple car with a steering module. It has all
sorts of complex internal mechanisms that IMO you generally don't need to unit
test directly. What you need to know is whether the business value is
correctly delivered to the driver, i.e.:

- when he turns the steering wheel left, does the car turn left

- when he brakes, does the car stop

- when he presses the gas pedal, does the car accelerate

- etc.

Under the hood there are dozens of other classes that perform actions that you
don't really need to care about when you test. They will be tested indirectly
anyway, because they're used by the high-level classes you do test. I think
many people blindly try to unit test almost every class they write, and it
leads to code duplication all over the place and all sorts of other problems
that make projects fail, make people angry, and make them think unit tests
are bad.

~~~
TheCoelacanth
I have observed the same thing. I have seen way too many tests written with
the class = unit philosophy that look more like tests of the VM running them
than tests of the code that was supposed to be tested.

------
sethammons
> Anti-Pattern 5 - Testing internal implementation

I run up against this the most. The internal state often does not matter; what
is important is the behavior. Refactoring internal state should not break a
bunch of tests.

~~~
rimliu
Then you will probably like this:
[https://www.youtube.com/watch?v=URSWYvyc42M](https://www.youtube.com/watch?v=URSWYvyc42M)

~~~
sethammons
Blast from the past; I had forgotten about that one. Thanks

------
keithnz
One thing that seems not to have been commented on is "Play Testing /
Exploratory Testing" (and, semi-related, "dogfooding"). This is the manual
test process where someone actually uses the software and looks for problems.
Problems are then captured as new automated test cases. This is about finding
undesirable unforeseen consequences (sometimes you get desirable ones! Though
if one is desirable, you will want to capture it in a test as well).

------
johnwatson11218
I have also noticed that many of the frameworks for writing unit/integration
tests are so flexible that developers will easily develop a mini-framework
that lets their specific tests be expressed in a declarative manner. The
downside is that their setup and system isn't really usable for other
tests.

I think devs need to keep it simple, even if it leads to more code, so that a
wider range of tests can be written with the same abstractions.

------
amleszk
I personally don't like the word 'antipattern' because it sounds like a TDD
religion term.

Nonetheless, the article does a good job covering lots of the ways writing
tests can result in a bad outcome. Much like the reasons developers dislike
certain architecture patterns, it's not the pattern that's at fault but the
application of it by individuals and teams.

~~~
kkapelon
I thought about using "common pitfalls". Do you think that would be better?

------
luord
Great read, and many points I'll consider, particularly the first anti-
pattern. I think that writing too much code that interfaces with other
services (which in turn leads to more integration testing) can come from NIH
syndrome, as it's likely that that code just repeats functionality already
provided by the service's API/library.

------
fadzlan
I understand that if the test hits the database, then it's an integration test.

But what if a whole load of the logic is in stored procedures? Surely those
need tests too.

Of course, one can make the argument that depending too much on stored
procedures is an anti-pattern....

~~~
atom-morgan
The stored procedure should be tested within the domain of the stored
procedure. The consumer of its result just tests its ability to use the result
it should be given (fulfilling its end of the contract).

------
steindavidb
This website is unreadable on iPad; the font size changes every time I scroll.

~~~
kkapelon
I have an iPad 1 and it shows just fine. Can you post a screenshot/video
somewhere with the problem?

~~~
hcs
Here it is on a 5th gen iPad with iOS 10.3.3:
[https://youtu.be/5gqbL_fhy0U](https://youtu.be/5gqbL_fhy0U)

Works fine with Reader View, though.

------
Yizahi
Anti-pattern 14 - generalizing all testing in the industry to e-shops and
websites

~~~
kkapelon
I am open to other suggestions. I have really spent a lot of time trying to
come up with another example and cannot seem to find one.

The other alternative would be a loan approval application.

Remember that the definition of a developer is very broad nowadays. You have
hackers working with C/Assembly on firmware, all the way to AI/ML with high
level languages.

Do you have another suggestion for a good example?

------
amarant
I can't say I agree with this, partly because it's too generic. I'm of the
opinion that you should focus on testing what you deliver. If you're building
a framework or library, then unit tests are probably the way to go. If you're
developing a micro service that's primarily going to be called by other
services, built by other teams for example, you should probably focus on
integration tests. If you have an external API, you should focus on e2e tests,
as your clients won't know or care that you have several individually well
tested microservices behind your API or if it's a monolith. They only care
that a given API call has the expected result. The pyramid thing is pretty,
but also meaningless IMHO.

~~~
kkapelon
You are just repeating _exactly_ what I am saying in anti-pattern 3.

~~~
amarant
Upon re-reading, I think I may have misread large parts of your article. Sorry
about that; these are actually great tips unless you misunderstand them ;-)

At first I took your comments about "the shape is _not_ a pyramid" to mean
there was something wrong with that and that the shape should be a pyramid.
Reading a bit more carefully, I now see that's the opposite of what you're
saying... I blame everything on English being my second language O:-)

~~~
kkapelon
It is ok. English is my second language as well.

