
TDD, where did ‘I’ go wrong - hashx
https://frankcode.wordpress.com/2014/07/01/tdd-where-did-i-go-wrong/
======
chatmasta
The author suggests redirecting the scope of "TDD" to testing his app's public
API endpoints, rather than internal functions. His logic is that public API
endpoints represent the contracts for business logic with customers, and
therefore are the highest level that can be accurately tested.

I am not convinced of this argument. Obviously, it's important to heavily test
the endpoints of any API you make public. In fact, that's a situation where
you want 100% coverage. However, testing the endpoints of an API does not
allow you to forgo testing its internals. At best, a passing suite of
endpoint tests should give you _reasonable_ confidence that the internals
are functioning properly.

An API is an abstraction over many moving parts. Any good API will consolidate
thousands of lines of business logic into a few endpoints. There is a lot of
room for error in the layer of abstraction between API and internal logic.
It's entirely possible for API endpoints to appear to be functioning
properly while actually relying on broken internal code.

For example, consider a fruit basket API. You can insert fruit into the
basket, and check what fruit is in the basket. A suite of tests for the API
endpoints could insert fruit, and then check that it's there. In this case,
the API is hiding a lot of internal logic. Storage mechanisms, data
persistence, fault tolerance, and a slew of other implementation decisions
are completely opaque to the API consumer.

What if the internal code incorrectly stores the fruit in a temporary file?
The API test will pass if it inserts fruit and then checks that it's there.
But it is not going to check for that same fruit in an hour. What if it's
gone?
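
To make the failure mode concrete, here is a minimal sketch in Python; the
FruitBasket class and its temp-file bug are invented for illustration:

    import os
    import tempfile

    class FruitBasket:
        """Hypothetical service. The public API is add/contains;
        persistence is an internal detail the consumer never sees."""

        def __init__(self):
            # BUG: fruit is "persisted" to a temp file that the OS is
            # free to clean up; nothing in the API exposes this.
            self._path = os.path.join(tempfile.gettempdir(), "basket.txt")

        def add(self, fruit):
            with open(self._path, "a") as f:
                f.write(fruit + "\n")

        def contains(self, fruit):
            if not os.path.exists(self._path):
                return False
            with open(self._path) as f:
                return fruit in f.read().splitlines()

    def test_insert_then_check():
        # Endpoint-style test: it passes, because insert-then-read works
        # within a single run. It says nothing about the fruit surviving
        # an hour, a restart, or a tmp-cleanup job.
        basket = FruitBasket()
        basket.add("apple")
        assert basket.contains("apple")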

Internal logic like data persistence is (rightfully) opaque to API consumers.
That means that no testing of API endpoints can validate all internal logic.
Therefore, you cannot ignore testing the internal logic in favor of only
testing API endpoints.

~~~
rpedela
In theory yes, in practice I mostly disagree. The ideal is to test
everything: the public API and all the internals. But there are only so
many hours in the day, so if you have to choose (which most of us do), the
best place to put the majority of the testing effort is the public API. The
reality is that there will always be bugs...always. While 100% bug-free is
a valiant goal, it is pretty much impossible to achieve in a complex
system. However, what is achievable (or more achievable, at least) is being
almost completely bug-free at the public API level. From a business point
of view, this is the only thing that really matters, because it is the only
thing your users and customers care about.

In your fruit basket example, there are two fixes. Either improve the
public API tests to verify that the fruit is still there an hour later, or
use a storage technology you can assume works correctly (Postgres, S3,
etc.) and then have a code review.
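
Sketched with the hypothetical FruitBasket from the parent comment, the
first fix might look like this; simulating a restart stands in for the
literal one-hour wait:

    def test_fruit_survives_restart():
        basket = FruitBasket()
        basket.add("apple")

        # Stand-in for "an hour later": a fresh instance, as after a
        # process restart. A fuller suite might also clear the temp dir
        # to approximate the OS reclaiming temporary files.
        basket = FruitBasket()
        assert basket.contains("apple")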

There is definitely an exception to "only need to test the public API":
when a very complex component has to be built from scratch, or nearly from
scratch, in order to support the public API. For example, I personally need
to build a Lucene-based search server (Solr and ES don't fit my use case).
While I shouldn't need unit tests for Lucene itself, I do need unit tests
to verify that the threading model, file structure, and a couple of other
things I write from scratch are correct, because the complexity is so high
and I don't trust myself or anyone else to write it correctly the first
time.

------
morgante
I'm not sure why the author attempts to redefine "unit testing" as a higher
level of testing. It seems pretty well-accepted that most API tests are, in
fact, integration tests. There's no need to use the term "unit test" in
this case.

Otherwise, I quite agree. I don't write unit tests. A complete set of
integration tests gives me about 80% confidence in the correctness of code.
Unit tests might bump that up to 90%, but you'll never get to 100% from
testing alone and the substantial effort necessary to get there just isn't
worth it.

Testing follows the classic 80/20 rule. Integration tests only take 20% of the
effort required for full unit test coverage, but can give you the bulk (80%)
of the benefits. For all but the most sensitive of applications, it's probably
not worth it to put in the additional 80% effort for a mere 20% gain.

~~~
BugBrother
I am not arguing against the integration test point, but I do see a value in
unit tests.

When I write/modify a piece of code, I of course must see that it works. I
run the new code, feed in some data and check the results.

If I separate the UI and the backend/model, I can just write code that
feeds data to the model to see that it works (instead of doing it ad hoc,
by hand). Then I save that as a unit test. It is part of the documentation
too (a use case).

Cheap, easy and with good value for effort. (Depending on problem domain.)

Edit: I can see where the "test induced design damage" comes from (mocking
is bad), but I think code also often gets better when it is made testable
(dependency injection, thinking about fan-out/fan-in, etc.).
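
A sketch of both points, with invented names: the model is fed data
directly (no UI), and the storage is injected, so the test needs no mocking
framework at all:

    class Basket:
        def __init__(self, storage):
            self._storage = storage      # injected dependency

        def add(self, fruit):
            self._storage.append(fruit)

        def fruits(self):
            return list(self._storage)

    def test_add_fruit():
        storage = []                     # plain list stands in for the DB
        basket = Basket(storage)
        basket.add("apple")
        assert basket.fruits() == ["apple"]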

~~~
morgante
It sounds like you're writing functional or integration tests, not method-by-
method unit tests.

That's all easily accomplished with integration tests. Generally, any good
architecture makes a separation between the UI and the backend. You should
absolutely have integration tests for the backend by itself, but I don't think
it is necessary to get to the level of unit tests (testing every single method
used in the backend).

Personally, I try to separate the backend into its own separate service. The
first step is to write tests against the API for that service, and then to
make those tests pass by completing the service. This has the added benefit of
letting the tests serve as spec documentation, which makes it very easy to
farm out implementation to employees or contractors.
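
A sketch of what that first step might look like, assuming the backend runs
as a small HTTP service; the endpoints, port, and payload shape are all
invented for illustration:

    import requests  # assumes the backend is a small HTTP service

    BASE = "http://localhost:8000"       # hypothetical service address

    def test_add_and_list_fruit():
        # Written before the service exists; this doubles as the spec
        # the implementer has to satisfy.
        r = requests.post(BASE + "/basket", json={"fruit": "apple"})
        assert r.status_code == 201

        r = requests.get(BASE + "/basket")
        assert "apple" in r.json()["fruit"]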

~~~
BugBrother
Sorry, missed the answer. How can quick tests to see that the API (and
internal stuff) works _not_ be unit tests? :-)

But sure, it is a discussion of what we should call the useful tests. I
have (also) burned out on >80% test coverage when the specifications aren't
written in stone.

------
corporealshift
I think there's a lot of value in balancing this approach with smaller sized
tests, especially around complex code.

The problem the article doesn't address is that if a test fails, finding
the failure point isn't as straightforward as it is with smaller units.

However, the points made here and in the linked discussions are valid, and
I think this is the right approach to testing. But there's still a lot of
value in writing and maintaining tests around smaller units of code.
Balancing the two approaches gives us solid test coverage along with
easy-to-debug test failures.

------
derefr
When you have a service-oriented architecture with small, atomic services
("microservices"), unit tests are indistinguishable from functional tests:
each test suite covers exactly one unit, which performs exactly one function,
which maps to exactly one service with exactly one API. (Handily, in such a
scenario, the service-registry also doubles as your mocking framework.)
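
A sketch of the registry-as-mock idea; the registry shape, service names,
and the fake are all invented for illustration:

    REGISTRY = {}                        # stand-in for a service registry

    class Checkout:
        """Service under test; resolves its dependency by name."""
        def purchase(self, amount):
            return REGISTRY["payments"].charge(amount)

    class FakePayments:
        """Test double registered under the real service's name."""
        def charge(self, amount):
            self.charged = amount
            return "ok"

    def test_purchase_charges_payments():
        fake = FakePayments()
        REGISTRY["payments"] = fake      # the registry is the mock seam
        assert Checkout().purchase(100) == "ok"
        assert fake.charged == 100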

I think unit tests do have one important use: testing library functions, e.g.
cryptographic primitives. Most people aren't writing "library code", though;
they're employing it.

------
CHY872
Erm, this smacks a little of absolutism. In many cases you'll definitely
want to test at the method level. If the behaviour of a single method is
complex but it's not part of the API, then to test it properly you'll
likely still need a similar number of tests (to properly pinpoint a bug);
you'll just have to move them up to the API level. Furthermore, the tests
you need to come up with are far more complex, as they also have to
anticipate the outputs of each function call along the way (in order to
obtain the same coverage).
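
For instance (the helper function and its edge cases are invented), pinning
down a tricky internal helper directly takes one line per case, where the
equivalent API-level tests would each need a whole request crafted to reach
it:

    def normalize_discount_code(code):
        # Complex, non-public helper: worth testing directly even
        # though no API endpoint exposes it as-is.
        return code.strip().upper().replace("-", "")

    def test_normalize_discount_code():
        assert normalize_discount_code("  ab-12 ") == "AB12"
        assert normalize_discount_code("AB12") == "AB12"
        assert normalize_discount_code("") == ""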

It's correct that by reducing the number of units and making them more
broadly scoped, you potentially reduce the number of faults you catch
(though you need not), but you also make crafting the inputs that detect
the remaining faults much more difficult.

The correct approach, it seems obvious to me, is to mix the two. Don't
force tests onto a per-method basis, but also don't neglect to test
individually those methods that are better tested in isolation. It seems
like common sense: following a policy like this too rigidly clearly leads
either to wasted time and inferior APIs or to missed test cases. The former
I believe the OP has discovered already; the latter I'm sure they'll
discover in due time.

[http://www.quickmeme.com/img/0f/0fb4fa35fad1b9ed112dc7584f47...](http://www.quickmeme.com/img/0f/0fb4fa35fad1b9ed112dc7584f47c531cf14a1c3c55d64bed93d33d5330dfcd1.jpg)

~~~
digitalpacman
He actually says in the talk that it's appropriate to test a method when it's
the flow of the method you're trying to describe. But they are throw-away
tests when the implementation changes.

------
stephenbez
This sounds a lot like Mock Client Testing:
[http://www.jaredrichardson.net/blog/2005/06/20/](http://www.jaredrichardson.net/blog/2005/06/20/)

(I think the writeup in the Ship It! book was better than this post)

------
CmonDev
When unit tests are your only tool for verification, everything looks like a
TDD target.

