

5 Questions Every Unit Test Must Answer - ericelliott
https://medium.com/javascript-scene/what-every-unit-test-needs-f6cd34d9836d

======
dalke
I disagree with some of the points made here.

> Design aid: Writing tests first gives you a clearer perspective on the ideal
> API design.

Unit tests are usually too low-level to give useful feedback on API design. My
experience is that functional tests, based on use case scenarios, give much
better feedback.

Standard TDD-based design doesn't really have a way to specify which parts
belong to the "public" API and which exist only for testing. Instead, it
usually implies that everything is part of the public API, which locks in the
architecture rather early. Are there common practices for distinguishing
between the two?

> Feature documentation (for developers): Test descriptions enshrine in code
> every implemented feature requirement.

Unit tests rarely encode performance requirements. For example, you might have
a feature requirement that the code process 10 M records in 2 minutes, or that
a given implementation must have worst-case N log N performance. While it's
possible to use a unit test framework to write these sorts of functional
tests, they are not unit tests.

> Manual QA is error prone. In my experience, it’s impossible for a developer
> to remember all features that need testing after making a change to
> refactor, add new features, or remove features.

Manual QA is a skill in its own right. This is why you have QA staff, who
develop and implement a testing plan. Developers rarely have QA experience, so
it's not surprising that they would have more problems.

> Automated QA affords the opportunity to automatically prevent broken builds
> from being deployed to production

Agreed, but as James Coplien pointed out in "Why Most Unit Testing is Waste":

> You’ll probably get better return on your investment by automating
> integration tests, bug regression tests, and system tests than by automating
> unit tests.

~~~
ericelliott
> Unit tests are usually too low-level to give feedback into the API design.
> My experience is that functional tests, based on use case scenarios, give
> much better feedback.

I was talking about the API of the unit under test. In JavaScript terms, this
typically means the public API of the module being tested. APIs are developer
UX, and providing a good developer UX is great for code quality.

> Standard TDD-based design doesn't really have a way to specify which is part
> of the "public" API vs. the testing API.

You should only test the module's public API. If there are functions used
internally by that module that also need testing, those bits should be in
their own modules with their own unit tests.

> Unit tests rarely encode performance requirements.

Correct, but mostly because the biggest perf problems don't happen at the unit
level. They happen at integration levels where things like network
communications and disk access happen. Unit tests document the unit's
requirements, not the app's requirements. That's why it's also important to
have functional tests, and depending on needs, integration tests as well, but
even those usually don't measure perf. That's where load testing and app
profiling come into play.

All of that is important, and I'm not trying to downplay any of that. However,
none of those things can replace unit tests, and the benefits of documenting
unit functionality in code.

> Manual QA is a skill in its own right. This is why you have QA staff

Agreed, but it would be extremely inefficient for a developer to ask the QA
team for help testing every time they make a change to a unit under
development. Automating unit-level QA can save time by automatically checking
the full behavior suite of the unit under test every time the developer saves
a file. Studies have shown that far fewer errors make it into the released
product when developers use a TDD process, which saves developer time, saves
QA time, and improves the user experience, because there are fewer product
defects.

> You’ll probably get better return on your investment by automating
> integration tests, bug regression tests, and system tests than by automating
> unit tests.

That's one person's opinion. The empirical evidence linked to in the article
says otherwise.

In the article you refer to, James Coplien betrays his misunderstanding about
the ideal role and use of unit tests by talking about refactoring already
existing functions into smaller, more testable functions. Using the TDD
process (test first), you write the assertions before you implement the
functions, so no such refactoring is necessary.

He also talks about the futility of large code coverage from the perspective
of somebody going back over code and adding tests later, which can indeed be
quite challenging, but if you're following TDD, you don't write code paths
that do not satisfy a test assertion, which guarantees 100% code coverage.

It would be silly to throw out a process that has been tried and proven
because of the opinion of somebody who doesn't even understand the process to
begin with. =)

