
How deep are your unit tests? - thallavajhula
http://stackoverflow.com/questions/153234/how-deep-are-your-unit-tests/153565#153565
======
raverbashing
Several successful projects exist without TDD (or even unit tests for that
matter)

The whole of GNU+Linux for example (or at least most of it)

On the other hand, I've seen several projects claim "very good TDD coverage"
and then crash and burn when put into production (usually posted to HN)

Real world, real usage testing is essential. TDD is good for keeping you on
track and avoiding regressions

It's also good for complicated (small) pieces of software that do something
complex and are prone to having their behaviour adjusted over time (think:
reports, data consolidation or analysis, calculations, etc)

~~~
fruiapps
If rms read it, he would suggest editing GNU+Linux to GNU/Linux.

~~~
raverbashing
Yes, I wrote it like this to be clear that I mean the Linux kernel and the GNU
userspace

------
huhtenberg
I find that simply assert'ing all non-trivial assumptions and invariants in
the code is the form of unit testing that works best in a heck of a lot of
cases.

If there's a need for explicit tests, just use the code in the right way and
see it exit cleanly. Then use the code in the wrong way and see it assert
(first replacing the assert() with a throw/longjmp to catch failures without
aborting).

Asserts also double as concise context documentation. When an assert is
triggered, it's typically easy to see what its condition means and what has
gone wrong. So it's a win-win all around :)

~~~
FooBarWidget
I've found asserts to be useful _in addition_ to unit testing. The advantage
of automated tests is that you don't have to run tests manually. Asserts
merely tell you that something is wrong, they don't run test cases for you.

------
MojoJolo
I agree with the first answer.

I'm not a fan of unit testing. I'm just doing it because it's a requirement
(and we still have no QA).

Why am I not a fan? Because I'm doing my own tests, and they probably have my
bias all over them. I know when a test will have a successful run and I can
predict where it will fail. This is not the scenario that I want. (I know this
is not a good testing scenario)

In my opinion, a programmer must write good code, and leave the testing to
the QA.

~~~
selectnull
_In my opinion, a programmer must write good code._

I _always_ write good code, but somehow that code turns to shit after six
months, and I promise no one touched it. So, I guess I don't need the tests
when I'm writing the code but I need them badly when I maintain it.

~~~
Ygg2
It's quite the opposite for me: I write code with lots of tiny errors (most of
them off-by-one mistakes with 0- vs 1-based array sizes, or misunderstanding
how an API works), so I test as much as possible. But the true value of tests
is regression testing. I can refactor something, knowing that if I messed
something up, a test will flare up.

I still try to avoid errors, but it's nigh impossible.

------
Negitivefrags
The thing I really dislike are unit testing styles that use mock objects a
lot.

I see some people say that you should test each object in isolation, mocking
out any dependencies. That seems wrong-headed to me.

I prefer my unit tests to test everything all the way down.

The only thing I would mock is the file system, which is useful for testing
file loading code.
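
For what it's worth, mocking the file system is about the one case where the
standard tooling makes this painless. A sketch in Python using the stdlib's
unittest.mock (the load_config function and its file name are hypothetical):

```python
import json
from unittest import mock

def load_config(path):
    # Hypothetical file-loading code under test.
    with open(path) as f:
        return json.load(f)

# Mock out the file system so the test never touches the disk.
fake_file = mock.mock_open(read_data='{"debug": true}')
with mock.patch("builtins.open", fake_file):
    config = load_config("settings.json")

assert config == {"debug": True}
fake_file.assert_called_once_with("settings.json")
```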

~~~
azylman
_I prefer my unit tests to test everything all the way down._

That already exists; it's called integration testing. Unit testing is
trying to test as small a unit of code as possible - this helps you
identify exactly where the error is. If all of your tests are integration
tests and something fails, you have no idea which part of your stack the
problem is in.

Obviously different projects require different kinds and amounts of test
cases, but I prefer a mixture of both unit tests and integration tests (unit
tests for small, complex blocks of logic, integration tests for most of the
other stuff).

~~~
Silhouette
_If all of your tests are integration tests and something fails, you have no
idea which part of your stack the problem is in._

Not necessarily, because if you have a lot of integration tests then probably
multiple ones will fail given a bug in a certain module, and the _pattern_ of
the failures might be all you need to locate the source of the problem.

One of the most successful projects I ever worked on, in robustness/quality
terms, didn't really have any unit tests at all, but it had a comprehensive
suite of end-to-end test cases that could be run automatically. Many of those
didn't (and couldn't) have an absolute true/false outcome, either, but looking
at the results they generated and applying various heuristics developed from
experience, they were remarkably useful for similar reasons to unit tests.

------
fruiapps
Testing is something I do a lot, and like talking about a lot as well. There
are no hard and fast answers to testing; like other things, there are lots of
opinions about it (they are like assholes: everyone has one, and everyone
else's stinks). So here are mine (not necessarily correct, but worth putting
here). I am putting them as one-liners; let me know your thoughts.

1\. What do you mean by unit in unit tests?

An implementation of a very basic feature, probably not more than 100 lines
of code.

2\. What do you test when you write a unit test?

We test the basic correctness, e.g. given a URL: if it exists, assert 200;
if not, assert 404.

3\. What should not be tested?

Built-in libraries should not be tested. E.g. if you are using a library,
say Django, you should assume it comes pre-tested and you should not spend
time writing tests for it.

4\. What should be the depth of the tests?

There are no rules, but you should stop when you feel like you are humming
the Inception theme.
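
The 200/404 example from point 2 could be sketched as a unit test like this
(the page map and URL paths are invented; a real Django project would use its
test client instead):

```python
import unittest

# Hypothetical app under test: maps known URL paths to pages.
PAGES = {"/": "home", "/about": "about us"}

def get_status(path):
    """Return 200 for a known path, 404 otherwise."""
    return 200 if path in PAGES else 404

class UrlStatusTest(unittest.TestCase):
    def test_existing_url(self):
        self.assertEqual(get_status("/about"), 200)

    def test_missing_url(self):
        self.assertEqual(get_status("/no-such-page"), 404)

# Run the tests programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(UrlStatusTest)
unittest.TextTestRunner().run(suite)
```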

If you want to read about this in slightly more detail:
[http://www.blog.fruiapps.com/2012/09/An-intro-tutorial-to-
te...](http://www.blog.fruiapps.com/2012/09/An-intro-tutorial-to-test-driven-
development-in-python-django)

------
chris_wot
Kent Beck's response is interesting. How do you know you've written correct
code until you realise there is a bug in it? I thought unit tests were
developed for when you refactor code - at that point you run the unit tests
and if any fail, you have refactored in a bug...

------
TeMPOraL
It's partially an answer to jiggy2011's question from
<http://news.ycombinator.com/item?id=4898861>, but I think it's worth
mentioning separately.

There's a very good sentence in the SO thread - j_random_hacker's: "Every
programmer has a probability distribution of bug locations; the smart approach
is to focus your energy on testing regions where you estimate the bug
probability to be high."

So Kent Beck is not skipping some tests because he thinks he's awesome (as
jiggy2011 said it sounds like); he does that because he can anticipate, from
experience, that a _particular type_ of bug has a very low probability for
him, because he _knows_ he's unlikely to make it. For example, I tend not to
write + instead of ++ in C++, so I can be pretty confident my code is free of
that type of error. Everyone needs to estimate this probability distribution
for themselves (factoring in unknown unknowns) and test accordingly. The more
experience one has, the better the estimate.

------
clarle
A good strategy I've found was suggested by Reid Burke in response to this
answer:

<http://reidburke.com/2012/09/27/write-code-that-works/>

Your code isn't uniform, and so you shouldn't be expecting 100% code coverage
on everything either.

Instead, assign levels of stability to particular sections of your code, and
prioritize writing unit tests for the code with the highest stability/least
amount of expected future change.

------
jiggy2011
> "If I don't typically make a kind of mistake (like setting the wrong
> variables in a constructor), I don't test for it."

Hmm, that sounds like the old "You don't need to test, just be an awesome
programmer, duh!" argument.

The kind of mistakes you never anticipate making are probably the sort of
stuff you should be testing for.

The number of times I've had something fail because of a typo in a var name
on some edge case somewhere..

~~~
jiggy2011
Don't know why this has so many downvotes.

~~~
pekk
Your tests routinely fail because of _typos_?

~~~
TeMPOraL
In dynamic languages? Sure they may. The PHP script you thought worked
crashes, the Lisp routine all of a sudden enters the debugger, things like
that.
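
To illustrate with a made-up Python snippet: a typo in a variable name only
blows up when the code actually runs, so even a trivial test exercises it:

```python
def greet(name):
    # Typo: 'mame' instead of 'name'. A dynamic language accepts the
    # definition; the mistake surfaces only at call time.
    return "Hello, " + mame

# Even a trivial smoke test catches the typo as a NameError.
failed = False
try:
    greet("world")
except NameError:
    failed = True

assert failed
```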

~~~
pekk
The question isn't what "may" happen, but what routinely does happen for you.

If typos are even a significant problem in your code then the solution is to
type more slowly, have your IDE flag unknown symbols, use autocomplete, etc.
As a solution to this problem, strong typing is both disproportionate and
inadequate.

------
kurjam
Well, I use a lot of unit testing, because the idea of testing small pieces
of code works for the way I develop software at work - modified pair
programming. Instead of reviewing Holmes's code, Watson writes tests (mostly
unit ones).

In my spare time, though, I tend to be too lazy to write a lot of tests
(most of my "projects" are too small and never get released to the public
anyway)

------
davidw
I try and cover the easy 80% where it's helpful, and then specific things in
the difficult 20% on an as-needed basis, or when bugs are discovered. It's an
asymptotic curve: you want to spend the effort for the low-hanging fruit, but
not get caught up chasing after that time-eating final bit of coverage.

------
mewmoo
A lot of the comments say tests are needed for edge cases and discovered bugs.

How does this compare with TDD?

Has anyone tried both? How did it work out for you? I implemented TDD for a
project and I thought it was overkill and took a considerable amount of my
time.

~~~
fruiapps
Do it for more projects. Don't you think the first time you did anything, it
took a helluva lot of time?

------
ruggeri
Is it me, or does Bill the Lizard close every SO thread linked from HN?

