
Where unit testing fails - fafssaf
http://www.hmemcpy.com/blog/2011/06/where-unit-testing-fails/
======
peteretep
Whut?

Listen kids, testing is a tool for HELPING THE DEVELOPER, not for engaging in
"more pious than thou", my-Cucumber-is-bigger-than-yours dick-swinging
idiocy.

Testing is about giving YOU THE DEVELOPER useful and quick feedback about
whether you're on the right path and whether you've broken something, and for
warning people who come after you if they've broken something. It's not an
arcane methodology that somehow has some magical "making your code better"
side-effect...

The whole concept of "test driven development" is hocus, and I speak this as
someone who writes a lot of tests, and who charges a lot of money for fixing
test suites. Instead: developer-driven testing. Give your developers useful
tools for solving problems and supporting themselves, rather than disappearing
into some testing hell where you're doing it a certain way because you're
supposed to.

~~~
Volpe
> The whole concept of "test driven development" is hocus...

I've worked on numerous projects where TDD has proved (and continues to prove)
invaluable.

So regardless of how much experience you've got, I've had experience to the
contrary; thus the "whole concept" isn't hocus.

Why have you come to believe this?

~~~
peteretep
It's hocus when contrasted with developer-driven testing, because Test Driven
Development - as a development methodology (as opposed to a tool) - espouses
that you Write All Your Tests first.

Have I had experience (and much value) out of sometimes writing tests for
certain problem classes before writing any code? Yes. Changes to existing
functionality are often a good candidate.

Does TDD as a methodology suggest you should ALWAYS write your tests first?
Everything I've read suggests so.

And "as any fule kno" [sic], this is idiocy during a design or hacking or
greenfield phase of development: you end up allowing your tests to dictate your
code (rather than influence the design of modular code), and letting your
design be dictated by over-invasive tests.

tl;dr: Writing tests before code works pretty well in some situations. Test
Driven Development, as handed down to us mortals by Agile Testing Experts and
other assorted shills, is hocus.

~~~
Volpe
> Writing tests before code works pretty well in some situations. Test Driven
> Development, as handed down to us mortals by Agile Testing Experts and other
> assorted shills, is hocus.

What? So it works in some situations, but not all. Thus hocus?

I'm no 'Agile Testing Expert', but I have worked for the last 2-3 years doing
TDD exclusively (as in ALWAYS writing tests first). It isn't hocus. It is
effective. I can understand it doesn't suit all problems, and can potentially
be a hindrance sometimes, but that is the same for ANY methodology/practice.
I don't quite understand your hostility towards it.

What do you mean by 'developer driven testing'?

~~~
berntb
Excuse me for interrupting the budding flame war, but I have a question
regarding the subject, which you seem qualified to answer? :-)

My experience is the same as (what I believe is) the usual criticism of tests-
first.

I often need to rethink the API to my functions/methods, sometimes more than
once. This goes for both external and internal APIs, so it doesn't help with
detailed interface specifications between modules.

Test-first is a problem, since it needs a fixed interface before I'm ready to
commit to one.

I tend to instead write inside-out; a few pages of code and then write tests
for that code. That model isn't really kosher/halal/etc, according to TDD?

~~~
tychof
One benefit to TDD is that you become a user for your API much more quickly,
and so your API doesn't need to change as much.

TDD suggests three steps:

1. Write one test.
2. Write the minimal code to make it pass.
3. Refactor the code you've got.
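As a minimal sketch of that red-green-refactor cycle in plain Ruby (the `Stack` class and its tiny API are invented here for illustration, and bare assertions stand in for a test framework):

```ruby
# Red: the assertions at the bottom are written first, before Stack exists,
# and fail. Green: this is the minimal Stack that makes them pass.
# Refactor: once green, clean up the internals without touching the test.
class Stack
  def initialize
    @items = []
  end

  def push(x)
    @items.push(x)
    self
  end

  def pop
    @items.pop
  end

  def empty?
    @items.empty?
  end
end

# The "one test" from step 1:
s = Stack.new
raise "a new stack should be empty" unless s.empty?
s.push(42)
raise "pop should return the last pushed item" unless s.pop == 42
raise "stack should be empty again" unless s.empty?
```

The point is that the test exists before `Stack` does; the class is grown just enough to make it pass, then refactored.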

Try not to think of tests as immutable. You can throw away tests when they
don't provide value.

Finally, make sure you're using an IDE with refactoring support. That helps
immensely when you make an API change which will affect your tests.

~~~
berntb
OK, thanks to you and zumda.

Interesting TDD argument. I'll buy a book or something and try again when my
present hobby project is done and I have time for experimenting.

(IDE refactoring needs a stricter language than I prefer. Also, without an IDE
and using my old eyes I have 2 A4 of code + 1 A4 of bash on a 24" monitor...)

~~~
zts
If you're looking for book suggestions, I'd recommend "Growing Object-Oriented
Software, Guided By Tests" by Steve Freeman and Nat Pryce. It's pretty
pragmatic, and the treatment of the subject is pretty thought-provoking (at
least, I thought so when I read it).

The examples are all Java, but the accompanying website
(<http://www.growing-object-oriented-software.com/>) has links to
reimplementations in several other languages.

------
Luyt
The exact Sudoku URLs mentioned in the article, just checked/updated:

Ravi's article:
<http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-solvers.html>

Peter Norvig's solver: <http://norvig.com/sudoku.html>

Ron Jeffries' attempts:

<http://xprogramming.com/articles/sudokumusings/>

<http://xprogramming.com/articles/oksudoku/>

<http://xprogramming.com/articles/sudoku2>

<http://xprogramming.com/articles/sudoku4>

<http://xprogramming.com/articles/sudoku5>

And, as dessert (Ron is very frank about his failures):

<http://xprogramming.com/articles/roroncemore/>

 _"This is surely the most ignominious debacle of a project listed on my site,
even though others have also not shipped. (Sudoku did not ship and will not.
Shotgun will go forward if the Customer wants to.)"_

------
latch
There are a couple tricks to effective unit testing. Ultimately though, the
goal is to write non-brittle tests. That is, tests that won't break due to
unrelated changes. Get this wrong and the cost of maintaining your tests will
outweigh the benefits.

Achieving this, in my experience, comes down to disciplined use of mocks and
stubs. I've seen people lean too heavily on them, or not heavily enough. The
biggest problem I've seen, particularly common in the Java world due to
jMock's datedness, is over specifying expectations on mocks. Stubs which can
provide canned responses (whether through a framework or manually rolled) are
absolutely and totally the most underused yet useful tool in your testing
arsenal.

~~~
jsdalton
Can you expand a bit on the distinction you're making between mocks and stubs,
and why you believe stubs are both underused and useful?

Perhaps I'm doing it wrong, but I've found in my experience that bugs I miss
in testing increase proportionally with my use of mocks or stubs. Invariably,
when I'm forced to stub something and replicate its behavior I miss something
subtle that comes back to bite me.

~~~
latch
I think there are two problems.

First, people rely too heavily on mocks/stubs/fakes/(whatever you want to call
them). This has gotten better in the past 4 or so years (in my mind, largely
because of Rails and it being "acceptable" to hit a DB in a test - though it
might actually predate Rails). I think this problem is pretty straightforward
to understand (again, especially when you look at the Rails way to test a DB
interaction versus a more "traditional" way).

However, there are instances where mocks/fakes/stubs are important. Some
outside dependencies might not be accessible during testing, might not be
predictable/deterministic, might be too slow or might simply involve too much
setup. Relying solely on "real" implementations can also make your tests too
brittle. Why should the logic for can_legally_drink? break if the DB column is
changed from DOB to dateofbirth (OK, that's an extreme and poor example, but
you get the idea, hopefully).

Anyways, assuming you agree that sometimes a fake is simply better, you get
into mocks vs stubs. To me a stub is dumb and a mock is strict. Stubs also
automatically reply with canned answers, mocks don't. A mock is used to assert
that a certain expected call was made. A stub is used just to get your test to
move over a line of code.

The problem is that people use mocks over and over again, re-specifying the
same expectations in every test... making X tests break when you change the
behavior of the interaction, versus just one. As semi-pseudocode (and yes, it
might be better to just hit the DB in this case):

    
    
      def self.login_user(name, password)
        user = Store.get_user_by_name(name)
        return user.nil? || user.password != password ? nil : user
      end
    
    

To me, you kinda wanna check that the above code properly interacts with the
Store. This is when you use a mock:

    
    
      it "gets the user from the store" do
        Store.should_receive(:get_user_by_name).with('leto')
        User.login_user('leto', 'ghanima')
      end
    

You also want to make sure the password matching works. This is where people
go wrong. They'll use a mock (strict) again, which'll just repeat the above
code... except this test really has nothing to do with how the Store behaves;
we just want to get over the line of code:

    
    
      it "returns nil if the passwords don't match" do
        Store.stub!(:get_user_by_name).and_return(User.new)
        User.login_user('leto', 'ghanima').should be_nil
      end
    

If you are familiar with jMock, it's kinda the difference between allowing and
oneOf. oneOf is very strict, allowing isn't. You really should use allowing
whenever you aren't explicitly testing the interaction (and you shouldn't
explicitly test the interaction in more than 1 test (dedicated to testing said
interaction)).

Some frameworks even return smart canned answers. So instead of returning nil,
they'll default to a new instance of the type, or an empty array, or some
default scalar value (like an empty string). This is particularly useful when
you just want to get past a null reference (which is exceedingly common).
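A hand-rolled sketch of that idea in Ruby (`NiceStub` is a name invented here, not a class from any real framework): any method you didn't explicitly can answers with a benign default instead of raising, so each test only specifies the calls it actually cares about.

```ruby
# A "nice" stub: explicitly canned methods return their canned value;
# any other call quietly returns nil rather than failing the test.
class NiceStub
  def initialize(canned = {})
    @canned = canned
  end

  def method_missing(name, *args)
    @canned.fetch(name) { nil }
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end
end

store = NiceStub.new(get_user_by_name: nil)
store.get_user_by_name('leto')  # canned answer: nil
store.log_access('leto')        # unspecified call: benign default, no expectation needed
```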

Does that make sense?

~~~
astral303
I've had similar experience with strict mocks. However, mocks don't need to be
strict. In Java, for example, the current mocking tool of choice is Mockito,
which returns "nice" mocks that return nulls, empty collections and such.

I can't seem to find a blog post discussing the merits, but IME strict mocking
leads to a lot of tests breaking just because you added one more piece of
functionality to a method. This forces you to edit all sorts of existing tests
to make them ignore a call to a collaborator. While you can extract common
expectations and such to eliminate duplication, you still end up with many
tests failing due to one small change in behavior.

Normally your tests tend to assert one or two behaviors per test. With strict
mocking, your tests become awkwardly cluttered by setting up
expectations/interactions for things that are unrelated to what they are
asserting.

With nice mocking, it's much easier to get away with only the
expectations/interactions that are directly relevant to what a single test is
asserting. This also means that you get more fine-grained failures, which
speeds up diagnostics.

------
paganel
Maybe I'm in a mad mood or something (it's Monday, after all), but all this
"let's write a parallel world, called TDD, to which the actual production code
has to comply and praise and give sacrifices" mantra has gotten a little over
the top.

Don't get me wrong, I'm sure that all these guys and chicks that live and die
by TDD are smart (probably smarter than me), but what they are building is
starting to get more and more out of touch with real-world requirements (I
wanted to say something about how this resembles a schizophrenic world, but for
the moment I'll stop short of that).

My father was a civil engineer, at a time when they didn't have AutoCAD
available and all that fancy computer stuff. As he elegantly put it: "if I
mess things up in my work people will die, either in 2 weeks or in 20 years,
when the next big earthquake will hit us". Well, I never saw him "testing"
building apartment-blocks, or industrial buildings, or roads, he made sure
that what was on the sheet of papers where the construction plans were drawn
would get built as accurate as possible in real life. And keep in mind that he
was managing construction workers, many of them close to illiterate, even
former convicts, and not CS-graduates. That's the job of all civil engineers
from all over the world. So, if they can do it, why in the name of God do we
programmers write tests for mundane stuff like sudoku-solvers? Something is
not right.

~~~
peteretep
Civil Engineers these days (or this is my understanding, anyway) DO use
AutoCAD and 'all the fancy computer stuff' because testing allows you to code
more productively, when you do it right. That's why engineers these days do
automated simulations...

At the point where you're writing tests to impress other programmers (see
article) about how much of a 'craftsman' you are, you lose. When you're
writing tests that help you as a developer, and help get the software out more
quickly and with higher quality, then you're doing it right.

------
perlgeek
OK, TDD didn't help with designing an algorithm for Lychrel numbers - because
you just can't get around implementing the loop that tests whether a number is
a Lychrel number, and that loop is very simple.
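For reference, that loop really is short. A sketch in Ruby (the iteration cap is an assumption; no finite loop can prove a number is Lychrel, only that it doesn't reach a palindrome within the cap):

```ruby
# Reverse-and-add until a palindrome appears; if none shows up within
# max_iter steps, treat n as a Lychrel candidate (196 is the famous one).
def lychrel_candidate?(n, max_iter = 50)
  max_iter.times do
    n += n.to_s.reverse.to_i
    return false if n.to_s == n.to_s.reverse
  end
  true
end

lychrel_candidate?(47)   # => false (47 + 74 = 121, a palindrome)
lychrel_candidate?(196)  # => true (no palindrome within the cap)
```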

But to me that's not the whole purpose of unit tests - the most important part
for me is that they tell you very quickly when you have bugs in your
implementation. Somehow the author neglects to address this very central
point.

------
nickik
First write the high-level test. Then start implementing, and when you write a
helper function, write a little test for that function too.

Don't just write all the tests for every little function at the beginning -
that's stupid. How do you even know what helper functions you'll need to
implement?

------
damoncali
Here's my trouble with testing as espoused by proponents of TDD: I bought into
the Agile Manifesto. I believe in working code over specifications. How then,
am I supposed to write a bunch of tests that will in turn tell me what code to
write?

It is not a coincidence that one of the reasons TDD'ers love rspec so much is
that the "code reads like documentation" - it's even called r _spec_.

Perhaps that is ok for mature projects. But I often hear TDD and agile used
together as if they weren't in conflict and I get confused. Then again, I've
only ever worked on new projects.

Maybe I'm missing something?

------
chrislloyd
I can't help but appreciate the irony of a link titled "Where unit testing
fails" leading to a WordPress database connection error.

~~~
rimantas
So you think unit tests would help with that?

~~~
lhnz
That's the point. Most unit tests wouldn't stop this kind of problem. ;)

------
fafssaf
What are some cases that you would and wouldn't use unit tests? Igal is doing
a webinar <https://www2.gotomeeting.com/register/545851563>

------
augustl
I like to think of it this way:

When writing code it is imperative to get great feedback, as often as
possible. In a number of cases, TDD helps you do that. In cases where it
doesn't, well, it doesn't, so don't use it.

------
colin_jack
I haven't had time to read them all yet so I was wondering if someone could
give me a summary of why the TDD based sudoku approach failed?

~~~
eru
The guy's approach to the problem wasn't smart enough.

Testing doesn't help you come up with good algorithms. It can help you with
not messing up an implementation, though.

