
When TDD Fails - gambler
http://bitroar.posterous.com/when-tdd-fails
======
ctide
I got as far as: _Oh, and good luck mocking your database and HTTP request
objects._ and just stopped. If you don't have any experience with TDD, then
don't write about it. Mocking HTTP request objects is incredibly trivial and
tools are written in every language to do just that.

I don't know why these articles keep getting written. No one's forcing you to
write your tests first, and if you can't wrap your head around the benefits
(which the author clearly can't) then don't do it. It's fine. I'll still
continue to utilize TDD, and continue to pump out significantly more code
(with significantly fewer bugs) than the code I produced without TDD. Yes,
there's definitely code I write where I don't write the tests first. It
happens; sometimes it's just easier to bang something out because you aren't
sure what you're building. That doesn't mean TDD is a failure; it's a tool,
like anything else. Use it where it's appropriate, because the benefits are
massive.

~~~
pbz
You should have continued. His point is that while TDD is good for some
things, it misses huge areas and has very little ROI in most simplistic cases.

Yes, you can mock HTTP requests, but there are numerous bugs you wouldn't
catch unless you went and hit your application with an actual browser.
Similarly, you can mock the database, but you're not testing all the "magic"
the database does and all the myriad ways it could fail.

~~~
ctide
I have no intention of ever mocking the database, and I mock HTTP requests
solely so I can work on my app without an internet connection. Look, a better
approach is to use TDD for things it makes sense for. Spending a ton of time
to mock out your database does nothing for you. Is MySQL going to fail for
you? Is the MySQL driver that a million people are using going to fail for
you? No? So, don't mock it out.

~~~
phzbOx
You don't mock the drivers... you abstract the database layer so you can test
against particular data. For instance, you make getName() return an empty
string, a string of 1000 wide characters, strings with unicode, etc. It's not
about MySQL; it's about what MySQL returns.
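
In Python, that might look something like the sketch below (the repo object
and greeting_banner function are invented for illustration):

    from unittest.mock import Mock

    # Hypothetical code under test: formats whatever name the data layer returns.
    def greeting_banner(repo, user_id):
        name = repo.get_name(user_id)
        return ("Hello, %s!" % name).upper()

    # Stub the abstracted data layer and feed it edge-case values.
    repo = Mock()
    for edge_case in ["", "x" * 1000, "n\u00e4me with unicode"]:
        repo.get_name.return_value = edge_case
        print(greeting_banner(repo, 42))  # does the code survive each value?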

~~~
ctide
That's done via fixtures, not mocking your database, and is significantly more
trivial than HTTP request mocking.
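
A minimal sketch of the fixture idea in Python (table and rows made up):
known data is loaded into a real test database, so the test exercises genuine
SQL instead of a mock:

    import sqlite3

    # The fixture: known rows seeded into a real (here in-memory) database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, ""), (2, "x" * 1000), (3, "Jos\u00e9")])

    # The test queries through the real database path.
    row = conn.execute("SELECT name FROM users WHERE id = ?", (2,)).fetchone()
    assert len(row[0]) == 1000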

~~~
pbz
Here's a trivial example:

Let's say you have a class that writes a string to the database. You abstract
out the actual writing of this string to the database, thinking it's always
going to work; it's just dumping data, after all. Your test passes, everything
is good.

Now you optimize your database and add a restriction to make the string at
most 50 characters. Your test still passes, but you now have a bug. OK, so you
should've had a restriction in your BO. You add that restriction and move on.

Your DBA comes along and adds an integrity check or a trigger that makes the
insertion fail if some weird condition is met. Your test passes, but you have
a bug.

This can get even more interesting when you hit some basic database rule that
you didn't even know existed. You assume the insertion will work, but it
won't. You now have a bug.

You've tested that 2 + 2 = 4, and it works, but when that code is linked with
the actual database you realize that in some cases 2 + 2 doesn't equal 4.
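
A minimal Python sketch of this failure mode, assuming a made-up notes table
and using SQLite's CHECK constraint to stand in for the DBA's restriction:

    import sqlite3
    from unittest.mock import Mock

    # Hypothetical write path: just dumps a string into the database.
    def save_note(db, text):
        db.execute("INSERT INTO notes (body) VALUES (?)", (text,))

    # With the database abstracted away, the test passes unconditionally.
    save_note(Mock(), "y" * 60)

    # Against the real schema, the same call is a bug the mock never saw.
    real = sqlite3.connect(":memory:")
    real.execute("CREATE TABLE notes (body TEXT CHECK (length(body) <= 50))")
    try:
        save_note(real, "y" * 60)
    except sqlite3.IntegrityError as e:
        print("caught only with the real database:", e)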

I'm not saying don't have unit tests, but when it comes to bugs, the vast
majority of them, in my experience, come from the glue, from the
assumption that the piece you're integrating with should work one way, but
reality begs to differ.

~~~
mojo85
Well, this _could_ be solved with integration or acceptance tests.

Anyhow, TDD doesn't promise you bug-free code.

------
blackhole
I'm currently building a test framework for my graphics engine, and have run
into the exact problems described in this post. A graphics engine, by design,
must be able to do millions of different things that are created by the
interaction between relatively few methods, none of which can go wrong at any
time. One of my new features is probably buggy, but attempting to brute-force
test border conditions requires tens of thousands of tests because of all the
interacting elements, any two of which could interact in just the wrong way to
blow something up. It gets even more ridiculous when you have precision bugs,
where only certain numbers in certain ranges in certain cases will explode.
Testing that using inputs is impossible.

This occurs so often and with such regularity that I am now convinced that
everything I write is riddled with bugs that I will probably never find
without beta-testing in real-world scenarios, plus many more bugs I will
simply never find because they never come up.

A much better design would be a test platform that analyzes the assembly
structure to pinpoint actual edge cases in the code itself, which could then
be used as a guide for finding bugs instead of relying on hundreds of
thousands of test cases.

~~~
gambler
Depending on the language you're using, there might be some tools to help you.

For example, MS recently released a library for Design by Contract (aka
Contract Programming) in .NET 4.0. It allows you to specify constraints on
your methods, such as things you expect to be true before the method is
called, after, and between _all_ method calls in a class. The library is
capable of static verification, but it's partly an experimental feature and
you need to pay for an expensive version of VS. (Runtime checking is available
for free.)

But, here is the cool part: MS also released an automatic test generator
called PEX. It can do exactly what you've described - go into your code and
automatically find edge cases, and generate tests that cover each of them. And
it's free.

So, you can write contracts, run PEX, and if something goes wrong, you will
see which inputs generated exceptions.

D also has DbC functionality. I don't think it runs any static verification on
it yet, but you can use it to detect abnormalities during functional testing.

~~~
xentronium
Genuine question: doesn't such a thing as "design by contract" make for the
same bloat as checked exceptions in Java?

~~~
gambler
DbC doesn't need to be dealt with at every level of the call chain if that's
what you mean.

It may seem a bit verbose, but it usually expresses logic that would be in
your program or in your tests anyway. Difference is, you will be doing it in a
pretty terse and declarative manner. IMO, DbC is one of the coolest features
of .NET 4.0.
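
To give a feel for how terse it can be, here is a hand-rolled runtime-checking
sketch of the idea in Python - not the .NET library, just invented decorators
for illustration:

    import functools

    def requires(predicate, message):
        """Precondition: must hold for the arguments before the call."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                assert predicate(*args, **kwargs), "requires: " + message
                return fn(*args, **kwargs)
            return wrapper
        return decorator

    def ensures(predicate, message):
        """Postcondition: must hold for the return value."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                result = fn(*args, **kwargs)
                assert predicate(result), "ensures: " + message
                return result
            return wrapper
        return decorator

    @requires(lambda balance, amount: amount > 0, "amount must be positive")
    @ensures(lambda new_balance: new_balance >= 0, "balance never goes negative")
    def withdraw(balance, amount):
        return balance - amount  # note: no overdraft check...

    withdraw(100, 30)  # fine
    withdraw(10, 30)   # the postcondition trips, exposing the missing check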

------
LeafStorm
While testing code is important (and I have tried to do better at writing
proper tests for my code), the reason I have not adopted TDD is that when I
write a test and it fails, about half the time it's an actual bug in
the code, and half the time it's a bug in the test. I view testing as more
akin to going back and checking your work after solving a math problem - not
as the definition of what your code is supposed to do, but as a verification
that you did it right (and that it continues to work right in the future).

~~~
wnight
> when I write a test and it fails, about half the time it's an actual bug in
> the code, and half the time it's a bug in the test.

Well yeah... You'd expect otherwise?

Do you mean when the test fails that it's some sort of not-bug like you
changed your mind about what the method should do since writing the test?

------
LVB
I generally agree with the final sentence: "If you choose a methodology
without comparing it against alternatives, if you claim that it works all the
time without evaluating the results, than [sic] you're not doing engineering -
you're practicing a religion."

I've definitely seen some negative effects when a team is forced to create a
huge volume of low-level tests because they perceive that to be the only
acceptable solution. They get bored with the work, and worse, think less about
the larger integration issues.

I'm not arguing that TDD fails, but you'd better monitor the efficacy of
whatever testing regimen you employ lest you suffer process and quality rot.

P.S. Dude needs to make friends with a spell checker... wow.

------
yason
IMHO, good testing is _hard_. I think that something that's probably as hard
as writing programs in the first place shouldn't be commoditized into a
methodology. I don't particularly dislike TDD, but I certainly don't like it
either.

The best phase to write tests is when you've _locked down_ a part of your
program. A part such as one distinct submodule or function or the _sort.c_ or
_nodegraph.c_ of your latest project--a part that is relatively orthogonal to
the rest of your code. That sort of ensures that the basic blocks, once
finished, won't fail surprisingly. However, this can only be applied to basic
building blocks.

Testing the bigger parts of your non-trivial medium-size program is likely to
be so hard and complex that you have no chance of planning the testing
beforehand. I think that a good programmer or tester can come up with a
relatively comprehensive test suite that triggers execution paths up to 80-90%
code coverage, given a sufficiently finished program, i.e. one whose structure
has mostly stabilized. A good programmer can also make changes to the same
program without decreasing the quality of the code. Bad programmers are no
more able to write good tests than they are to modify the code itself without
letting entropy into the driver's seat.

~~~
jamieb
_IMHO, good testing is hard. I think that something that's probably as hard as
writing programs in the first place shouldn't be commoditized into a
methodology. I don't particularly dislike TDD, but I certainly don't like it
either._

Completely agree.

 _The best phase to write tests is when you've locked down a part of your
program._

Completely disagree.

~~~
RodgerTheGreat
To elaborate, writing tests can very quickly make you aware of shortcomings or
clumsy aspects of your APIs, since it should be the first time they're
actually used.

------
fizx
The author argues that TDD fails in code that's largely wiring. I think the
opposite is true. I'm writing an application that's largely wiring, and for
the first time in a long time, using TDD. It's refreshing.

Most of my tests are two-liners, like:

    
    
        it "should do x when y" do
            obj_under_test.should_receive(:consequence_method).with(some_args)
            obj_under_test.do_something
        end
    

There are a number of reasons why this has helped:

1. It ensures I'm using dependency injection, etc., to write testable,
well-factored code. There's a huge correlation between testability and
maintainability.

2. I don't have to boot the thing all the time to confirm that I didn't mess
up in some obvious way. Covering the code paths prevents typos.

3. My test suite runs in under three seconds. I can sanity-check what I'm
doing without being tempted to browse HN/reddit/twitter/etc.

I like TDD _more_ in wiring-only code. If I'm writing wiring, I know what my
test case will look like ahead of time, and I'll write it first. If I'm
writing experimental algorithms, I have no idea what will happen, and I'd
rather write code and poke it.

~~~
shadowfiend
I know I'm pulling out raganwald's argument from one of his posts, but I'm
honestly curious: is there anything other than anecdotal evidence that
“there's huge correlation between testability and maintainability”? In
particular, here we're talking about _unit_ testability. I don't agree or
disagree—I've done both TDD and not, but I'm still not sure I've created more
maintainable code when testing than when not.

~~~
phzbOx
Maintainability is not just about you changing your code later on; it's about
someone else trying to understand what you were thinking when you wrote it...
and then change it. Tests make that process way easier. You can add your new
feature _without wasting time understanding everything else_ and still make
sure it's working. If it's your own project, meh, you know your stuff. You
even know all the hacks you did to save time. I see tests as documentation
that shows me what's working. Reading tests (high-level ones) is usually the
first thing I do when starting on a new project. Comments change over time...
documents aren't updated... people leave... but tests remain. If the test
passes, it doesn't mean everything is perfect, but at least you know that
_these_ things work.

~~~
gambler
Please note that the grandparent post specifically asks about unit testing and
unit testability, while you speak in much broader terms.

Automated regression tests can be created in many ways, including UI-level
tests that don't require any changes in coding practices at all.

------
Corrado
I've actually come to this conclusion on several of my Rails projects. After
struggling with mocking up yet another set of I/O objects I realized that it's
not really doing anything useful. I agree that there should be testing; it's
just that TDD on MVC is very difficult to do properly. :/

------
plinkplonk
The real "problems" with TDD are

(a) in the "driven" part, not the "test" part. Tests are (in general) a good
thing. However, using a series of tests to _drive your design_ (aka "TDD is
not a testing method, it is a design method" idea) often gives you an illusion
of progress as you hill climb using conformance to an increasing number of
tests as a progress heuristic and end up on top of a local maximum (as for
example in the TDD sudoku episode).

(b) in conflating TDD with one or more of (1) testing, (2) automated testing,
(3) automated regression test suites, (4) developers adding more tests to the
automated regression test suite as they develop more features, refactor, debug
etc.

You can have (1) to (4) without either (5) writing tests _first_ (aka "don't
write a line of code without having written a test covering it") or (6) driving
your design with tests. The last two ideas are the real distinguishing
features of TDD and are of debatable merit. None of (1) through (4) are novel
ideas. (5) and (6) are where differences of opinion happen.

Even if you choose to use TDD, it is good to be aware it is just _one_ tool in
your toolbox and not necessarily the default tool to reach for.

(c) in the zealotry of some of its evangelists who insist that TDD is some
kind of moral imperative and is the only "correct" way of developing software
and anyone who doesn't follow that path or make respectful obeisance to it is
"unprofessional","dodgy" etc. This is often accompanied by conflating TDD with
more generic notions like "automated tests" etc as above.

For example, Rich Hickey, the author of Clojure, said recently at the Strange
Loop conference "We say, “I can make a change because I have tests.” Who does
that? Who drives their car around banging into the guard rails!?"

(and that is _all_ he said. One sentence in a keynote presentation)

For this Hickey was taken to task by a TDD advocate, Brian Marick, for not
being "respectful" enough to TDD and for his "tone" in daring to mildly
criticize it as a development practice. After some tweets complaining about
Rich Hickey's tone driving away people from Clojure etc he wrote

<http://www.exampler.com/blog/2011/09/29/my-clojure-problem>

"The dodgy attitudes come from the Clojure core, especially Rich Hickey
himself, I’m sad to say."

This kind of repeated whining and harassment over a few days made the normally
unflappable Hickey (who asked for references to his "disrespect" etc, to the
sound of crickets) lose his temper and say (on his twitter stream)

"If people get offended when their tools/methods are criticized as being
inadequate for certain purposes, they need to relax.",

and "Testing is not a strawman. It's an activity, it has benefits, but is not
a panacea. No 'community' should be threatened by that fact"

and later "Accusing people who merely disagree with you of being snarky,
intolerant, dismissive etc is both wrong and destructive."

and much later after being subjected to a barrage of tweets criticizing his
tone and 'lack of respect' for TDD, "If launching an ad hominem attack is the
product of a lot of thought, it _is_ time for you to move on. Good riddance."

postscript: the best criticism of TDD I've seen is at
<http://www.dalkescientific.com/writings/diary/archive/2009/12/29/problems_with_tdd.html>.
The responses at <http://dalkescientific.blogspot.com/2009/12/problems-with-tdd.html>
are (mildly) interesting as well.

~~~
JoachimSchipper
This comment should not be [dead] (if you don't want to give/dock me karma for
irahul's comment, down-/upvote my reply to this comment):

irahul 7 hours ago | link [dead]

> For example, Rich Hickey, the author of Clojure, said recently at the
> Strange Loop conference "We say, “I can make a change because I have tests.”
> Who does that? Who drives their car around banging into the guard rails!?"

Rich has spoken about it other time with an interview with Fogus:

<http://www.codequarterly.com/2011/rich-hickey/>

Hickey: I never spoke out ‘against’ TDD. What I have said is, life is short
and there are only a finite number of hours in a day. So, we have to make
choices about how we spend our time. If we spend it writing tests, that is
time we are not spending doing something else. Each of us needs to assess how
best to spend our time in order to maximize our results, both in quantity and
quality. If people think that spending fifty percent of their time writing
tests maximizes their results—okay for them. I’m sure that’s not true for
me—I’d rather spend that time thinking about my problem. I’m certain that, for
me, this produces better solutions, with fewer defects, than any other use of
my time. A bad design with a complete test suite is still a bad design.

He said something on the similar lines about development on CLR:

Fogus: Clojure was once in parallel development on both the JVM and the CLR,
why did you eventually decide to focus in on the former?

Hickey: I got tired of doing everything twice, and wanted instead to do twice
as much.

His explanation on both fronts boils down to this: he doesn't find it (TDD and
CLR/JVM parallel development) a worthy investment of time, given there are
only so many hours in a day.

I don't understand why TDD advocates get all worked up when someone says TDD
doesn't work for them. Well, if TDD is the silver bullet of software
development, they should be delighted that the ignorant singletons fail to see
it and that they have an edge over the fools.

These reactions remind me of this:

“You are never dedicated to something you have complete confidence in. (No one
is fanatically shouting that the sun is going to rise tomorrow. They know it's
going to rise tomorrow.) When people are fanatically dedicated to political or
religious faiths or any other kinds of dogmas or goals, it's always because
these dogmas or goals are in doubt.” ― Robert M. Pirsig, Zen and the Art of
Motorcycle Maintenance: An Inquiry Into Values

~~~
DrJokepu
It looks like irahul was hellbanned yesterday for the following comment, which
is why his comments show up as dead:

<http://news.ycombinator.com/item?id=3059454>

------
recursive
No one at my workplace, including me, understands unit testing or TDD. I was
recently asked to add a test suite to a service I wrote that is basically a
simple wrapper that returns the result of a stored procedure call. The only
test I could think of was to call it with all null parameters, in which case
there should be no output. Other than that, the results depend on the state of
the database. I'm familiar with the concept of dependency injection, but I
couldn't add it to this very simple service in good conscience, since I knew
that adding the necessary complexity would only increase the likelihood of a
defect.

~~~
petercooper
As shadowfiend notes, fixtures are a common approach here.

However, you could also use a mock. What you're _really_ testing here isn't
the connection to the database or even that the database contains certain
data, so you can rule that out of the equation. What you're testing is that
some service (let's just say a 'method' for simplicity's sake) turns an input
into an output in some particular way.

What you do, then, is mock the database connection for a particular test case
so it returns a guaranteed result to whatever's doing the request in your
method. You can then test that the method converts that input into the correct
result. You've now unit tested the method rather than the entire service (in a
nutshell - it can be more complex than that).
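
A minimal Python sketch of that, with invented names (call_proc stands in for
however the service invokes the stored procedure):

    from unittest.mock import Mock

    # Hypothetical wrapper under test: reshapes whatever the procedure returns.
    def order_summary(conn, customer_id):
        rows = conn.call_proc("get_orders", customer_id)
        return {"count": len(rows), "total": sum(r["amount"] for r in rows)}

    # Mock the connection so the procedure returns a guaranteed result...
    conn = Mock()
    conn.call_proc.return_value = [{"amount": 10}, {"amount": 5}]

    # ...then test only the method's transformation of that result.
    assert order_summary(conn, 1) == {"count": 2, "total": 15}
    conn.call_proc.assert_called_once_with("get_orders", 1)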

~~~
j-kidd
I don't get it. Why would one mock the database connection when the thing to
be tested here is a stored procedure?

~~~
petercooper
It's hard to tell without asking the OP for specifics, but..

"I was recently asked to add a test suite to a service I wrote that is
basically a simple wrapper that returns the result of a stored procedure
call."

I interpreted this as meaning that the unit test would be for the 'service'
and whatever it does with the stored procedure's result (and the arguments
passed in the first place) rather than on the stored procedure itself. The
reason for this interpretation was how he considered passing null values in
order to exact a null response to be an acceptable test. Such a 'wrapper'
might be a presenter system, or might simply convert data from the result to a
different form; these things could be unit tested by mocking what the database
returns.

If, however, he meant that the operation of the _stored procedure_ was to be
tested, then my previous post was moot.

~~~
recursive
That's a great question. It wasn't specified to me either, so I don't even
know. But even verifying that the stored procedure actually gets called is
problematic. In practice, one of the problems that actually occurred with the
service is that the service host somehow lost permission to exec the stored
procedure. That's a server admin issue.

In my experience it feels like that case is pretty representative of most
system defects, in that the majority of them seem to fall outside the space of
defects that are feasible to test. I've always assumed I'm doing something
wrong, given how many supporters unit testing has.

------
mojo85
The author is complaining that when requirements change, his tests need to be
updated/rewritten.

Is he joking? I mean, if the updated requirement changes the behavior of the
code, then the test had better freaking fail and require updating; otherwise
the test (if it even exists) is terrible.

------
jrockway
I very rarely use TDD, but I am a fan of it. First off, absolutes like "always
write tests" are for people that are bad at programming but are still employed
as programmers and can't be fired. They haven't developed the judgement for
when to write a test or when not to, so in the interest of getting some
reasonably useful test suite, you say "you must write a test for everything".

Secondly, I don't really agree that these action methods are untestable. Sure,
"print hello world" is untestable, because it's so simple that you're not
going to fuck it up, and because there's only one execution path that can
possibly occur. But most methods are not like this; they need to reject
invalid data or state, they need to craft database queries, and so on. In that
case, you very well _can_ write good tests for this sort of thing.

Say I have some code that needs to accept an HTTP request that has a "foo"
parameter:

    
    
        def do_something(self, context, foo=None):
            if foo is None:
                raise context.UserError( 400, 'You must supply foo.' )
    
            context.get_model('Something').apply_foo_to( user=context.current_user(), foo=foo)
            return context.get_view('Something').render()
    

This is easy and valuable to test:

    
    
        container = DependencyInjectionThingie()
        context = container.get_fake_context( current_user='jrockway' )
        controller = container.get_fake_controller( DoSomethingController )
        something  = container.get_fake_model( Something )
    
        # ensure that empty foo is rejected
        raises_ok( lambda: controller.do_something( context ), UserError )
    
        # ensure that something is mutated correctly
        controller.do_something( context, foo='OH HAI' )
        compare( something.applied_foo, '==', ('jrockway', 'OH HAI') )
    

(Who would have predicted the day where I started writing my HN examples in
Python!)

In just a few lines of code, we get a little bit of extra security around our
do_something action. We are sure that a UserError is thrown when foo is not
provided (which our imaginary framework turns into a friendly error message
for the person at the other end of the HTTP connection), and we're sure that
the model is mutated correctly when foo is valid. In three lines of code.

I find that people that have the hardest time writing tests have poorly-
architected applications that don't lend themselves to easy testing. The key
point to remember is: if you don't pass something to an instance's constructor
or to a method, don't touch it. Then everything is easy to test, because you
can isolate your function (or class) from the rest of the application, and
then test only what that function is supposed to do. (In this case, the fact
that a UserError exception becomes an error screen is something you test in
your framework's tests. Same goes for the fact that view.render() renders a
web page; test that in your view tests.)

This style of development is also good for more than just testing. A few
months ago, I wrote an application that monitored a network service. Not
wanting to rewrite the service or mock it, I pointed my tests against a dev
instance of this service. Everything was great until the service blew up on a
Friday night and nobody was around to fix it. Faced with not being able to
write any more code until Monday morning, I knew I had to fake that service
somehow. 20 minutes later, I had a class with the same API as my "connect to
that service" class. I changed one line of code in my dependency injection
container (to create an instance of my connection-to-fake-in-memory-server
instead of connection-to-networked-dev-server), and then I was back in
business. That's the beauty of writing code to be flexible: you don't have to
get everything right on the first day.

(People will argue that tests should never depend on external services,
because they can blow up and then you're fucked. Yes they can, and yes you
are! But while I didn't do everything right on the first day, my design
allowed me to recover from this mistake without any code changes. And now I
just run the test suite twice before a release; once against the fake
connection and once against the real server, just to make sure that whatever
assumptions I made in the mock server also hold when connected to a real
server. I like releasing code that I know works in real life in addition to my
fantasy mock world, but that's just me, I guess.)

Edit: and oh yeah, it's easy to mock databases and HTTP requests. We've seen
the second one already; you let your framework translate between HTTP and
method calls, and you write the tests for that when you write your framework.
This frees up your application developers to Not Care about that sort of
thing, allowing them to write great tests with minimal effort. The first one
is also easy. You write code like:

    
    
        class UserModel(Model):
            def __init__(self, database_schema):
                self.database_schema = database_schema  # the injected dependency

            def last_login(self, user):
                # build the query through the ORM; return the most recent time
                return self.database_schema.get_resultset('User', user=user) \
                    .related('LoginTime').order_by('time DESC') \
                    .get_column('time').first()
    

Then when you're testing your controller, you pass in a fake UserModel that
just defines last_login as something like:

    
    
        class FakeUserModel(Model):
            def __init__(self, database_schema): pass # don't care
            @memoize
            def last_login(self, user): return datetime.now()
    

The code to ensure that last_login generates the right sequence of operations
on your ORM is somewhere else. The test that your ORM generates the right SQL
AST is somewhere else. And the test that tests that ASTs are converted to
correct SQLite SQL is somewhere else. You already wrote and tested that code.
Assume it works!!!

Yes, sometimes you will write a few end-to-end tests to ensure that when an
HTTP request that looks like foo arrives on the socket, you write a HTTP
response that looks like bar to that socket and your database's baz table now
contains a record for gorch. But that's not how you test every single tiny
thing your application does; it takes too long, it's hard to get right, and it
buys you nearly nothing.

So I guess I add: testing is hard if you write your tests wrong.

~~~
irahul
> I find that people that have the hardest time writing tests have poorly-
> architected applications that don't lend themselves to easy testing.

It's generally good to start with some framework which provides capabilities
to easily mock most of the objects. You sure can architect your application
that way and have your dependency_injection_thingie mock objects, but I don't
think it's a worthy investment of time.

> (Who would have predicted the day where I started writing my HN examples in
> Python!)

So why is that? Working on a Python application? While you are there, check
out decorators and co-routines/generators. You already have checked out
decorators (which are basically function composition - same in Perl other than
the syntactic sugar) - I see your @memoize example.

Decorators along with Python introspection are super cool. I used decorators
to implement a small contract system -
<https://github.com/thoughtnirvana/augment>

I recommend this talk on co-routines <http://www.dabeaz.com/coroutines/>

EDIT: Perl has coro. But the language integration (generator expressions,
convenient yield) makes it a bit more natural in Python (YMMV). And Python has
gevent if you are looking for a threading equivalent.

------
jasonwatkinspdx
Unit testing is not the only form of testing.

~~~
shadowfiend
Which, of course, is part of the point of the article (“I'm not saying TDD is
a bad thing, but there are more tests than unit tests, and there are more ways
to verify software than testing.”)

~~~
petercooper
I think there's some confusion in the definitions which makes things a bit
muddier.

TDD is traditionally based around unit tests, but nowadays can pragmatically
include integration and functional testing practices. The former definition
seems to be the one accepted by the article, but the latter definition
somewhat solves most of the problems raised.

~~~
shadowfiend
That's absolutely true. Are there any good articles/posts out there on doing
test-first with something bigger like integration tests?

~~~
petercooper
I don't have any mindblowing articles to hand but this might be a start:
<http://programmers.stackexchange.com/questions/99735/tdd-is-it-just-about-unit-tests>

More specifically, the London school of TDD encourages thinking about things
from an integration testing level, although you quickly progress to doing unit
testing with a ton'o'mocks to flesh out the missing parts.

------
jarin
I don't know, it's not that hard.

Write acceptance tests (with Cucumber or something), use a good unit test
helper like Shoulda for your standard unit tests, and write unit tests for any
complicated methods.

The acceptance tests will cover pretty much everything, and the unit tests
will cover anything that's hard to test from a high-level point of view.

------
todojunkie
This is a difficult topic. I've found that adhering to TDD is just not
realistic in some cases.

For example, I'm following the Steve Blank approach of Customer Discovery,
building my MVP. If I take the TDD approach (which takes more time to code a
finished piece of functionality), and successfully iterate enough times during
the Customer Discovery process, I throw out all of the tests that I built and
am now moving onto something different that I've discovered a customer is
actually willing to pay for.

------
ocivelek
IMHO, there are two points here:

1. Unit testing is not TDD.
2. TDD has nothing to do with software testing.

TDD is good for developing rapidly changing codebases on short development
cycles, with known requirements and harsh time pressure, but it's front-loaded
as well.

By which I mean, you have to invest some hours beforehand to make sure you're
not shipping crap because you didn't have time to see what would break, or to
make sure you didn't skip the controller your GUI developer needed.

TDD tries to ensure one thing only: "Awareness". You accept the initial costs
of TDD if awareness is a big issue for you. Otherwise, use something else that
solves it for you.

If you're aware of what you will be implementing and if there's a mechanism
you can check your code against, it'd be efficient, right? TDD uses unit tests
for documenting that "Awareness", aiming to utilize the benefits of unit
testing as well.

You're right that it is not feasible to test every aspect of your solution via
unit tests; that's why there are integration tests, system tests and
acceptance tests. So if you're planning to find defects via the unit tests
you're writing for your TDD cycle, think again. Unit tests are good for
checking "completeness" and a great tool for guarding against regressions.
Nothing more and nothing less.

As a good engineering practice, we adapt our method of working based on how we
planned to work. If we find TDD fruitful, we avoid putting any logic in
controllers. We implement our business logic behind the controller, which also
keeps it clean of implementation-specific crap. Besides, using a completely
detached GUI layer helps us write controller unit tests that run via HTTP.
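
A minimal sketch of that split in Python, with invented names - the business
logic sits behind a wiring-only controller, so the logic is what gets unit
tested:

    # Business logic: no framework types, trivially unit-testable.
    class SignupService:
        def register(self, email):
            if "@" not in email:
                raise ValueError("invalid email")
            return {"email": email, "status": "registered"}

    # Controller: wiring only, translates the request into a business call.
    class SignupController:
        def __init__(self, service):
            self.service = service

        def post(self, request):
            return self.service.register(request["email"])

    # The unit test never touches HTTP:
    assert SignupService().register("a@b.com")["status"] == "registered"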

We simplified our working environment and made it suitable for TDD, in ways we
found gave us more capability. When we're not using TDD, we employ other logic
and structures that fit well with the method we use instead.

Long story short, TDD gives you what it's intended to, as long as you
cooperate. If you expect more than it can provide, you'll be disappointed.
Regardless of your development method, you have to make sure your
development/architecture models and your tool set comply with your method of
choice. The rest lies in the question "What do you need your development model
to solve for you?"

All the best

------
ludflu
"TDD is nearly useless when your code is the opposite of what I've described
above: specific and mostly trivial, with complexity coming from the sheer
amount of methods and their interactions. "

In other words, badly factored code is hard to test and maintain. It's just
that with TDD it's more obvious why badly factored code is bad.

~~~
romaniv
More like wiring or integration code. Not everything is a framework. At some
point you have to write code that works with and inside of the framework.

------
jannes
Using TDD everywhere is stupid. I think tests are most useful between layers
of abstraction, especially if you are the provider of the abstraction. If you
are merely a user of the abstraction, it is stupid to unit test that code.

If you have an interface to the outside, then that should be unit tested.

------
gregors
To the devs who take the time to write tests: when I inherit your code base, I
feel like buying you a beer!

For the devs who don't - no matter how 'clean' the code... FFFFUUUUUU

------
pyre

      > Okay, maybe I'm going abit overboard with fake qotes.
    

In other words, a straw man?

~~~
mdonahoe
Why am I not surprised that there are a bunch of typos in this article?

~~~
cpeterso
No tests?

------
Volpe
This blog and some of the comments below are just crap. If you haven't TDDed
before (on a non-trivial project) you simply don't know what you are talking
about, and it's apparent in your ignorant opinions.

Sorry if that's a bit harsh but I'm tired of seeing these posts by people who
don't get it.

I don't get quantum physics, but that doesn't mean I'm going to write a blog
called "when quantum physics fails".

If you don't understand something... Learn about it, practice it, then
criticize its actual flaws.

Don't just get frustrated at something and claim it's crap, because it's clear
to those of us who do understand it that you are talking crap.

