
The Pragmatics of TDD - MattRogish
http://blog.8thlight.com/uncle-bob/2013/03/06/ThePragmaticsOfTDD.html
======
NateDad
I think the only reason TDD works as well as it does is because it forces you
to _actually_ write the tests. If you write the tests after the code, it would
be perfectly fine.... except no one ever (hyperbole) writes tests after the
code, because management/sales/support sees that the code works in the general
case, and now insists you work on the next feature and/or you get excited
about another feature and don't want to write boring tests.

If you write the test first, that can't happen, because you haven't written
the actual code yet.

~~~
scott_w
I think it's actually worse than that: people who write tests after code just
write their tests to "verify" that the code is doing what the code is doing.

So, the code provides a list of numbers 1, 2, 5. Non-TDD would code a test
that verifies the list is indeed 1, 2, 5.

However, the business wanted the list in reverse order: 5, 2, 1. TDD would
force the developer to state this up-front. A non-TDD developer is more
reluctant to change the code, since "it passes the test!"

I've done this, although I like to think I'm quite happy to rewrite something
if it's not good enough (or just wrong).
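The scenario above can be sketched in a few lines of Python (function and test names are made up for illustration): writing the assertion first forces the business requirement into the test, instead of copying the code's current output into it.

```python
# Hypothetical sketch: the test states the business requirement
# (descending order) before any implementation exists to copy
# an answer from.
def test_levels_are_in_reverse_order():
    # The business wants 5, 2, 1 -- stated up-front, not
    # reverse-engineered from whatever the code happens to return.
    assert get_levels() == [5, 2, 1]

def get_levels():
    # Written after the test pinned down the required order.
    return sorted([1, 2, 5], reverse=True)

test_levels_are_in_reverse_order()
```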

~~~
markhelo
I am not sure I agree with it being worse. Would you rather have code with no
tests or code with tests even if they do verify what it is doing? Most
regression tests are brittle because they never change and get hardened. In my
previous life, I was a strong proponent of model-based testing, which is similar
to TDD in that it forces you to think through your entire component and then
generates tests, or can walk through your reference implementation to verify
your actual code. However, in smaller companies, I personally feel that it is
important to first test the right "it" before building it right.
(As Google Engineering Director Alberto Savoia used to put it).

No one is advocating releasing broken stuff, but what's the point of releasing
something perfectly tested that no one wants? I think the answer lies
somewhere in the continuum. And in my own startup (wello) we do go back and
write tests, because testing everything manually does hurt once the feature is
working well and users actually want what you are building. So we do carve out
time, write tests, and often then build it right. You could argue that
this is inefficient, but I could also argue that for many features we removed
because users did not want them, we saved the time needed in upfront TDD.

~~~
stcredzero
_> Most regression tests are brittle because they never change and get
hardened._

Regression tests should get refactored at the same time by the same tool that
refactors the rest. Code changes should be handled the same way.

~~~
markhelo
Agreed. I was only saying that most tests are written to codify what is
expected, not that they should not be changed with changing code.

------
AlwaysBCoding
What has always bothered me, is that the ratio of people consistently
talking/blogging/tweeting about how important TDD is to the number of actual
resources that can help a junior developer learn and practice good testing
habits is a million to one.

I truly believe that TDD works. I truly believe that the jury is already in
and that anyone who is serious about becoming a software professional should
write tests for every line of code. I really really believe it and want to use
it.

That being said, it is so fucking hard to get started with TDD. Oh god, it's
so difficult. I've done the katas, done an apprenticeship, read the RSpec
book, watched every screencast I could find, everything you can reasonably ask
a learning developer to do, and I still find it incredibly impractical to
practice TDD when working on most projects, not because I don't want to, but
because it's so difficult and time consuming and the resources just aren't
there to help make my process quicker.

Here's an idea for the TDD crowd. Every time you're about to write a blog post
about why people should use TDD, instead write a blog post about a situation
where you applied TDD, the tests you wrote, and the code it led to. We need
more examples of TDD in progress, more code snippets, more screencasts. I'm
telling you the problem is that the resources just aren't there to encourage
these habits. Instead of continuing to have this debate at a semantic level,
if there were more testing resources available I think people would naturally
flock to it and TDD would win out. Until then, I think it's twitter fights and
bad habits for the foreseeable future.

*This comment applies verbatim to security best practices as well.

~~~
jdlshore
Shameless plug, since you asked for it: Let's Code Test-Driven JavaScript is
an _extensive and in-depth_ screencast series about doing TDD in practice. I
promise you've never seen a TDD screencast that goes this deep.

<http://www.letscodejavascript.com/>

And if Java and Swing are more your thing, Let's Play TDD is its less-polished
progenitor. <http://www.jamesshore.com/Blog/Lets-Play/>

You're right, by the way. It's much harder to do TDD for real than it is to do
all those toy problems that involve maybe one class, some calculations, and
nothing else. That's why I created the screencasts.

~~~
AlwaysBCoding
These look really really cool. I can't speak to the quality because I haven't
watched them yet, but I'll definitely check them out, this is exactly the kind
of thing that I'm looking for.

~~~
jdlshore
Thanks! Glad to hear it. Email me any time if you have questions. (My address
is in my HN profile.) There's a 7-day free trial so you can try out the show
and I'm more than happy to cancel it for you if it's not a good fit.

------
DanielBMarkham
I interviewed Uncle Bob on Monday (shameless plug: [http://tiny-giant-
books.com/blog/robert-uncle-bob-martin-int...](http://tiny-giant-
books.com/blog/robert-uncle-bob-martin-interview-audio/?id-7)). It was mostly
biographical stuff, but I did cover functional programming. There was a bunch
of technical stuff I left off because of time constraints.

The topic I really wanted to cover but didn't was TDD in startups. I have a
simple belief: the value of your code debt can never exceed the value of your
code. That is, if your code has no monetary value, it is impossible for you to
have any code debt, no matter who you are, what your code does, or what the
code looks like. Think about it. It makes sense.

It's interesting that Bob took a "saw the baby in half" approach here,
outlining the various things he'd throw away and the various things he'd keep.
While I think there are definitely shades of gray, it would also be useful for
him to directly address the question of code that has no value. If I write a
function that I save on my hard drive and never use, does it need a test? I
believe the ludicrously obvious answer is "no", but I haven't heard him say
that yet.

~~~
surbas
He did mention that he doesn't write tests for throwaway code, in his second-to-
last point: "A few months ago I wrote an entire 100 line program without any
tests. (GASP!)"

------
swanson
Write a blog post about a concept or idea in the general sense => "I need
specific examples or this is just a religion/consultant-speak".

Write a blog post about specific examples => "It might work for this example,
but in my experience it didn't work in this other case."

If you write in the abstract, people will dismiss it as fluff. If you write
concretely, people will dismiss it because it doesn't cover their exact case.

There is no way to please everyone in the world of opinionated software
blogging, so stop trying.

~~~
jiggy2011
If you _really_ want to convince people you need to talk in concrete, not
abstract. But you need a large number of examples and it should come from a
neutral third party.

~~~
MartinCron
Trying to convince people is a losing battle in a misguided war. The fact that
people out there will continue to disagree with you no matter what you say is
part of life. Just make the best contribution that you can, and if someone
gets value out of it, great.

The fact that some software development organizations out there aren't using
any test automation* even though they would probably benefit from it might
make me a little sad, but it's not a problem that I'm trying to solve.

*or source control, or deployment automation, or incrementalism, whatever.

~~~
jiggy2011
You might not convince everybody all the time, but I don't think that trying
to convince people is necessarily a waste of time if you have a good argument.

There are plenty of things that most people take for granted now that people
previously would not have believed.

With enough supporting data it would be hard to argue against TDD, for example
an independent study of a sizeable number of software shops which showed that
developers using TDD shipped code 50% faster or with 60% fewer defects or
whatever. That would put the onus on the anti-TDD person.

I think Zed Shaw wrote something about this, but can't find the link.

The closest I can find is this, [http://research.microsoft.com/en-
us/groups/ese/nagappan_tdd....](http://research.microsoft.com/en-
us/groups/ese/nagappan_tdd.pdf).

~~~
MartinCron
I guess so. It's just so easy to be defeatist when any useful tool or
technique can be summarily dismissed by jaded technologists as "snake oil".
I've had to come to terms with the fact that from my perspective, lots of
developers are just "doing it wrong".

~~~
jiggy2011
The definition of snake oil is basically medicine that has a claimed benefit
but little/no supporting evidence.

Once you have strong backing data, it's not snake oil anymore and it becomes
increasingly difficult to make that claim.

~~~
MartinCron
Backing data that's strong enough to satisfy some critics is impossible to
obtain.

Anecdotes? Obviously not OK.

Small-scale controlled experiments can be dismissed as nothing more than
pointless toy exercises that have no bearing on real-world production code.

Large-scale controlled experiments are impractical. Blinding and placebos
aren't possible. Isolating confounding elements such as individual performance
would be difficult.

Burden of proof is burdensome, I guess.

~~~
jiggy2011
Basically yes. I'm sure it would be possible to do a good study but very
expensive. Although if the potential upside is increasing the productivity of
an entire industry maybe it's worth doing.

------
danso
I've only recently started to use TDD, the biggest roadblock being the
annoying steps it takes to set up the suite, directory structure (ok, that's
pretty minor), and then properly use mocks and stubs. I sometimes forego the
latter part.

I'm new to it but I find it an incredibly useful strategy. It forces
orthogonality on me because I have to think how a function can be tested
independently of the external objects that may use it...which causes me to
challenge my initial assumptions and instincts about the overall application
design.

In some sense, I guess, it is always frustrating to spend more time in the
design stage than working with an actual prototype...but I find the medium-to-
long term benefits to far outweigh the initial investment in time. And once
the tests have been written, the actual functional code is almost trivial to
write.

Even without the benefits TDD has in easing the maintenance/upgrade phase of a
product, I find its effects on the design/prototype stage to be worth the
effort alone.

~~~
glenjamin
Ideally mocks and stubs should be used only when the thing you're
mocking/stubbing is either slow or complex.

In general, just call the real thing and don't worry about it. Otherwise
you'll expend loads of time and energy setting up fakes to test relatively
simple code.

~~~
jarrett
Agreed, and furthermore: Mocks don't verify that you're calling external code
_correctly_. Suppose you think an API method does X, but it actually does Y. You
won't detect that with mocks. Maybe you have integration tests (i.e. tests of
the entire system working in concert) to account for that, but I say, why not
catch these things at every chance you get?

~~~
LargeWu
> why not catch these things at every chance you get?

Because it's expensive. It takes time to write and maintain those tests. It
takes time to run those tests. If you want to verify that an API method works as
assumed (and I assume you are talking about an API on an external system, but
ultimately it doesn't really matter) then write a test specifically testing
that API method.

Testing that method B also works while testing method A is, in my experience,
the number one culprit of people writing bad tests. These tests are harder to
set up, take longer to run, and are more prone to promoting brittle test code.

Test everything that's important to test, and nothing more. This is true
whether you are talking about your entire system, or a single method.

~~~
glenjamin
I agree, the thing I was trying to warn about is meticulously mocking out
everything you possibly can to test every single method/class in complete
isolation from the rest of the system.

It's very tempting to start doing this when you get into heavy unit testing,
but I find in practice this adds more cost than value.

As an example, let's take some Python code I just made up which formats some
input into a message, throws the message onto the queue, and returns the
message ID.

    
    
        class DelayedJob(object):
            def __init__(self, queue):
                self.queue = queue

            def add_job(self, task, params, priority=1):
                # Format the input into a message
                m = JSONMessage({
                    "task": task,
                    "params": params,
                })
                # Enqueue it and return the message ID
                self.queue.add(m, priority)
                return m.getID()
    

It's pretty clear that the queue service needs to be mocked out, as you don't
want to be talking to a real queue for a unit test (you'll probably want some
sort of acceptance/integration test to cover this somewhere).

However, I have in the past been tempted to make the JSONMessage class
injectable, and then inject a mock with a stubbed implementation of getID -
I've often seen other people do this as well. The code required to set up such
a fake would be fairly verbose and add little in the way of extra clarity to
the test. Since the class is fast and simple (in this scenario it's just a
container) then I'd just leave the concrete class in use.
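A sketch of this testing approach, mocking only the queue while letting the message container participate for real (the `JSONMessage` below is a minimal stand-in for the made-up class in the example, and the test uses the standard library's `unittest.mock`):

```python
from unittest.mock import Mock

class JSONMessage(object):
    """Minimal stand-in for the container class in the example above."""
    def __init__(self, payload):
        self.payload = payload
    def getID(self):
        return id(self)

class DelayedJob(object):
    def __init__(self, queue):
        self.queue = queue
    def add_job(self, task, params, priority=1):
        m = JSONMessage({"task": task, "params": params})
        self.queue.add(m, priority)
        return m.getID()

def test_add_job_enqueues_message():
    queue = Mock()              # only the slow external queue is faked
    job = DelayedJob(queue)
    msg_id = job.add_job("resize", {"width": 100})
    # The real JSONMessage was used, so we can inspect it directly
    # instead of stubbing getID and asserting against the stub.
    message, priority = queue.add.call_args.args
    assert message.payload == {"task": "resize", "params": {"width": 100}}
    assert priority == 1
    assert msg_id == message.getID()

test_add_job_enqueues_message()
```

Because the container is fast and simple, keeping it concrete makes the test shorter and the assertions more meaningful.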

------
jfabre
I've seen big company devs so overwhelmed by messy production code that any
small feature change would require 2 weeks of work.

I've seen startup devs who couldn't cope with changing requirements fast
enough. There were just too many moving parts at the same time.

Hell, I've been one of those devs! Great code is the exception, not the norm.

When I learned TDD, it greatly improved the quality of my code. If TDD is a
bottleneck for you, maybe you need to learn to touch type.

My 2 cents.

~~~
Glide
The process of learning to TDD well makes a person code better, regardless of
actually doing it.

TDD is one of the few actual disciplines I know of around coding that can't be
just glossed over or faked. One either does or does not. This is doubly true
in a pair programming environment. Not using one of the established patterns?
One can argue around that. Not doing TDD? Justification usually has to tie in
with the UI in some way.

~~~
philwelch
How do you know whether I wrote the tests first by looking at my code?

~~~
jdlshore
The difference is pretty clear in practice. It's much harder to add tests to
existing code than it is to TDD it, and it takes a huge amount of self-
discipline to go back and add tests to "finished" code. So, without TDD, you
end up with a relatively small number of tests that test a lot of things,
whereas TDD (done well) creates a large number of small tests, each testing
one very focused thing.

~~~
philwelch
As long as I write the tests roughly the same time as the code, it doesn't
really matter which I write first. And I always adopt the practice of writing
a large number of small tests regardless. Going back and forth is the
important part for me, not necessarily writing the tests first.

In any kind of setting with shared code, it shouldn't be a matter of self-
discipline, it should be a matter of standards and best practice. You
shouldn't be integrating any code without test coverage. This is a standard
part of how many projects handle pull requests.

If you're only accountable to yourself and writing the tests first is the
technique that works for you, I can't criticize but you really don't have much
to say to programmers in general, either.

------
plinkplonk
I'm surprised at how much attention this bit of process dogma is getting on
HN.

This particular argument about the supposed efficacy of TDD depends on people
accepting the equivalence of the efficacy of TDD with the efficacy of
surgeons washing hands, just because the blog author says so.

Oh horror, how can you challenge my dogma just because I call it a
'discipline' and _verbally_ equate it to surgeons washing hands.

Saying something is true doesn't prove it is true whether you call something a
'discipline' or not. Cults are full of such 'disciplines'. Cult members will
vouch for them. A better word is 'ritual'. At best such rituals are cargo cult
practices. [1]

Surgeons washing hands is a practice with empirical, unchallengeable
_scientific_ evidence supporting it, while TDD is a dogmatic practice
evangelized by software process zealots, with next to no scientific evidence
backing it up.

Also TDD != testing and TDD != automated testing (though the evangelizers tend
to blur the differences). It is easier to argue that you should write tests
than that you should write tests _before_ you write the code, which is a
somewhat more shaky assertion. If you can paint your opponents as opposing
_tests_ (vs TDD) you have set up an easily knocked down strawman.

Programmers have written tests, including automated regression tests for
_decades_ without the blind adherence to the 'write a test _first_ , write the
code, refactor, repeat' cycle that TDD consists of.

Insisting on this as some kind of _moral_ imperative [2] is snake oil, and it
is natural that experienced devs push back against religious preaching.

The best 'poke holes in the zealotry gently but firmly' writing wrt TDD is at
[http://www.dalkescientific.com/writings/diary/archive/2009/1...](http://www.dalkescientific.com/writings/diary/archive/2009/12/29/problems_with_tdd.html)
. Bob Martin's TDD 'kata' is dealt with there in some detail. The comment
thread at [http://dalkescientific.blogspot.in/2009/12/problems-with-
tdd...](http://dalkescientific.blogspot.in/2009/12/problems-with-tdd.html) is
hilarious too, with some familiar names popping up.

[1] <http://en.wikipedia.org/wiki/Cargo_cult>

[2] The author says as much here <http://news.ycombinator.com/item?id=5331108>

" [TDD] allowed us to go fast, and _keep_ going fast because the code stayed
clean. I have come to view it as a moral imperative. No project team should
ever lose control of their code; and any slowdown represents that loss of
control."

~~~
doktrin
This reaction is a bit visceral, and unnecessarily so. The "cult" and "dogma"
accusations sound fairly outlandish, to be honest.

Part of the reason experienced devs push back against TDD has to do with
patterns and habits. It's safe to say that TDD feels awkward to anyone with
other established work patterns.

TDD is of course simply a methodology. It works well for some teams, and
perhaps not for others. What it does do is impose a culture of testing and
rigorousness, which is rarely a negative.

Sure, you can decouple testing from TDD, but IME teams that apply TDD have
more extensive coverage than those who don't. Causation/correlation caveats
apply.

I personally dislike TDD because it disrupts the thought pattern I have become
accustomed to using when developing software. I consider myself relatively
junior, so I can only imagine how much of a workflow departure it must be for
more senior engineers. However, just because it doesn't work optimally for me
(or, optimally at first) doesn't invalidate the approach nor make anyone who
uses it some brainwashed cult member.

~~~
thomasmeeks
I think the issue is that TDD is handy for a lot of people. They (wrongly)
assume it is equally effective for everyone else. Then, they use it as a
metric to judge others. It has become the "moral imperative" Bob Martin spoke
of. Which is truly awful, and elicits a strong response from many who do not
find TDD that useful. Quite a few job postings list TDD as a requirement,
in fact, which is silly. It is a bit like requiring vim or emacs.

A couple months ago, in fact, I tripped on a blog post that said
(paraphrased): If you do not do TDD, then you should re-evaluate your career
as a developer. This is bullshit. It angers me because I keep meeting very
smart junior developers carrying around a load of guilt over TDD.

I'm not sure what it is about TDD that makes people forget that these things
are all tools. You put it in your bag of tricks and pull it out when it makes
sense. You don't beat each other to death over what tools they use.

~~~
jdlshore
"Quite a few job postings list TDD as a requirement, in fact, which is
silly. It is a bit like requiring vim or emacs."

Sorry, no, TDD isn't like vim or emacs. Your choice of vim or emacs doesn't
change the code you produce. TDD _does_. Whether you agree with TDD or not,
it's completely reasonable for a job to require it, just as it's reasonable
for a job to require that you know and use MVC (in their app), or Rails, or
any other technical solution to a problem.

If I'm hiring somebody, they have to know TDD and apply it, or they won't work
for me any more. Period. Just like they have to know what local variables are
and use them. Right or wrong, it's a coding/design standard, and it's
perfectly reasonable for teams to choose and enforce their coding standards.

Now, that person might be able to convince me that they have something that
solves my problem (a need for easily-changed code that does what the
programmer intended) _better_ than TDD. Say, a fancy type system. If so,
awesome! I'll listen. But "I'll use TDD when I feel like it" ain't gonna cut
it.

~~~
doktrin
> _If I'm hiring somebody, they have to know TDD and apply it, or they won't
> work for me any more. Period. Just like they have to know what local
> variables are and use them. Right or wrong, it's a coding/design standard_

TDD is a workflow implementation and not a coding standard per se. This is to
say, code written with or without TDD may be completely interchangeable. This
is not true of a coding standard.

Coding standards measurably and objectively affect the end product (i.e. "use
local variables", "space/tab indentation", "no method greater than 500 lines",
"follow an MVC pattern"). Can the same be said for TDD or [insert workflow
implementation here]?

This may be a nitpick, but I feel it's important since it highlights why there
is such disagreement on this topic.

~~~
jdlshore
You and thomasmeeks both said this (that code written with or without TDD may
be interchangeable) and I respectfully disagree. Sure, in theory, you could
write the same code, but in practice it just doesn't happen. There's a
significant difference in the kinds of code that are produced.

Think of it this way. Imagine two different teams. One team has a workflow
that involves carefully considering design possibilities and creating design
models on paper, and only writing code after those design models have been
iterated and refined for a good month.

Another team dismisses that approach as waste, and instead prides themselves
on their ability to ship. They start coding on the very first day. Although
they care about design, their emphasis is on shipping code, and they would
never waste time on modeling.

These are workflow differences. Will the teams produce different code? Of
course! They approach the work differently! That's what workflow differences
_mean_.

TDD is a different workflow that produces different code. That difference
matters.

(See also this reply elsewhere in this thread:
<http://news.ycombinator.com/item?id=5334366> )

 _Edit:_ I don't mean to say that TDD will lead to good code. Far from it. TDD
done badly leads to some gawdawful messes. But TDD _done well_ does lead to
code that is qualitatively different than other workflows _done well_ and that
difference--the result of using TDD--is useful to me.

~~~
thomasmeeks
Well, yes, if you take it to the extremes of ship-first versus what amounts to
waterfall, there will be profound differences in the code. I suppose if you make
it your mission to analyze code to determine whether or not your programmers
are using TDD, you could figure it out. But then we're getting into a pretty
artificial situation.

Who cares, as long as the code is clean and the project is successful? TDD may be
a useful way to get to clean code for /some/, perhaps /most/. But this is not
true of /all/. I'm not arguing against its usefulness for someone like you.
That's silly, you obviously value it. I'm arguing against the "moral
imperative," which is utter nonsense.

TDD is neither necessary nor sufficient for success, and we should stop
pretending that it is. I create a class, it has an interface with a purpose,
and I test that interface. I don't write tests first, and I don't have trouble
writing tests after I write the code. I organize my thoughts on pen and paper,
and use very light documentation to test the size and purpose of my methods.
The projects I work on are successful, everyone's happy. This is true of
several other programmers I work with on a daily basis, but not all. Several
also practice TDD, and it works for them.

~~~
jdlshore
You're moving the goalposts. I didn't say anything (and I'm pretty sure I
_never_ have said) about TDD being a moral imperative. Nor did I say it was
necessary for success, nor sufficient.

I said "It's reasonable for a job to require TDD... because the TDD workflow
results in a qualitatively different code base." I chose an artificial example
of how workflows change code to make the point obvious, but the real-world
effect of TDD is night-and-day obvious in my experience.

I value that difference to the point where TDD skill is a core part of my
hiring decisions. I'm okay with you making different choices.

~~~
thomasmeeks
Well, sure, I'm not sure I really care about what particular employers require
of their employees. I think it is silly, but I doubt you care much about what
I think, either. It was a minor point to me (not to you), which is why we've
diverged I think. My moral imperative comment is toward the overall topic, not
you.

------
tieTYT
"I don't write tests for one line functions or functions that are obviously
trivial. Again, they'll be tested indirectly."

The funny thing about this is when I read this bullet point I thought, "Bob
Martin would disagree with this". His rules for TDD (
<http://butunclebob.com/ArticleS.UncleBob.TheThreeRulesOfTdd> ) would force
that method to be tested whether he wants to or not. Then I scroll down to see
who wrote the article... Bob Martin

EDIT: But maybe he meant to emphasize "indirectly". Maybe it was under test at
first but then got extracted into a simple method under refactoring.

~~~
ollysb
I only test public methods. These are often composed of private methods which
I might pull out of a method once I get to green and I'm working on
readability. A particularly common source of one line methods is to extract
query methods (replace a calculated variable with an inline call to the new
method). A nice side effect of only testing public methods is that if you find
you don't have test coverage on the private methods you know you can delete
them(because the public methods that used to use them have all been deleted).

~~~
ngoede
In my experience if you find yourself wanting to test a private method it
probably means it is time to bud off a new class.

------
eaurouge
For me the greatest benefit of TDD is that I can first specify how a unit of
code is expected (by me the implementer) to behave. Then I write the code that
fulfills (just) those specs, without writing any unnecessary lines of code. If
you've read the specs then you know how the code behaves. Yes, you can write
the code first then the tests, and I do and have done that. But there's
something to be said for letting the specifications alone determine what makes
it into code. Of course, you need to know the specifications for this to work,
which isn't always the case.
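That spec-first flow can be sketched as follows (a made-up `slugify` unit, purely illustrative): the specs are written first and state exactly what the unit must do, and the implementation contains only the lines needed to satisfy them.

```python
# Specs first: they alone determine what makes it into the code.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_surrounding_whitespace():
    assert slugify("  Hello World  ") == "hello-world"

def slugify(title):
    # Just enough code to fulfil the two specs above -- no Unicode
    # handling, no punctuation stripping, because no spec demands it.
    return "-".join(title.lower().split())

test_slugify_lowercases_and_hyphenates()
test_slugify_strips_surrounding_whitespace()
```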

------
cshipley
It looks to me like another blog posting by a TDD evangelist, most of which I
ignore because he ain't preaching my religion. He did, however, touch on the
important part of the pragmatic vs dogmatic question. This jumped out at
me:

> In general I don't write tests for any code that I have to "fiddle" into
> place by trial and error.

It is congruent with some general rules I follow to decide if I should write
a test for something:

1) Do I care if this code doesn't work on the non happy-path? Maybe I'm
writing some isolated prototype code that will probably be thrown away, or
perhaps I'm planning to rewrite it later. I'm not going to bother writing
tests.

2) How important is it that this code is bug-free, and how costly would a bug
be? If, say, the code is in a highly used part of the program that is
required by the rest of the app to function properly, or it is part of the
main feature set, I definitely will write tests.

3) Am I under time constraints? If I don't have the time to write tests for a
particular part of the code, then I don't write them.

4) How solid is the architecture/interfaces? If I expect them to change quite
a bit, then I will not write so many tests, or perhaps any. I once worked on a
project that was heavy into unit testing/TDD. There were hundreds of tests
written very early on in the project, and since the code was changing so much,
we spent a lot of time re-writing tests. It eventually got to be a huge time
sink.

5) How much money does the project have? Writing lots of tests takes time, and
time is money. I've worked on some projects that have budget (or time)
constraints, so I don't have the luxury of such dogma.

All that said, I often dislike programming religions or dogma, because they
often advocate following a practice somewhat blindly, without understanding
when the rules/precepts should be applied.

------
jbrains
TDD is a fundamental learning technique. It teaches the principles of modular
design. Notice! One can learn modular design in a variety of ways. I make no
claim that TDD is "the only" nor "the best" of these, but I claim that it
works for enough people to merit attention.

Learning requires investment. Investment carries risk. Risk aversion/tolerance
is a very personal and contextual thing. There's almost no point arguing about
when it's good to be risk averse and when it's good to be risk tolerant,
because of this heavy coupling to the context. Better to be aware of the
phenomenon and work things out case by case.

Some people generally don't like to learn. Nothing you do will force them to
like to learn. You can invite them to try to learn; you can try to make it
comfortable and safe for them. That might work.

Some people find such value in a learning technique that they continue to use
it, even after learning 99% of what they will ever learn from it. Continuing
to use the technique provides them comfort. Whatever works. Others eventually
break free of the learning technique, knowing that they can fall back on it
when they feel pressure.

I care about this: people who want to practise TDD should be free to do it;
people who don't want to practise TDD should not be forced to do it.
Everything else is noise.

------
jbaudanza
_I usually don't write tests for frameworks, databases, web-servers, or other
third-party software that is supposed to work. I mock these things out, and
test my code, not theirs._

If your software has a dependency on a third party component, then you should
include that component in your tests. It's not about testing the component,
it's about testing your integration with that component. For example, if you
upgrade that component, and the API changes in a way that breaks your
integration, you want your test suite to break as well.

Sometimes if a component is too slow or requires network access, you have to
mock it out. But as a general rule, it's best to leave your dependencies in
place.
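
To make the point concrete, here is a minimal sketch (the `save_user` helper and the sqlite3 stand-in are my own illustration, not from the comment): a mock-based test stays green no matter what the real component does, while a test that leaves the real dependency in place would break if an upgrade changed its API.

```python
import sqlite3
import unittest.mock

# Hypothetical helper whose job is to integrate with sqlite3
# (a stand-in here for any third-party component).
def save_user(conn, name):
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

# Mock-based unit test: passes regardless of what sqlite3 actually
# does, so it would NOT catch a breaking change in the real component.
mock_conn = unittest.mock.Mock()
mock_conn.execute.return_value.fetchone.return_value = (1,)
assert save_user(mock_conn, "alice") == 1

# Integration test: exercises the real dependency, so an upgrade that
# changed the API would make this test fail too.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
assert save_user(conn, "alice") == 1
```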

~~~
azurelogic
This gets into the differences between unit testing and integration testing.
Sometimes it is worth just making sure that your repository class can actually
insert, find, and delete data. It's just a different part of the "testing
pyramid" (<http://watirmelon.com/tag/software-testing-pyramid/>)
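
A repository check at the integration level of that pyramid might look like the sketch below; the `UserRepository` class and its sqlite3 backing are hypothetical stand-ins, using an in-memory database so the test stays fast.

```python
import sqlite3

class UserRepository:
    """Thin repository over sqlite3 (in-memory here, for test speed)."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def insert(self, name):
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def find(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

    def delete(self, user_id):
        self.conn.execute("DELETE FROM users WHERE id = ?", (user_id,))

# The integration test: actually insert, find, and delete data.
repo = UserRepository(sqlite3.connect(":memory:"))
uid = repo.insert("alice")
assert repo.find(uid) == "alice"
repo.delete(uid)
assert repo.find(uid) is None
```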

~~~
jbaudanza
A unit's dependency is part of that unit. If a unit test mocks out all the
dependencies, that test is running in a fantasy world.

------
ternaryoperator
My greatest reservation about TDD is one that's almost never referred to by
its practitioners: the need for strong refactoring skills. To do TDD right,
you've got to be really good at refactoring your code. But many developers
know only basic refactoring techniques. So if they do TDD, they end up
generating code that looks like it was written to satisfy lots of small
requirements, and it lacks the cohesion and clarity that it should have.

I think TDD is taught the wrong way around. First, they should teach
refactoring. And only when those skills are thoroughly mastered, should they
move on to teaching TDD.
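
A tiny illustration of the failure mode (the shipping-cost example is mine, not the commenter's): code accreted one branch per small test, then the refactoring step that TDD expects afterwards, with the tests unchanged and still green.

```python
# Before: each branch was added to pass one small test, and it shows.
def shipping_cost_unrefactored(region):
    if region == "US":
        return 5
    if region == "EU":
        return 8
    if region == "APAC":
        return 12
    raise ValueError(region)

# After: same behavior, refactored into one cohesive lookup once the
# tests are green. The refactoring skill is what makes this step safe.
_RATES = {"US": 5, "EU": 8, "APAC": 12}

def shipping_cost(region):
    try:
        return _RATES[region]
    except KeyError:
        raise ValueError(region)

# The original small tests pass against both versions.
for fn in (shipping_cost_unrefactored, shipping_cost):
    assert fn("US") == 5
    assert fn("EU") == 8
    assert fn("APAC") == 12
```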

------
njharman
> I don't write tests for getters and setters.
> I don't write tests for member variables.
> I don't write tests for one line functions or functions that are obviously trivial.

I don't disagree, but I often have at least one test that explicitly exercises
all of the public interface (of a class/module/whatever). The point is that
when I change that interface I want tests to break, rather than having to rely
on my memory of what changed when writing release notes / incrementing the
version number. I mostly test Python, YMMV.
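
One way to sketch such a test in Python (the `Queue` class and its method names are invented for illustration): snapshot the public names of the class, so any rename, removal, or addition breaks the assertion and flags the change.

```python
class Queue:
    """Hypothetical class whose public interface we want to pin down."""
    def push(self, item): ...
    def pop(self): ...
    def peek(self): ...

# Snapshot of the public interface; changing the interface makes this
# assertion fail, so the change can't slip past unnoticed.
EXPECTED_API = {"push", "pop", "peek"}
actual_api = {name for name in dir(Queue) if not name.startswith("_")}
assert actual_api == EXPECTED_API
```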

~~~
jdlshore
Those tests are less necessary in languages with static typing, which is Bob
Martin's background.

------
unclebobmartin
What I find fascinating in all this is the sheer amplitude of the invectives.
Apparently TDD pushes some people's buttons. I think that's a good thing.

------
codeulike
The measure is: find a startup that uses TDD religiously, find one that just
uses it when it suits them, and find one that doesn't use TDD at all. Fix all
other variables. See which startup does the best. This is a hard experiment to
do, but if anyone wants to offer me a grant for the research, get in touch.
Thanks.

~~~
TillE
> Fix all other variables.

That's literally impossible to do in a straight one to one comparison. If
nothing else, you have different people working at each one.

A broader study might get enough data to make meaningful comparisons, but I
think strict TDD is too rare to get more than a few samples. It's a tough
problem. I think you'd have to set up a completely artificial environment if
you truly want to measure the relative efficiency of TDD.

~~~
codeulike
Apologies, "Fix all other variables" was supposed to be a joke. Not obvious
enough I guess.

------
jdlshore
The cost/value tradeoff of TDD keeps coming up. My comments on this last time
were well-received. The question was "Should I TDD an MVP?" but the answer is
really appropriate to any question of when and whether TDD is worth it:

This is a really good and interesting question, and it's one I've been
struggling with myself.

The problem boils down to this: TDD makes your software more maintainable (if
you do it well) and it lowers your cost of development. However, it also takes
significant time and effort to figure out how to test-drive a technology for
the first time. Everybody can TDD a Stack class; TDD'ing a database, or a web
server, or JavaScript [0] is a lot harder.
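
For reference, the "everybody can TDD a Stack class" case might look like the sketch below (my own minimal version, with the assertions listed in the order the tests would have been written, each one driving a bit of the implementation).

```python
class Stack:
    """Implementation driven by the tests below, one assertion at a time."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

# The tests, in roughly the order a TDD session would produce them.
s = Stack()
assert s.is_empty()            # 1: a new stack is empty
s.push(42)
assert not s.is_empty()        # 2: pushing makes it non-empty
assert s.pop() == 42           # 3: pop returns the pushed item
try:                           # 4: popping an empty stack raises
    Stack().pop()
    assert False, "expected IndexError"
except IndexError:
    pass
```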

So the answer seems simple: use TDD for the parts you already know how to TDD.

But it's not so simple! It's much harder to add tests to existing code than it
is to TDD it from scratch. Sometimes, it's flat-out impossible. The expense is
so high, there's a very good chance that you'll never get around to adding
tests to the un-TDD'd code. It will hang around causing bugs, preventing
refactoring, and sapping your agility forever, or until you rewrite... and a
rewrite of any significance will halt your company in its tracks, so you won't
do that.

So the reality is that anything you don't TDD from the beginning, you'll
probably never be able to TDD. Companies that go down this road find
themselves doing a major rewrite several years down the road, and that's
crippling [1].

There's another wrinkle on top of this: manually testing code and fixing bugs
is _expensive_. Once your codebase gets above a certain size--about six
developer-weeks of effort, let's say--the cost to manually test everything
exceeds the cost to TDD it. (The six weeks number is a guess. Some people
argue it's less than that.)

So the real answer is a bit more nuanced:

1. If your MVP is truly a throw-away product that will take less than six
weeks to build and deploy and you'll never build on it after that, use TDD
only where it makes you immediately faster.

2. If your MVP is the basis of a long-lived product, use TDD for the parts
you know how to TDD and _don't do_ the parts you don't know how to TDD. Be
creative about cutting scope. If you must do something you don't know how to
TDD, figure it out and TDD it.

3. It's okay to be a bit sloppy about TDD'ing the edges of your app that are
easily rewritten or isolated in modules. But be very careful about the core of
your system.

That's my opinion based on 13 years of doing this stuff, including building
five successively-less-minimal MVPs over the last nine months for my JS
screencast. The first three MVPs were zero coding, the fourth was a throw-away
site, and the fifth was TDD'd with aggressive scope cutting to minimize the
number of technologies that had to be TDD'd.

[0] Shameless plug: I have a screencast on TDD'ing JavaScript.
<http://www.letscodejavascript.com>

[1] Rewrites are crippling: See Joel Spolsky's "Things You Should Never Do,
Part I." <http://www.joelonsoftware.com/articles/fog0000000069.html> (There is
no Part II, by the way.)

