
Is TDD Worth It? - rbanffy
http://blog.codosaur.us/2013/07/is-tdd-worth-it.html
======
anon1385
TDD won't help you solve problems that you don't already know how to solve:
[http://ravimohan.blogspot.co.uk/2007/04/learning-from-sudoku-solvers.html](http://ravimohan.blogspot.co.uk/2007/04/learning-from-sudoku-solvers.html)

In fact it only gets in your way. Solving problems that are new (to you)
involves writing code to find out if a certain approach works, then throwing
it away and trying something else. TDD makes that process a lot slower, and
makes people more reluctant to throw away incorrect models of the problem that
they have codified into tests without even realising it.

Of course somebody is going to chip in here and say that if the above is
happening then you are writing your tests at the wrong level of abstraction
and that you should be testing at the point where you can create a stable API
that remains unchanged while you throw away the speculative implementations
underneath. But if you are tackling a new problem you have no idea where those
boundaries lie. It's hard enough to determine that for problems that you
already understand and have working solutions for, never mind ones you don't.
TDD forces you to decide about one of the hardest parts of development (API
design / where to place abstraction boundaries) up front before you have
worked out how to solve the problem at all. Either that or you just write vast
vast numbers of trivial tests for every single function that you write (which
does seem to be what some TDD tutorials advocate), but then you are creating a
huge maintenance burden for the future where any change to the code is going
to necessitate rewriting dozens of tests.

I would argue that TDD only works if you are solving problems that you are
very familiar with. By the time you are writing your 5th CRUD web app you
probably have a decent mental model of the parts you need before you start. I
feel that a lot of the love for TDD comes from people who spend most of their
time working on problems they already solved dozens of times before, but they
learned how to solve them before they discovered TDD.

For people first learning how to code it is absolute death. It discourages
people from doing the most important thing they need to learn: throwing code
away.

~~~
benihana
> _In fact it only gets in your way. Solving problems that are new (to you)
> involves writing code to find out if a certain approach works, then throwing
> it away and trying something else. TDD makes that process a lot slower, and
> makes people more reluctant to throw away incorrect models of the problem
> that they have codified into tests without even realising it._

This is a giant strawman. Who is saying to write tests for your prototypes?
You seem to be confusing a bunch of different methodologies here. You're
talking about rapid prototyping to understand a domain before building a long
term solution. That isn't TDD, and it would be foolish to write tests for an
exploratory exercise that you're planning to throw away.

> _TDD forces you to decide about one of the hardest parts of development (API
> design / where to place abstraction boundaries) up front before you have
> worked out how to solve the problem at all_

How do you figure? If you've done a rapid prototype like you're advocating,
you should have a better understanding of where boundaries lie and how the API
should work. Further, you seem to be working under the impression that having
tests makes it hard to change things. If anything, I've noticed the opposite - tests
make it much easier to change code, especially when you test behavior, not
implementation.

~~~
halostatue
I interviewed with a company one time that ran with TDD and pairing like a
religion.

The guy I was interviewing with didn't like the fact that I _thought_ about
how the problem might be shaped (and therefore made assumptions as to what I
_might_ want to implement and refine via tests) prior to “writing” that first
failing test. It was absolute nonsense and I was rather shocked that the
company managed to ship anything, to be honest.

I'm a big fan of tests; I'm a big fan of pairing. They are tools; nothing
more.

~~~
rrouse
An interview I had recently felt like that. I was just throwing together little bits
of code to make sure I understood the problem clearly, but the interviewer
seemed to be a bit agitated that I didn't write a test right away.

I honestly don't work like that. So I'm kinda glad I failed that interview.

~~~
halostatue
Yeah. I've recently done a couple of interviews on the other side of the fence
(as the interviewer), and I've hit on a problem that can be worked out
collaboratively between the interviewee and the interviewer—and how far we go
depends on their skill.

It _always_ begins with a discussion of the problem and what we want to solve, plus
a bit of how—so we can figure out what it is that we're going to be testing
for and how to build from that point.

------
spellboots
It seems like there is a sea of absolute assertions with few facts backing
them up - people seem to love saying either that TDD is better or that TDD is
worse, and the only data ever really offered on the subject is
anecdotal opinion.

We can do better than this. This study[1] at Microsoft reveals that TDD
results in a defect reduction rate of between 40% and 90%, at a cost of
15%-35% increased development time. So, what's the best approach? It now
becomes a business question. If speed of delivery is crucial and quality less
so, don't use TDD. If quality is more important than speed, use TDD.

And finally, everyone involved in building software owes it to themselves to
take an hour to go and watch this excellent video by Greg Wilson entitled
"What We Actually Know About Software Development, and Why We Believe It's
True" [2], and then perhaps dive in to the list of references from it [3],
because you will find yourself far better informed than anecdote and
opinion will ever get you.

[1] [http://research.microsoft.com/en-us/groups/ese/nagappan_tdd.pdf](http://research.microsoft.com/en-us/groups/ese/nagappan_tdd.pdf)

[2] [http://vimeo.com/9270320](http://vimeo.com/9270320)

[3] [http://www.gdb.me/computing/citations-greg-wilson-cusec.html](http://www.gdb.me/computing/citations-greg-wilson-cusec.html)

~~~
RogerL
As the other poster pointed out, this paper is measuring nothing, or at least
it is not written in a way that lets us know.

I find it unremarkable that a sudden focus on quality and testing reduced the
bug rate. It does not follow that TDD is the cause, nor that TDD is the best
of multiple options.

It's a well-known effect in experimental design - measurement alters what you
measure. Get depressed people to watch and talk about a dog video. I can
predict that they will be less depressed. Not so much because dog videos are
great at reducing depression, but because all of a sudden so much experimental
interest is being directed at them. It would be almost _churlish_ to not feel
better, if you know what I mean. So, we don't design experiments that way - if
your hypothesis is that dog videos cure depression, you need to do exactly the
same protocol, but with cat videos, or I don't know, truck videos, or
whatever, so both groups get the same attention, both are trying to reduce
their depression and so on (I'm assuming unblinded here because obviously the
TDD paper was unblinded, obviously we have better methods for the depression
study than my suggestion here).

Kudos for bringing empiricism into the discussion, but I do find the paper
lacking.

I wrote above that I don't use TDD yet achieve low defect rates and good
design. I think it is because, no matter how you do it, if you focus on those
things you will more likely achieve them than if you don't. I hypothesize
(don't be mad!) that TDD works for some simply because it brings the issue of
quality to the forefront of their mind. Whereas, I'm already always thinking
"is this line of code going to kill somebody"; I've tried TDD, and it always
seemed like it added little or actively inhibited me. Your mileage _will_
vary.

------
danso
TDD is often praised for its documentation effect, vital for teams working on
big projects with many moving pieces. But I've found TDD to be essential in
projects in which I'm the main or sole coder. Often I can't finish every piece
of a project in a single day, so I'm switching context and code bases quite
frequently. TDD has been the only way I can keep on task, knowing exactly
where I left off by running the suite. It not only forces me to write
more modular code, but it abets the reuse of code because if a module works in
one project, I can extract it and put it in the others and can use TDD to
immediately see if that raises issues.

Even for side projects, TDD has been essential...it's so easy to let side
projects die especially if you've forgotten where you left off. With some TDD,
I can discipline myself to at least write a few tests today if I can't muster
the energy to do real code for the project...and often that little push ends
up with lots of code being written...just as you can go on a five-mile run as
soon as you motivate yourself to get off the couch.

------
jacquesm
TDD is like any other tool. Abuse it and you'll get terrible results, use it
well and it will save you time and effort over the lifespan of a project.

The big tricks are: test modules on interface boundaries, tests should test
one thing and one thing only, don't overdo it and realize that tests are code
and will need some maintenance.

Then when refactoring time rolls around you'll be a happy person.

If you overdo it or test in other places than at the interface level you'll
find out that with every change half your tests will fail and that maintaining
the tests is more work than maintaining the code. Tests will be 'brittle' and
will test code at a level that is not useful.
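
As a sketch of that distinction (the `Stack` class here is hypothetical, just a stand-in for any module): a test written against the interface survives reimplementation, while one that reaches into internals breaks on every refactor even when behaviour is unchanged.

```python
# Hypothetical module under test: a stack with a push/pop interface.
class Stack:
    def __init__(self):
        self._items = []  # internal detail, free to change

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Interface-level test: only uses push/pop, so it keeps passing no
# matter how Stack is reimplemented (deque, linked list, ...).
def test_pop_returns_last_pushed():
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2
    assert s.pop() == 1

# Brittle test: couples to the private list, so renaming or replacing
# _items breaks it even though the observable behaviour is identical.
def test_internals_are_a_list():
    s = Stack()
    s.push(1)
    assert s._items == [1]

test_pop_returns_last_pushed()
test_internals_are_a_list()
```

The first test is the kind that makes refactoring time pleasant; the second is the kind that makes half your suite fail on every change.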

Too much of anything is wrong.

------
gambler
I'm tired of people asserting that TDD results in better X, while
simultaneously stating that TDD does not _guarantee_ good X. What does that
even mean? It sounds like a non-falsifiable statement that aims to blame the
person whenever your methodology fails.

Another issue is that no one ever qualifies what exactly TDD is better than.
Better than someone vomiting hundreds of lines of code into a single file with
no systematic approach? Better than test-last development? Literate
programming? Better than Design by Contract, advanced type systems and static
verification?

In blogs, TDD proponents often completely ignore the difference between unit
tests and integration tests, pretending that there is none, and that TDD will
automagically produce both, even when their methodology does not provide any
guidance for designing integration tests. (The article does this too: _"The
test suite, that you grow along the way, will help get those features
implemented, and bugs fixed, without breaking other features."_ Uh, not
necessarily.)

And what I'm really, really tired of is the assertion that TDD produces not
just more reliable, but better-designed and otherwise better code. What I've
seen in real life is that TDD encourages developers to bulldoze through the
problem by constantly spitting out code and tests without bothering to think
at the higher level. That results in a lot of "enterprise Java" style
abstractions and fractured code that is extremely hard to read and debug.
(Yes, it's easy to understand what every particular unit does, but it's not at
all clear how they interact and why we need all of them.)

Here is a simple example of deficiencies of TDD that uses the infamous prime
factor Kata: [http://insideofthebox.tumblr.com/post/52002125683/prime-factor-kata-my-way](http://insideofthebox.tumblr.com/post/52002125683/prime-factor-kata-my-way)
There is a great question in that article: which solution would you
rather work with if you had to modify it?
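
For context, the kata asks for the prime factorization of an integer. A direct, non-incremental sketch (naming mine) fits in a dozen lines:

```python
def prime_factors(n):
    """Return the prime factors of n in ascending order, with repeats."""
    factors = []
    divisor = 2
    while n > 1:
        # Divide out each factor completely before moving on, so only
        # primes ever end up in the list.
        while n % divisor == 0:
            factors.append(divisor)
            n //= divisor
        divisor += 1
    return factors

print(prime_factors(360))  # → [2, 2, 2, 3, 3, 5]
```

The TDD version of the kata grows an equivalent function one failing test at a time; the linked article's question is which of the two results you would rather maintain.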

------
tommorris
One small problem I have with TDD _advocacy_ (as distinct from just TDD) is
that it can make it seem like an all-or-nothing proposition.

Either you start all projects and do TDD all the way or you eschew automated
testing altogether. And the former is the "right" way to do it and the latter
the "wrong" way, ignoring that in practice a little bit of testing goes a long
way.

Imagine this. You have just been asked to work on a project with a very short
deadline, on a codebase you aren't that familiar with, in a language you
aren't that keen on. You are capable with it, but you aren't a big fan. You
are going to have to go and add features to this already reasonably large
codebase and it has no tests at all because it was hacked together.

What do you do to ensure you don't break it? If it is a web app, write
yourself a simple set of automated integration tests that use a
ghostdriver-style API like PhantomJS. There's WATIR in Ruby, there's Splinter
in Python, and so on.

You then have some emergency guard rails in place. They won't be the best
guard rails. They won't be as good as if you'd been doing testing from day
one. But they help you get your head around the application in the large while
you are tweaking away with stuff in the small. And because it's not your code,
you need some guarantee that it's not breaking.
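
A sketch of such a guard rail using only the standard library (the stub server below stands in for the hypothetical legacy app; in practice Splinter or WATIR would drive a real browser against the running site instead):

```python
# Minimal "guard rail" smoke test for a legacy web app: hit the key
# pages and check nothing blows up. The StubApp server is a stand-in
# for the real (imaginary) application under test.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubApp(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"<h1>It works</h1>")

    def log_message(self, *args):
        pass  # keep the test run quiet

def smoke_test(base_url, paths):
    """Return the paths that did NOT come back with HTTP 200."""
    failures = []
    for path in paths:
        try:
            with urlopen(base_url + path) as resp:
                if resp.status != 200:
                    failures.append(path)
        except OSError:  # connection refused, HTTP errors, timeouts
            failures.append(path)
    return failures

# Serve the stub app on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), StubApp)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_port

# "/" and "/login" are placeholder routes for the imaginary app.
print(smoke_test(base, ["/", "/login"]))  # → [] when every page loads
```

Crude, but it catches the "every page now 500s" class of regression, which is exactly the 80% you want covered on day one.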

TDD advocacy is fine but a bit too perfectionist. In the real world, there's
lots of untested code. Don't make the perfect the enemy of the good. Apply the
Pareto principle: write the simplest test code you can to cover 80% of the
existing features. Prioritise by importance ("how many people would the client
want to murder if this went wrong when being tested by the end user?"), don't
let theology get in the way of you getting shit done.

------
lucisferre
I was really hoping for some insight here but in the end it was just another
TDD post that amounted to nothing more than hand-waving and feel-good
statements. I mean it wasn't even backed up logically, let alone empirically.
This amounts to
a thesis without any supporting premise.

I've written software both with and without heavy TDD. TDD does not result in
higher code quality; being knowledgeable about coding does (in fact, amongst
more junior developers TDD results in _both_ bad code and bad tests, which is
doubly costly). This is TDD's dirty little secret, not that it can take longer.

Don't get me wrong: tests and test automation have tremendous value as tools.
But TDD as a process for building all software is frankly just silly, and if
you want to convince me otherwise you're going to need real evidence.

------
tmoertel
It should be pointed out that what's actually being discussed is whether TDD
is better than not writing tests. What's not being discussed is whether, if
you commit to writing tests, there are more effective ways of doing it.

~~~
anon1385
Yes, this is the fallacy that TDD advocates use every time (even after it gets
pointed out to them repeatedly). They pretend that anybody who doesn't use TDD
isn't writing tests at all. It's incredibly frustrating.

------
danaw
Almost every point for or against TDD in this thread is based on factors that
go beyond TDD itself.

Many seem to have a bad taste in their mouths from experiencing bad tests, for
whatever reason. Often this is a result of programmers who have yet to learn
how to write effective, flexible tests. Proper TDD, just like everything in
software development, requires experience, diligence and patience in thinking
through the problem space.

Developers who are for testing, at least in my experience, tend to appreciate
the side effects of writing tests: simplified interface code, code base stability
on evolving projects and flexibility to change implementation without breaking
features.

That being said, I agree that for new problem areas, TDD is often a burden. If
you're approaching something new (a new tool, protocol or external API) it
makes a lot of sense to write throw away code until your understanding of the
problem domain evolves. At that point, when the problem is understood enough
and the feature is to be implemented into a production system, it is
appropriate to write a solution using TDD.

Ultimately everyone has their preferences and, depending on the problem and
situation at hand, it's up to the developer to decide if the long term
uncertainty of a code base without TDD is better than a short term gain in
productivity. Both approaches are appropriate, given context.

~~~
sambeau
I agree with everything you say here.

My only question is: is the process you describe for new problem areas
actually TDD?

EDIT: I've just been introduced to the concept of 'Spikes' which sound very
much like this.

~~~
danaw
If your definition is loose, then you could probably say that. Generally
though, using a TDD approach would result in some form of a test suite that
can be used to continually assert the correctness of a program.

Personally I attempt to write tests whenever possible, even in a new problem
domain. However, I generally only write some general "black box" style tests
to start.

------
RogerL
There are several claims on here, that I could address by replying, but I'll
try to top post. I see several claims about how much better the code is with
TDD. Well, can I see examples? I write tests, but I don't do TDD. By and
large, I am proud of my code; I think you could design things differently, but
rarely better. I spent 17 years doing this for the military, more for cancer
research, where results and bugs matter, and I strongly believe (absent proof,
I didn't do it with TDD after all) that my solutions were near optimal in
terms of resources used. Meaning, how long it took me to make the bug free
code, how understandable and modifiable the code was, and how defect free it
was.

So, where is the code that will blow my socks off? May I see some? I recognize
that is a difficult challenge to answer explicitly; I'm not _really_ asking
for a github link or whatever. I _can't_ show you mine, for one thing, and
neither of us will be able to accurately measure who did it with fewer
resources. So, hypothetically, in words, show me your better code.

~~~
jdlshore
I make no claims about this being better code, but here are two repositories
that were developed with 100% TDD.

Let's Play TDD (Java):
[https://github.com/jamesshore/lets_play_tdd/](https://github.com/jamesshore/lets_play_tdd/)

Let's Code Test-Driven JavaScript:
[https://github.com/jamesshore/lets_code_javascript](https://github.com/jamesshore/lets_code_javascript)

I'm not trying to blow your socks off—I think the typical programmer will say
"yuck!" when looking at _any_ code they haven't seen before—but I thought you
might be interested in the examples.

Both of these codebases were developed "live" on camera. You can see the
screencasts here:

Let's Play TDD (free): [http://www.jamesshore.com/Blog/Lets-Play/](http://www.jamesshore.com/Blog/Lets-Play/)

Let's Code JavaScript (monthly subscription):
[http://www.letscodejavascript.com/](http://www.letscodejavascript.com/)

------
quaunaut
So, at my old job, we had 'TDD', but we weren't particularly good about it.
Maybe it's because Python's testing frameworks aren't as worked out as Ruby's,
maybe it was because we were lazy. I genuinely don't know, I wasn't trying
hard at the philosophy.

But when we finally got to work at my new job, our lead made TDD a
requirement. Every time. Red. Green. Refactor. I still screw it up
sometimes (as he'll attest), but God, it's like a revelation.

Probably the nicest part, is simply being able to just _know_ every time that
you don't need to be scared of a bad git merge. Scared of hunting down a bug
that was made 3 weeks ago but no one noticed till now. Scared of deploys.

All of that is mindless for us. Pull down master, merge it in, run the tests,
be happy. Pull Request, code review, merge to master, let Travis-CI handle the
deploy.

It helps keep code simpler, makes the API more useful, and for me, I don't
know that I can go back. The moment you're past week 2 or 3 of building with
it, it's invaluable.
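
The cycle in miniature, with a made-up `slugify()` helper: each test is written first and seen to fail (red), then the simplest passing code is added (green) and tidied up while the tests stay green (refactor).

```python
import re
import unittest

def slugify(title):
    # Green/refactor result: the simplest code that satisfies the
    # tests below, tidied once they passed.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

class SlugifyTest(unittest.TestCase):
    # Red: each test here was written (and watched fail) before the
    # implementation above existed.
    def test_lowercases_and_dashes(self):
        self.assertEqual(slugify("Is TDD Worth It?"), "is-tdd-worth-it")

    def test_collapses_punctuation(self):
        self.assertEqual(slugify("Red, Green... Refactor!"),
                         "red-green-refactor")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest)
unittest.TextTestRunner(verbosity=0).run(suite)
```

The payoff described above comes from the suite: after any merge, `run the tests, be happy` is exactly this last step.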

~~~
joeblau
What were you using for Python tests? I used nose on a project[1] and it
seemed to do everything I needed.

Your second sentiment is the exact same thing that happened to me. I got that
revelation moment where I'm like "holy crap" we made changes and merged
everything and it's actually working. Which, sadly, wasn't the case before we
went into the TDD model. I mean even if the test cases were added after the
code was written, I still noticed a lot faster development cycle and a lot
fewer bugs on the projects that were using TDD.

[1] - [https://github.com/joeblau/sample-url-shortner](https://github.com/joeblau/sample-url-shortner)

~~~
quaunaut
I honestly don't remember. I was so green at the time, guh.

------
jsonmez
TDD was a step we took to learn how to create testable code that was decoupled
and had a single responsibility. It also taught us that we needed automated
user level regression tests. But, TDD is like training wheels. After you have
gotten those things from it, it does start to hold you back.

------
davidw
I think testing is very much something that follows a Pareto curve. You want
to be at the right place on the curve, or else you are either missing a lot of
relatively easy coverage, or you are spending a lot of time and money for
small gains in coverage.

On a tangent, one of my current projects involves a hardware system that
interacts with people. That makes testing things _much_ harder than a simple
Rails app where there is an abundance of helpful tools to use. Any ideas?

I was looking into this:
[http://quviq.com/index.html](http://quviq.com/index.html) - and it looks
interesting. It's quite expensive, but it might be worth it if it can
significantly increase and automate test coverage.

------
moron4hire
I'm really rather surprised that we're still talking about "is TDD worth it?"
To me, it's a bit like asking, "is OO..." or "is functional..." or what have
you. I should hope any developer I work with would know TDD and would know
when to use it and when not to use it.

TDD is a style of development that emphasizes simplicity of code interface and
accuracy and precision of results. However, not all projects have these three
things as requirements. In those areas where one has a distinct model of their
problem, TDD excels and will save you tons of time over the long run by forcing
you to consider corner cases and avoid breaking existing code with new
changes. It's excruciatingly difficult to test things in the fuzzy logic and
statistical realms, as you always run the risk that your artificially
generated data set does not reflect reality very well.

And none of this should be news. This should be standard information about
TDD. It should be clear that TDD is a tool that makes sense when TDD makes
sense. Just as it doesn't work _well_ to write an OO abstraction of general-
purpose sorting algorithms, and it doesn't work well to write a purely
functional simulation of a car, so does it not work _well_ to TDD certain
things.

------
kops
I have committed professional suicide on more than one occasion by voicing my
doubts about the whole TDD, worshipping-at-the-altar-of-"unit-testing" school.
TDD works when the problem is well defined but complicated, e.g. a
datetime library. TDD is a tool to get the grunt work out of the way and take
care of occasional forgetfulness/regression, but beyond that it is a pile of
code that becomes more and more unmaintainable with each passing day. I have
seen unit testing code grow into a snowball over a period of time, and people
wasting their time maintaining tests that they don't know a thing about. IMHO
you shouldn't write a single line of code if you can get away with it; e.g. if
you need to write unit tests for your add(int i, int j) then I guess you have
bigger problems to solve. I always take comfort in something a colleague said a
long time ago while coming to my rescue in one such outburst: "are there any
unit tests for *nix, vi, emacs etc., the kind of software I don't see crash
very often?"

Disclaimer: My TDD worshipping friends seem to do better in their professional
lives, which I will shamelessly put down to industry trends/religion rather
than the true merits of the methodology ;-) It suits me better...

~~~
ratsbane
I think I've had the same experiences and made the same observations as you.
Also, +1 to this: "...you shouldn't write a single line of code if you can get
away with it."

------
sambeau
My problem with TDD is that it, fundamentally, isn't agile. It either relies
on a waterfall-like process where the full design is known beforehand and
therefore what tests have to be written, or it locks a programmer (and
designer) into a position where to iterate and/or backtrack would require not
only new code to be written but new tests.

It is very hard for a designer to ask for changes once tests are written.

I much prefer a looser arrangement where code (and design) are gradually
locked down after prototyping, backtracking and iteration. Code gets thrown
away and rewritten, but tests are only created once — after the implementation
(and by definition the design) is locked down. It is the very generation of
the tests that tells the world that this design is unlikely to change, that
documentation is safe to create, and that performance tests and optimisation
passes can be made.

Of course, after tests are created, a programmer is free to refactor safely
and replace _all_ the code as long as the tests continue to pass.

There are occasions where it makes good sense though. For instance, if you are
writing a Maths library or the core business logic of an accounting app.
However, these are both essentially examples of a waterfall-like development
process.

~~~
spellboots
It's not true to say that TDD isn't agile. They are orthogonal - you can be
agile and do TDD and you can do waterfall development and not do TDD. The fact
that you need to change tests to change the software is the whole point - if
you change the functionality, you change the tests. In fact, it's not wanting
to change the tests that causes the problem here.

Perhaps you have been on bad projects where TDD has been used improperly.
However every single agile development team I have worked with has used TDD,
and it certainly does not limit things in the way you suggest.

Ironically, Extreme Programming [1], one of the first of the agile
methodologies, and many other agile methodologies since, include TDD as a core
tenet. So in fact by saying "TDD isn't agile" you are perhaps demonstrating a
misunderstanding of what "agile" development was designed to be.

[1]
[https://en.wikipedia.org/wiki/Extreme_Programming](https://en.wikipedia.org/wiki/Extreme_Programming)

~~~
sambeau
_In fact, it's not wanting to change the tests that causes the problem here_

This is the point I was making. Programmers never want to change the tests as,
often, it has taken as long to create the tests as it did to write the code.
It adds a layer of syrup to the whole process. You can no longer be nimble as
you are carrying the weight of the tests.

And, yes, I've worked with precious designers who don't want to change their
design when a first iteration clearly shows that it isn't working.

But to be truly agile (and I'm not talking about some written methodology here
— I mean agile as in nimble) you need to work quickly, try things out and make
mistakes. Writing your tests _first_ is an encumbrance.

_Perhaps you have been on bad projects where TDD has been used improperly_

This is what everyone says the moment their dogma is questioned.

I've read the books, I've been on the training courses and I have certificates
in my drawer.

My experience is that if you want to create anything well-designed you need
to iterate, and there is no point in writing tests until your design is fixed;
it just slows everything down.

~~~
spellboots
> But to be truly agile (and I'm not talking about some written methodology
> here — I mean agile as in nimble) you need to work quickly, try things out
> and make mistakes. Writing your tests first is a encumbrance.

Perhaps it makes sense to be explicit about this when talking about it, as
"agile" has a commonly accepted meaning in software development terms and this
is not it.

> This is what everyone says the moment their dogma is questioned.

You have said that TDD "fundamentally, isn't agile" and have asserted
that you can't use it to be agile. I have stated (in a sibling comment) that
TDD is a business decision, where you trade increased development time for
higher software quality, and cited a study on the matter that is non-
anecdotal. Your position is the dogmatic one by the commonly accepted
definition of "dogmatic".

~~~
sambeau
OK, let's examine the commonly accepted meaning of Agile Development and see
where TDD fits in.

TDD is at odds with half of the Agile Manifesto, namely:

* Individuals and interactions over processes and tools

* Responding to change over following a plan

You should first try stuff out and discuss it rather than working out how to
machine-test it before it's been tried by a human. Having tests first is
clearly having a plan first.

TDD also rubs up against two principles of the Agile Manifesto, namely:

* Welcome changing requirements, even late in development

* Regular adaptation to changing circumstances

It's hard enough to welcome changing requirements when you have to rewrite
software without having to rewrite all the tests too. Why write tests for code
you don't keep?

Changing and adapting is harder when you are encumbered by tests. Tests should
be written once you've agreed that what you have now is what you are going to
keep.

TDD adherents I have worked with in the past also rub up against this one:

* Working software is the primary measure of progress

Tests are not 'working software'; they are a protective covering you put over
working software to make sure it doesn't get ruined as people work around it.

At its heart Agile is about being adaptive: evolutionary changing plans,
iteration, rapid and flexible response to change.

TDD brings overhead: tests that needn't ever have been written; tests that
needn't ever have been re-written. They hamper iteration and flexible response
as they make it more costly to change plans.

~~~
spellboots
Perhaps you haven't seen it work; I have seen it work many times, and like I
said, it is about tradeoffs. You trade increased development time for higher
quality software.

I think it's quite bizarre to be arguing that TDD is not agile when it is an
explicitly agile methodology. Kent Beck, the inventor of TDD, is an author of
the agile manifesto that you quote. The authors of the document clearly
disagree with your interpretation of it.

>You should first try stuff out and discuss it rather than working out how to
machine-test it before it's been tried by a human. Having tests first is
clearly having a plan first.

This is a strawman argument. TDD does not imply working out how to machine
test something before it's been discussed and planned by a human. If you need
to try things out, agile has the concept of spikes. This is not an argument
against TDD
because this does not describe TDD.

>It's hard enough to welcome changing requirements when you have to rewrite
software without having to rewrite all the tests too. Why write tests for code
you don't keep?

Tests are part of the code. Why write code you don't keep? Yes, you have to
throw away tests if you throw away code, but this is not a TDD issue. The
issue is that you're throwing away the code (of which the tests are a part).

> Changing and adapting is harder when you are encumbered by tests. Tests
> should be written once you've agreed that what you have now is what you are
> going to keep.

No, changing and adapting is easier when you have tests because you can be
sure that you haven't broken anything. If tests fail when you change things,
either the requirements have changed so you change the tests, or you've
accidentally broken something and the tests have caught it. Tests should be
written at the same time as the code to validate that the assumptions of the
developer at the time the code is written remain true.
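The failure modes described above (a failing test means either the requirement changed or something was accidentally broken) can be sketched with a toy example. The pricing rule and names here are invented for illustration, not taken from any real project:

```python
# Toy example (names invented): a pricing rule with a test written
# alongside it. If the test later fails, either the requirement changed
# (so you update the test) or a change accidentally broke the rule
# (so the test has caught a bug).

def discounted_price(price, is_member):
    """Members get 10% off; prices never go negative."""
    discount = 0.10 if is_member else 0.0
    return max(round(price * (1 - discount), 2), 0.0)

def test_discounted_price():
    assert discounted_price(100.0, is_member=True) == 90.0
    assert discounted_price(100.0, is_member=False) == 100.0

test_discounted_price()
```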

> Working software is the principal measure of progress

The _working_ part is important - if comprehensive tests reduce defects in
software, which is a pretty uncontroversial statement (if you disagree, you
are at odds with all the published research I have seen on the matter), then
writing tests helps your progress by helping to deliver _working_ software.

> TDD brings overhead: tests that needn't ever have been written; tests that
> needn't ever have been re-written. They hamper iteration and flexible
> response as they make it more costly to change plans.

And again, I repeat, this is an engineering tradeoff. Is speed more important
than quality for your business case? Then TDD is a bad fit.

------
Swizec
Is TDD worth it?

Yes. It helps you code much faster. Tests run soooo much better than the
brittle look-and-poke method of testing your implementation.

Also much more reliable.

TDD might not be worth it if you're working in a strictly typed language with
no side effects, or it might be a bit less worth it. But in a dynamic or
untyped language you have to do that half of the compiler's work in tests.
Especially if there's more than one person touching your code.

------
joeblau
I think it depends on your development environment. If you're writing
something small and it's just you, you'll understand everything that you're
doing (hopefully). If you're building a complex system with multiple parties
and you're practicing separation and rotation of duties (no one owns any piece
of code), then YES. I've gotten to the point that even for interviews where
I'm asked to write code, I'll add test cases into whatever I'm building.

I've worked on projects where code isn't tested, and I've made a simple
change thinking that's all I had to do to fix some "bug", only to be told by
the QA team that I'd broken three other features - features which test cases
would have caught at build/test time. That turned an hour-long fix into a
week-long back and forth with QA and the original developer.

On the other side, I just wrote an iOS game and it has 0 test cases. I pretty
much know what's going on, I'm the only developer on the project and I don't
see it getting much bigger (famous last words right?).

------
Stwerner
I've been a TDD practitioner for a while now, and I've lately started to
question the value of doing the refactor step within the red/green cycle,
even if the code that makes the tests green is a horrible kludge. Sure, it's
going to add technical debt to your project, but I can't shake the feeling
that it is usually premature optimization of the code's maintainability.
Especially for a new feature, the maintainability requirements seem pretty
low at first, since you could find out, for example, that the feature was
completely unnecessary in the first place.

In a talk once, I heard an offhand remark that a good signal that a method
needs refactoring is that you're frequently fixing bugs in it, because it
means you don't have a good grasp of what the method is supposed to be doing.
I've started using that rule in side projects to optimize more for getting
things done in the few hours on nights and weekends I have to work on them.
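The red/green-without-refactor idea can be sketched as follows; the fizzbuzz function and the lookup-table kludge are invented for illustration, not taken from the talk:

```python
# Red: the test is written first and fails before any implementation exists.
def test_fizzbuzz():
    assert fizzbuzz(3) == "fizz"
    assert fizzbuzz(5) == "buzz"
    assert fizzbuzz(15) == "fizzbuzz"
    assert fizzbuzz(7) == "7"

# Green: the quickest kludge that passes: a lookup table that only handles
# the tested inputs. Under the approach described above, it stays like this
# until frequent bug fixes in it signal that a real refactor is worthwhile.
def fizzbuzz(n):
    return {3: "fizz", 5: "buzz", 15: "fizzbuzz"}.get(n, str(n))

test_fizzbuzz()
```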

~~~
epochwolf
The point of refactoring is to make the code cleaner and easier to understand.

~~~
Stwerner
Right, but if you're never going to look at the method again and/or remove it
next week for whatever reason, you've just wasted a bunch of time that could
be spent on more valuable things.

My thinking is definitely more focused on early stage start ups/personal side
projects than bigger mature projects.

But what I'm trying to get across is that I'm starting to wonder whether "big
clunky code base that's hard to support, maintain and understand" is in the
same class of good-to-have problems for early stage start-ups/side projects
as "I chose a technology that is tough to scale to 500 million users", as
long as you have a comprehensive test suite.

~~~
ollysb
I tend to vary the amount of refactoring I do depending on how much code is
likely to depend on it. For instance, I'd spend more effort in the DOM than I
would in the controller (remember, this is a matter of degree, not
all-or-nothing). There's also nothing to stop you from refactoring _before_
you start on a piece of code. Maybe you didn't think it would be worth
refactoring at the time, but now that you're adding a new feature you can see
that applying the refactoring would make it easier.

------
lusr
I was very keen on using TDD to lay out the design and technical specification
of a solution to be implemented by our team (8 developers total).

Unfortunately, because the project has tight deadlines (regulatory project),
despite minimal requirement clarity the team needed to get going and in the
end every developer ended up with just a rough guideline of how to implement
the various components.

Does anybody have ideas for integrating TDD with an Agile approach and a team
around this size?

~~~
fennecfoxen
I've done Extreme Programming (cf. _Extreme Programming Explained_ et al)
approaches on a team of about that size and it worked okay, but the deadlines
weren't as hard. The pair-programming aspects helped keep everyone informed
about what implementation decisions were being made and what tools were being
used (quite useful on a team of eight) without bogging things down with
meetings, and it
also helped keep people very focused... and also honest about actually being
TDD, as in "don't write a single line of code which isn't necessary to make
some test pass", which helps keep you from overengineering some part of the
code and helps you meet deadlines.

But it's hard to say whether it'd work, because (a) your problem is described
very vaguely, (b) your team might not be up for it (the classic problem), and
(c) even if they were up for it, it's a process they likely haven't practiced
much, which makes it harder.

------
wmt
Like many have said, it also bothers me that I don't believe TDD code is any
better than non-TDD code with identical unit test code (or path) coverage.

------
benihana
The fact that when this blog post says TDD, it means "writing unit tests" is
really annoying. TDD is a methodology of writing features by writing tests
first to define the behavior you're targeting, then writing code to match that
behavior. That is different than "write code, then write tests to assert the
behavior of your code." You can have excellent tests and excellent code
coverage and excellent behavior documentation without doing TDD.

