
TDD Derangement Syndrome - gthank
http://blog.objectmentor.com/articles/2009/10/07/tdd-derangement-syndrome
======
thunk
It seems to me that TDD works against both bottom-up and exploratory
programming.

Bottom-up programming develops tiny functions that are each trivially correct.
These tiny functions form a base on which the next layer of tiny functions is
built, and so on -- each layer written in the layer below. Most of these tiny
functions begin life _as_ tests, which are then generalized and encapsulated
in a function. So in that sense BUP is sort of test-driven. But if it's
necessary to write _further_ tests to determine the correctness of these tiny
functions, You're Doing It Wrong.
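A minimal hypothetical Python sketch of that progression - a throwaway check that gets generalized into a tiny function, with the original check kept as its only "test":

```python
# Step 1: an expression checked interactively, as a quick sanity test...
assert "2009-10-07".split("-")[0] == "2009"

# Step 2: ...generalized and encapsulated as a tiny, trivially correct
# function, with the original check kept as its one sanity assertion.
def year_of(date_str):
    """Return the year component of an ISO-style date string."""
    return date_str.split("-")[0]

assert year_of("2009-10-07") == "2009"
```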

In exploratory programming you're not sure where you're going yet. The
development path during exploratory programming looks like a crooked tree with
lots of stubby dead-end branches. If you first developed tests for each of
those stubs, only to find that that's not where you're going, you may never
stop writing tests, assuming you even know what to test for. Plus, testing
during exploration would kill the flow. You can't be a pioneer and a settler
at the same time.

I'm not saying not to test. And I'm not saying not to use TDD. TDD is probably
great for top-down projects where you already know where you're going. But if
it preempts the process by which many of the great programs were written -- if
it preempts hacking -- then it's not the final word.

~~~
kristiandupont
My experience is that TDD helps you write code that consists of tiny functions
that are trivially correct, for the simple reason that such code is the
easiest kind to write a test for.

As for the exploratory programming argument, a key element in TDD is to write
_one_ test and then make it succeed. You are not supposed to write a lot of
tests and then the code that makes them succeed. So unless you are so
exploratory that you don't even know what you want your next function to do,
this would not hinder you either.
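The one-test-at-a-time loop can be sketched in Python (`slugify` is an invented example, not something from the thread):

```python
# Step 1 (red): write ONE failing test for the next behavior you want.
def test_slugify_lowercases():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to make that one test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

test_slugify_lowercases()  # passes; now repeat with the next single test
```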

------
silentbicycle
Does it seem weird to anybody else that about half the prominent projects he
listed as using TDD are testing tools?

That's like saying type systems are good because they help you write sound
proofs _about type systems_. I know they can be really useful, but it's an
incredibly circular argument.

~~~
idlewords
It seems really weird. I've been trying to get an answer to the question "what
popular apps have actually been built using TDD?" for a while, without
success. All of the entries on his list are either libraries, programming
languages, or testing tools and frameworks. Can anyone name actual _apps_ that
TDD has blessed us with?

~~~
joshuab
<http://www.thoughtworks.com/our-clients/>

<http://pivotallabs.com/clients>

<http://www.hashrocket.com/projects>

<http://www.thoughtbot.com/work>

~~~
joe_the_user
It's hard not to see that as a list of clients of consulting firms rather than
a list of serious open source software built with TDD.

I know the Rubinius project was built aiming to use TDD and it doesn't seem
to have gone anywhere YET. As far as I can tell, a compiler is one beast that
needs a strong abstract design first rather than a set of tests first - but
I'm prepared to be surprised.

Pivotallabs lists Twitter first but I would be rather surprised if someone
could verify that a substantial part of Twitter's app has been built using TDD
(go ahead, surprise me).

~~~
ZeroGravitas
One of the comments on the original post claims open source apps don't count
because they _"are not subject to many of the challenges the average developer
faces in a corporate environment"_ so it appears TDD fans can't win either
way.

But since you're looking for open source success stories, there were some very
positive noises about TDD from a Twisted dev recently:

 _"I would strongly recommend at least practicing some TDD, too. Learning how
to write tests was useful for me, but practicing TDD really expanded my design
skills in a bunch of surprising directions that were almost unrelated to
testing. I originally felt as you do, that I wasn't disciplined enough to do
it; now I feel like I'm too undisciplined not to do it ;-)."_

From: [http://glyph.twistedmatrix.com/2009/09/diesel-case-study-
in-...](http://glyph.twistedmatrix.com/2009/09/diesel-case-study-in-that-
thing-i-just_24.html)

Though he specifically states that Twisted didn't start out TDD, which he
regrets - maybe that disqualifies it.

Regarding Rubinius (and JRuby, which was listed in the original post) I'm not
sure trying to create a spec for an existing language implementation and then
trying to meet it would count as "true" TDD, if people care about such
distinctions. But maybe they do TDD for other bits lower in the stack, I'm not
sure.

~~~
joe_the_user
Well,

I've only scanned the Rubinius code and noted their blogs/announcements but
... it seems like they shifted from one basic model of their virtual machine
to another in the middle of their process, and I'm quite skeptical that this
either indicated they were doing well or that it would be considered a victory
for TDD. Looking from the outside, it looked more like "we thought that TDD
would let us get by without design but we were wrong...".

I've heard TDD described as "write a test, write a function, repeat" and THAT
is not enough. You definitely need a guiding design as well.

------
Semiapies
When people start throwing around terms like "derangement syndrome" to refer
to those who don't buy their ideas, that's a warning indicator.

~~~
kashif
this is such a smart and enlightening comment - why has it just got 8 votes?

(sarcasm)

------
JustAGeek
I also think that this list isn't very convincing.

What I don't get is: if Robert Martin uses TDD, why doesn't he simply list
those "real" projects (real as in projects for clients) he's worked on?

~~~
zacharydanger
How many of his client projects do you expect anyone to have heard of?

~~~
nickelplate
Probably not many, but I don't think that is necessary. He just has to show us
a convincing list of projects (e), and demonstrate how these projects improved
with the introduction of TDD, WITHOUT blurring the line between TDD and unit
testing. Not student assignments, but real world projects where real jobs,
real money, real stakes are on the line. The problem here is that if TDD is as
effective as the zealots and XP gurus claim it is, then the evidence for it
should be overwhelming.

Edit: (e) Listing projects that are: 1. real (not just student, testing or
open source projects); 2. large enough that real stakes are on the line (I
want to see an example of a 2 MLOC system before and after TDD, not the 70
KLOC of FitNesse) would make me pay attention. Then I want to see how TDD
improved the state of the project (and that picture has to include things like
productivity and a cost analysis of test-first testing everything). Then I want
to see why the same results could not have been achieved with "test last" or
"test whenever" unit testing. Then I would want to see an explanation for the
rate of success other people have had without ever practising TDD - because if
TDD is as effective as the zealots claim, and if the zealots make a living
making those claims, then that explanation HAS to be there.

------
jlouis
I read

"An Initial Investigation of Test Driven Development in Industry" by Boby
George and Laurie Williams, and "Realizing quality improvement through test
driven development: results and experiences of four industrial teams" by
Nagappan et al.

and I am not too impressed. The first study has 6 TDD teams and 6 control
teams. Our null hypothesis is that these teams produce exactly the same
quality of code, measured in defect rate. Now, any sane study would take a
look at (statistical) power, i.e. the probability that the test will reject
the null hypothesis when it is false. If we further look at the box-plots, we
see that the overlap between the "treatment" and control group samples is
pretty big. Hence, we need to test for statistical significance before drawing
any conclusions. In fact, it may even be that we can _reject_ the idea that
writing in a TDD style takes 16% more time on average.

The second study is much worse. It quotes studies secondhand and only covers 4
teams (with the 4 "controls" being completely different systems). It generally
fails to address the problem of lurking variables, or confounding: that some
unmeasured external factor affects the study. For instance, it is put forth
that the Microsoft teams in the study employ many methods for minimizing
program defects, including static analysis. The other grand problem here is
that there is no assessment of the control-group systems at all. They will
invariably differ in more than one variable, and the question is: have we
measured the benefits of TDD or have we measured the benefits of something
else? We can't say for sure.

Finally, these studies only test statically typed languages with a subtyping
construct (Java, C#, C++). I think (and I stress it is a hypothesis) that
dynamically typed languages would benefit considerably _more_ from a TDD
system in place, and I hypothesize that more expressively typed languages (ML,
Haskell) would benefit _less_. But I have no good idea of how to design an
experiment that could test this hypothesis.
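To illustrate why power matters with samples this small, here is a rough stdlib-Python simulation. All numbers are invented for illustration, and a permutation test stands in for whatever analysis the papers actually used:

```python
import random
import statistics

def mean_diff(a, b):
    return statistics.mean(a) - statistics.mean(b)

def permutation_p_value(a, b, n_perm=200):
    """Two-sided permutation test on the difference of group means."""
    observed = abs(mean_diff(a, b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if abs(mean_diff(pooled[:len(a)], pooled[len(a):])) >= observed:
            hits += 1
    return hits / n_perm

def estimated_power(effect, sd, n=6, alpha=0.05, trials=200):
    """Fraction of simulated n-vs-n experiments that reject the null."""
    rejections = 0
    for _ in range(trials):
        control = [random.gauss(0.0, sd) for _ in range(n)]
        treatment = [random.gauss(effect, sd) for _ in range(n)]
        if permutation_p_value(control, treatment) < alpha:
            rejections += 1
    return rejections / trials

random.seed(42)
# With 6 teams per group and noise comparable to the effect (as the
# overlapping box-plots suggest), power lands far below the usual 0.8
# target -- most such experiments would miss a real effect entirely.
print(estimated_power(effect=1.0, sd=2.0))
```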

~~~
joe_the_user
Actually, now that you point me toward the real data...

It makes me even more skeptical of TDD.

I would _expect_ most projects using just about any new methodology to _look_
notably better in tests through the programmer's equivalent of the placebo
effect: a new methodology imparts energy and optimism compared to just doing
the same old.

This would lead me to deduce that once the "shine" wears off TDD, it would
actually give no improvement in quality and result in considerably longer
development time...

------
raganwald
What I see far too often in these debates is the following:

Person A claims "Process P is good!"

Person B retorts "Sez you, can't prove it to my satisfaction!"

And the debate rolls on and on about whether we can prove P or not. I am not
blaming Person B, but the question I want to put to them is this: _Can you
prove that whatever you are doing now works? Or is it just a case of you feel
it works for you?_

~~~
silentbicycle
The debate would be more useful for all involved if it were re-framed in terms
of asking what problems TDD is well-suited to, rather than arguing over
whether it's (implicitly, always) a net win or not. (This is also true for
many other techniques, I think.)

~~~
billswift
It would also be useful if people would be more explicit and specific about
the techniques they are actually discussing. Consider, for example, the
earlier comments about confusing or conflating TDD and unit testing.

~~~
silentbicycle
Part of the reason the debate is muddled is that TDD forces one to use unit /
regression testing. People promoting TDD often take credit for the benefits of
those, whereas evaluating TDD on its own asks whether writing tests upfront as
a design technique is actually better, _provided tests are still written_.

------
DannoHung
Anyone else sort of think that the "To TDD or not" argument eclipses the much
more important argument of "To have high unit test coverage or not"?

~~~
akeefer
The research I've seen on TDD seems to essentially point to two things:
quality and productivity are positively correlated with the number of tests,
and people who use TDD tend to write more tests. People who write the same
number of tests but don't use TDD get similar benefits to people who do use
TDD. So TDD is itself useful primarily in that it ensures that you actually do
write the tests. (Unfortunately I don't have any citations handy to back that
up; that's just my recollection of what I've read on the subject over the
years).

There's that ever-elusive "it improves design" argument for TDD, but that
one's far harder to prove either way, and I've personally seen it cut both
ways: sometimes things end up more nicely decomposed, and sometimes they end
up wayyyyy too decomposed.

~~~
DannoHung
This squares with my experiences.

As for the design argument, I think that the thing that TDD actually
encourages in terms of design is testability (something that I do think is
desirable, in general). If you have high test coverage without using TDD,
you've probably either taken testability into consideration before you wrote
your code or you made some changes after the fact to accommodate your tests.

~~~
akeefer
I'd definitely agree that TDD leads to testable code, which is usually a good
thing. If you know how to write testable code, it's not as much of a win (in
my opinion), since you tend to write with testing in mind. If you don't have
experience with unit testing, though, doing strict TDD for a while can really
help you learn what sort of constructs make for testable code.

The places I've seen TDD lead to less-than-desirable design is when some
larger problem is broken into such tiny pieces with so many interfaces that it
becomes hard to read through the code and follow the flow of control. That's
certainly testable, but often I personally get as much mileage out of
writing tests one level up, against some higher-level abstraction or API that
those other things are all components of, and not being so religious about
breaking up all the little parts. In my experience those slightly-higher-level
tests (they're not even "integration" tests, since they're still strictly
against one logical component) tend to provide the same amount of value in
terms of preventing regressions while requiring fewer compromises to make the
little parts testable and giving more flexibility as to future refactoring.
Essentially, you're treating a whole bunch of classes/parts as internal
implementation details.
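A hypothetical sketch of that "one level up" style (the order-total example is invented, not from the thread): the small helpers stay untested internals, and one test pins down the public behavior.

```python
# Tiny helpers -- internal implementation details, free to be refactored.
def _line_total(qty, unit_price):
    return qty * unit_price

def _apply_discount(total, rate):
    return total * (1 - rate)

# The public operation that composes them; this is the level we test.
def order_total(lines, discount_rate=0.0):
    subtotal = sum(_line_total(q, p) for q, p in lines)
    return round(_apply_discount(subtotal, discount_rate), 2)

# One slightly-higher-level test covers the behavior callers care about,
# without demanding that every helper be independently testable.
assert order_total([(2, 3.00), (1, 4.00)], discount_rate=0.1) == 9.00
```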

~~~
DannoHung
I think we see eye to eye here. The sort of "one level up" thing is what
initially attracted me to spec testing because that seemed to not only give
you tools for performing your tests, but also helped direct you towards what
would be useful to test: the behaviors of the code that you actually care
about. So when I use spec testing tools, I tend not to think about making sure
that the low-level function traverses the directories in a particular way, but
rather that all of the directories have been renamed (for example).
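A hypothetical sketch of that difference (`rename_all` and the directory layout are invented): the test asserts the outcome, not the traversal.

```python
import os
import tempfile

def rename_all(root, prefix):
    """Add a prefix to every directory under root.
    (Hypothetical function; its traversal order is an internal detail.)"""
    # bottom-up, so child paths stay valid while parents are renamed
    for dirpath, dirnames, _ in os.walk(root, topdown=False):
        for d in dirnames:
            os.rename(os.path.join(dirpath, d),
                      os.path.join(dirpath, prefix + d))

def all_dirs_prefixed(root, prefix):
    """The behavior we actually care about, stated as a predicate."""
    return all(d.startswith(prefix)
               for _, dirnames, _ in os.walk(root)
               for d in dirnames)

# Spec-style test: set up a tree, run the operation, assert the outcome --
# not WHICH order the directories were visited in.
with tempfile.TemporaryDirectory() as root:
    for name in ("a", "b", "c"):
        os.makedirs(os.path.join(root, name, "inner"))
    rename_all(root, "old_")
    assert all_dirs_prefixed(root, "old_")
```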

