
Test-Driven Design: not for the early stages of development - yungchin
http://www.tbray.org/ongoing/When/200x/2009/06/23/TDD-Heresy
======
10ren
mjd also wrote on this:

 _"There must be a specification," a couple of people said. "Just write the
tests to check the requirements in the specification." There was an amazing
disconnect here between what I was asking and what they were answering._
[http://use.perl.org/comments.pl?sid=42511&cid=67765](http://use.perl.org/comments.pl?sid=42511&cid=67765)

I think TDD is a great idea - if "you know what you are doing". It solidifies
your interfaces. Which is great, provided you won't want to change your
interfaces.

It's the dogmatism of TDD that scares me. When "heresy" is applied to a
methodology, it makes me think that a theory of reality has been favoured over
reality. But hey, maybe that leaves the field clear for the reality-seekers.

~~~
evilneanderthal
I'm not sure I fully understood everything you were saying here.

But I would mention that unit tests give you a nice picture of what will break
when you DO change your interfaces. The list of broken tests is your new task
set.

This is a significant benefit.
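To make that concrete (a toy sketch, not from the thread; the function and test names are made up): a unit test pins down the current interface, so the moment the interface changes, the failing test itself names the work to do.

```python
# A tiny "interface" and a test that pins it down.
def area(width, height):
    return width * height

def test_area():
    # If area() is later changed -- say, to take a single Rect
    # object instead of two numbers -- this test fails at once,
    # and the broken test *is* the new task: update the callers.
    assert area(3, 4) == 12

test_area()
print("interface unchanged: test passes")
```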

~~~
10ren
I agree with you that tests will tell you when you break your interfaces. My
position is that in some cases _you don't want that_.

mjd explains it well. Here's my attempt at elaboration:

It only applies to a particular kind of project, at a particular stage. Imagine
you are designing a project in which you change the interfaces at the drop of a
hat, daily or more often. In that scenario, the "new task set" of the list of
broken tests is extra work that increases the viscosity of your code. It's a
distraction from your actual exploratory task of trying out different
interfaces. You are setting out to change interfaces because you are
experimenting with what they should be: that's your actual task. This only
applies when you need extremely fluid code, and when it doesn't really matter
if there are lots of bugs. Think of a poet daydreaming, an artist sketching
out a very rough outline, or the first prototype of an inventor (made with wax
and string).

~~~
10ren
I thought of another way to say this (not that you need it):

Sometimes you write code to _throw away_. James Gosling has said he does this.
Fred Brooks said "build one to throw away". Novelists write drafts, artists
draw sketches, and sculptors carve studies (I was amazed to see that
Michelangelo did several studies for his sculpture of _David_ - full size, in
marble!)

In contrast, _tests are an investment in code_. They make the code better,
more reliable, more correct - more tested. They make it harder to throw away
the code.

------
mavelikara
I agree with and understand this part:

> .. when I’m getting started, I never know what X and Y are.

> .. once you’re into maintenance mode,

But, how does one know when maintenance mode starts? In the build
something/release/add feature/release/add feature cycle, it is easy to
rationalize that I am not in maintenance mode until it is late.

Of course, as davidw said above, no advice is better than "use your head" :)

~~~
bsaunder
I think migrating from prototype to maintenance mode is a continuum. For some
reason maintenance mode seems to begin (IMHO) somewhere around 60%-80% of the
work for version 1.0. At that point it's too risky not to be testing routinely,
and things should have settled enough that you can depend on some solidified
interfaces.

I like this approach to TDD. This feels more efficient than starting with
failed test cases on day 1.

------
davidw
Not much to add to that, it's pretty much exactly the way I feel things ought
to be. I.e. be sensible and use your head: if you've got things figured out,
spend some time putting together some good tests, they'll save you in the long
run. If, on the other hand, you're still exploring, writing code _and_ tests
is likely to be a waste of time if you end up throwing things out a few times.

~~~
hello_moto
As long as your code can be unit tested, it's okay to test it later.

But what if the "exploratory" code becomes hard or almost impossible to test
(due to dependencies and the like)? Refactor it? But then you have no tests
acting as a safety net during the refactoring.
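One common way out of that bind (a sketch, with made-up names; not something the commenter proposes): inject the dependency, keeping the real one as the default, so the exploratory code becomes testable without a risky full refactor first.

```python
import datetime

# Hard to test: the dependency (the clock) is baked in.
def greeting_untestable():
    hour = datetime.datetime.now().hour
    return "good morning" if hour < 12 else "good afternoon"

# Testable: the dependency is injected, real clock as default.
def greeting(now=None):
    hour = (now or datetime.datetime.now()).hour
    return "good morning" if hour < 12 else "good afternoon"

# Tests can now pin down behaviour *before* deeper refactoring:
assert greeting(datetime.datetime(2009, 6, 23, 9)) == "good morning"
assert greeting(datetime.datetime(2009, 6, 23, 15)) == "good afternoon"
print("characterization tests pass")
```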

------
Arun2009
My last job was quite an eye-opener as far as TDD was concerned. I was a C++
developer on my team who also coded in Java as the need arose. The Java team
lead would absolutely INSIST that we write the JUnit tests before coding the
actual implementation (no such headache on the C++ side). It was a major pain,
because the time we should have been doing "real work" was used up writing
tests.

We came to appreciate that, however, when we got into the maintenance phase of
the project. Just running the tests would quickly give us confidence that
nothing we did in the current release had broken what already existed. We had
a bunch of test harnesses for each Java module in the system. It was the
coolest thing I had seen!

------
nagoff
I took a crack at the testing issue as it applies to startups in my last blog
post: [http://unfeatureddocuments.com/content/unit-tests-headlights-
or-handbrakes](http://unfeatureddocuments.com/content/unit-tests-headlights-
or-handbrakes) Loosely put, my feeling is that, for a startup, you need to
focus testing on those parts of the code that you believe will still be in use
in six months' time.

------
prodigal_erik
I used to start with one chunk of prototype code in main() and factor out
functions and classes as they started to make sense. Now I make assertions as
I go and name it testMain(). Sometimes there's just one assertion on the final
result at the end, because even that helps.
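For what it's worth, the pattern reads something like this in Python (testMain is the commenter's name for it; the body is a made-up illustration):

```python
def test_main():
    # Prototype code lives here first; functions and classes get
    # factored out only once they have earned their keep.
    raw = "3,1,2"
    numbers = sorted(int(x) for x in raw.split(","))
    total = sum(numbers)

    # Even a single assertion on the final result is a safety net:
    assert numbers == [1, 2, 3]
    assert total == 6
    return total

test_main()
print("testMain-style assertions pass")
```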

