

Be professional, do TDD - koski
http://weblog.madebymonsieur.com/be-professional-do-tdd/

======
teilo
And yet ... millions of lines of code are written every day and tested the
old-fashioned way, and the world keeps spinning round and round.

Granted, every project can benefit greatly from unit tests. As for TDD, it is
an admirable practice, but let's not get religious.

~~~
lyudmil
I didn't like the tone of the post either. I do think that this needs to be
settled, though. If we're ignoring a practice that guarantees an improvement
in the quality of our code and the speed of our delivery without a _very_ good
reason, then we _are_ being unprofessional.

Does TDD deliver what the OP says it does? I think so (anecdotal evidence, so,
worthless), and there is research out there to support it [1] (possibly valid
evidence). It seems we have at least a basis for debate.

Therefore, if you don't use TDD, you should raise objections to the existing
research, or offer some anecdotal evidence in opposition to TDD. Then perhaps
we could conduct experiments to test the corresponding hypotheses and finally
converge on a verdict. I think if we (as a field) fail to do that, we _are_
being unprofessional.

[1] Evaluating the Efficacy of Test-Driven Development [pdf].
[http://research.microsoft.com/en-us/projects/esm/fp17288-bhat.pdf](http://research.microsoft.com/en-us/projects/esm/fp17288-bhat.pdf)

~~~
mhd
That study you cite has a _very_ lenient definition of "comparable team".

Not too surprising; most methodology studies that I've seen can't exactly
claim scientific rigor. (Yes, it's hard to do when it comes to real-world
projects, but other fields of scientific study manage better.)

~~~
lyudmil
"Lenient" would imply that their approach skewed the results. If that's what
you meant, could you explain why you say that? I'm hoping your explanation
could lead to a testable hypothesis.

If you meant that they weren't rigorous in controlling variables related to
team composition, I'd agree. However, I think those variables can safely be
ignored if the study is repeated and validated many more times (which the
authors acknowledge is a work in progress).

~~~
mhd
Project A:

TDD Team: 6 people, 24 man-months, no legacy code, 6 KLOC.

Trad Team: 2 people, 12 man-months; no indication whether it was a legacy
project, or whether they used unit testing at all (and if so, what their
coverage was).

Project B:

TDD Team: 8 people, 46 man-months, no legacy code, 26 KLOC.

Trad Team: 12 people, 144 man-months, 149 KLOC; again no indication whether
it was legacy, whether they used unit testing, etc.

The report mentions that they were comparable because both teams reported to
the same manager (not too surprising for Microsoft).

I hope that there's more info about this study and that I'm a bit
overreacting, but just given these tables, I wouldn't agree that we're talking
about comparable teams and projects here.

So does this test unit-testing? Does it test TDD? Does it test development
without legacy code vs legacy code? And even if you would get results, would
it be the same for people with different experience levels? (Which is a
problem for academic studies)

------
henrikschroder
"It is unprofessional to release code that you are not sure if it will work or
not. Know that it works."

The cost of making software that you _know_ works, for the formal definition
of "know" [1], is very, very high. That level of quality is usually reserved
for space shuttles and the like.

TDD makes software as good as the tests you write. No more, no less. It can be
a great aid in producing better code, and better tested code, but in the end
all software development is a trade-off between time spent and quality. You
have to decide where the line for "good enough" is. TDD itself doesn't
magically move this line to "perfect" without incurring a corresponding cost
in development time.

[1] A knows B iff B is true and A believes B and A has good reason to believe
B.

------
mhd
_Only 40% does Test Driven Development (TDD)? It means that 60% of the
developers don’t know if their code works or not!_

First of all, 40% is way too high. Wonder what kinds of conferences they
attend…

Also, even if you buy totally into the agile premises, knowing whether your
code works should be an effect of code coverage by unit tests; whether those
tests are written up-front or not isn't an issue at all.

And finally: No, we're not "professional". This is _Hacker_ News.

~~~
jgalvez
Most successful, foundational Unix software is developed on a mailing-list,
with a changelog.txt file in the HEAD. Just saying.

~~~
mhd
I don't quite get the connection to TDD/Agile, but most current Unix projects
actually have two files, one ChangeLog (most of them without the DOSism
".txt"), one NEWS. One for the commits, one for user-readable updates since
the last release. Always found that preferable to a mixed document, where you
have to filter out the major changes (or weren't able to see individual bug
fixes).

~~~
jgalvez
The point I was trying to make is that big projects can evolve and be managed
in a very simple manner, which is the case for many pieces of Unix software.
The details regarding changelog nomenclature vary, and of course there may be
additional plain text files which are still simply kept in the HEAD.

~~~
mhd
True. Although, to play devil's advocate, just because you can do projects in
other ways doesn't mean that you couldn't do them better in Agile. (Don't get
me wrong, I think that Unix projects have a better track record than Agile
projects.)

Personally I think the biggest benefit of "Agile" is that it's established
enough to sell it to customers if you're a consultant. As no two Agile
processes/teams look the same (hence the name), you've got a lot of freedom.
If you came to a big company without that name brand recognition of "XP",
you'd probably be forced to do it with ITIL, the Unified Process or other
Godzilla-like monstrosities.

Having been there, I feel their pain.

------
jefffoster
"Be professional" would be a better aim.

In my opinion, the most important thing is for the developers to actually
care about the code. Whether you test first, test later, or just simplify the
code until it's obviously correct [1] really doesn't matter; just care enough
to want bug-free software.

[1] "There are two ways of constructing a software design: One way is to make
it so simple that there are obviously no deficiencies, and the other way is to
make it so complicated that there are no obvious deficiencies. The first
method is far more difficult." (Tony Hoare)

~~~
henrikschroder
At the end of the day, TDD is but one tool in the toolbox. And as always, you
have to use tools that fit the actual job in question, there's no golden
hammer.

------
lhnz
> Only 40% does Test Driven Development (TDD)? It means that 60% of the
> developers don’t know if their code works or not!

What?! No. No, it doesn't mean that.

~~~
marknutter
Plus it doesn't address the developers who are writing tests, but just not
_before_ they write their code.

~~~
gte910h
Or the people who are doing code reviews, or many other ways shown to be more
efficacious than unit tests at catching defects.

------
devmonk
Some points:

\- True TDD requires 100% test coverage of the code the developer using TDD
has written or altered. I've seen reports from companies or organizations
claiming they do TDD that aim for at least 60%. That isn't _true_ TDD, in the
Kent Beck definition of the term. That's not to say these people are wrong.
In fact, because they have a better balance between test code and production
code, they can change the functionality of code more easily.

\- Many developers attempting to practice TDD don't write the tests well
enough to cover every possible argument/configuration for the various methods
they are testing. That's not TDD.

\- Many developers attempting to practice TDD won't test trivial methods like
getters and setters. That's not TDD.

\- Many developers attempting to practice TDD write tests that duplicate the
parts of the code being tested. While there are valid reasons for this, it
results in frustration for the developer who needs to change 20 tests just to
change one bit of functionality. Avoiding this in many cases can take an
enormous amount of discipline and time, ensuring proper modularization,
mocking, and refactorization of both production code and tests.
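To make the getters-and-setters point concrete, here is the kind of trivial
test strict TDD would still demand (a Python sketch; the `User` class is
invented purely for illustration):

```python
# Sketch of the trivial test most developers skip. Under the strict
# Kent Beck definition, even a plain getter/setter pair gets a test.

class User:
    def __init__(self):
        self._name = ""

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        self._name = value

# The test: a simple round-trip through the setter and getter.
u = User()
u.name = "Ada"
assert u.name == "Ada"
```

Writing this feels like busywork, which is exactly why so many teams that
call themselves TDD shops quietly leave it out.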

So, why do people claim to be doing TDD when they aren't? Because there is no
good, widely known term for writing tests before writing code only some of
the time, duplicating functionality across tests, and leaving some
functionality untested.

In some way, writing and maintaining tests is a lot like auto maintenance. TDD
is like keeping your car in pristine condition. It is super-shiny because it
is always just-washed and sparkly, and it runs like a champ. But in the end,
the primary reason for the car is to get you from point A to point B.

~~~
alextgordon
_Many developers attempting to practice TDD don't write the tests well enough
to cover every possible argument/configuration for the various methods they
are testing. That's not TDD._

That's ridiculous and impractical. What if my function fails for values of t
>= 946684800?

~~~
devmonk
If you have a conditional or some other piece of code that would act
differently for t >= 946684800, then per TDD, you'd need a test for it to be
true TDD. If you don't have a conditional or different behavior that would
occur with a value that large then you don't need a test, per TDD. However, if
you find that values of t >= 946684800 cause a bug, then per TDD you'd write a
test for it, then write the logic to handle it.
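That bug-first cycle can be sketched in a few lines of Python. The function
name and the century labels are invented for illustration; 946684800 happens
to be the Unix timestamp for 2000-01-01T00:00:00 UTC, which makes it a
plausible boundary for exactly this kind of bug.

```python
# Hypothetical sketch of the TDD fix cycle: the bug report for
# t >= 946684800 is captured as a test first, watched fail, and only
# then is the conditional below written to make it pass.

def century_of(t):
    # The branch added to fix the boundary bug (946684800 = 2000-01-01 UTC).
    if t >= 946684800:
        return 21
    return 20

# The tests, in the order TDD would have them written:
assert century_of(946684799) == 20  # existing behaviour, still passes
assert century_of(946684800) == 21  # the bug report, captured as a test
```

The point is that TDD never asks you to test values that don't change
behavior; it asks for a test per behavioral branch, including the ones you
discover via bug reports.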

I'm _not_ promoting real TDD, btw. Real TDD is fine as an ideal, is
achievable in many circumstances, and may even make perfect sense on an
ongoing basis in some environments, but it isn't practiced at anywhere near
the rate the OP (in the linked post) stated, and developers who heartily
promote TDD _usually_ don't fully understand its implications when practiced
fully, or what doing _true_ TDD really involves. I'm not against writing
tests, but at some point, you need to relax.

------
jasonkester
Does anybody actually do Test Driven Development for the web? Looking through
the stack of features and bugs I've run through in the last couple days, I
can't find even _one_ of them for which you could write a unit test.

\- "Timeline ticks don't line up with grid when changing scale"

\- "Resizing shapes is jerky in FireFox"

\- "Need a button you can hit to pull up a simple Help screen"

\- "Add an opacity slider for shapes"

Unit testing rendering correctness during mouse actions? Unit testing for UI
features? Unit testing for CSS? Hacka please.

If you're doing little comp-sci type things on the server then sure you can do
TDD. It makes sense in the context of building a string library or wrapping
somebody else's API. In the web startup world though, how often are you
actually doing that stuff?

~~~
kevingadd
TDD doesn't necessarily mean unit tests. We write tons of functional
integration tests to make sure that stuff like you describe works, using
Selenium, YUITest, and the like. It's extremely valuable, since the
alternative for us is manually testing every single feature on the site
anytime something changes.

~~~
jasonkester
How much time would you say you spend writing those tests? Maintaining them?
Just capturing a complex UI interaction that you can verify at a glance in
person is next to impossible with the tools you mention. Unless they've made
some serious progress in the last few months.

For the issues I listed above, how many of them do you actually consider to
be testable, using TDD's "write your tests first" definition of the word?

------
cawhitworth
Just a point - having tests in your code doesn't mean you're doing TDD.

Doing TDD - that is, writing tests up-front - forces you to not only consider
the correctness criteria for your code (and gives you the confidence that the
code, once written, works correctly), but it also forces you to design your
code for test, which enforces separation between components (so that they're
individually testable), generation of sane interfaces (so that simple mock
objects can be written) and so forth. It's not just about making code that you
can be confident in, it's about making saner code for the long term, too.
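That design-for-test effect can be sketched in a few lines of Python (all
names invented): the logic depends on a narrow interface rather than on the
system clock directly, so a one-line mock is enough to test it.

```python
# Sketch: separating logic from I/O behind a small interface, which is
# the kind of structure writing tests up-front tends to force.

class Clock:
    """Narrow interface: the only thing the logic needs from the system."""
    def now(self):
        raise NotImplementedError

class FixedClock(Clock):
    """A trivial mock, possible only because the interface is sane."""
    def __init__(self, hour):
        self.hour = hour

    def now(self):
        return self.hour

def greeting(clock):
    # Logic depends on the Clock interface, not on time.localtime().
    return "Good morning" if clock.now() < 12 else "Good afternoon"

assert greeting(FixedClock(9)) == "Good morning"
assert greeting(FixedClock(15)) == "Good afternoon"
```

Without the interface, testing `greeting` would mean controlling the real
clock; with it, the test is two lines.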

~~~
ZeroMinx
Yes. This article seems to confuse TDD with having tests. It's talking about
having tests, but referring to TDD. You can have a solid set of unit tests
even if you didn't do TDD.

------
gaius
One word: Sudoku

[http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-solvers.html](http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-solvers.html)

------
tuomas-mbm
Hi all, I am the author of the blog post.

It seems that the blog post got attention before it was properly reviewed.
Damn you guys are fast :-)

The idea was not to push TDD or promote it in any way. It was a rant to devs
who don't do unit tests. Someone else needs to one day fix their untested code
without knowing if changing something in X will break Z and Y.

Even the title of the post is changed now. Seems to be too late though.

By the way, your comments are great, they gave me many thoughts.

------
Tomer
I'm working on a huge legacy spaghetti codebase with no option to write
effective integration tests. Testing that my method returns some correct
string/int etc. will do me nothing; the whole system is huge with many
integration points, and there's no infra for writing integration tests and no
time to build it. What should I do? (I love TDD, and tests in general.)

~~~
timclark
You can make things better if you are prepared to make the effort, the
politics can sometimes be harder than the technical work.

It might be worth seeking out a copy of Working Effectively with Legacy Code
by Michael C. Feathers.

~~~
Tomer
thanks!!

------
bhiggins
Be professional, don't push your religion on other people.

